CN104205171A - System and method for avatar generation, rendering and animation - Google Patents
System and method for avatar generation, rendering and animation
- Publication number
- CN104205171A CN104205171A CN201280071879.8A CN201280071879A CN104205171A CN 104205171 A CN104205171 A CN 104205171A CN 201280071879 A CN201280071879 A CN 201280071879A CN 104205171 A CN104205171 A CN 104205171A
- Authority
- CN
- China
- Prior art keywords
- avatar
- facial
- parameter
- face
- range
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention provides a video communication system that replaces actual live images of the participating users with animated avatars. The system allows generation, rendering and animation of a two-dimensional (2-D) avatar of a user's face. The 2-D avatar represents the user's basic face shape and key facial characteristics, including, but not limited to, the position and shape of the eyes, nose, mouth, and face contour. The system further provides adaptive rendering for display, allowing the 2-D avatar to be displayed at different scales on the different-sized displays of the associated user devices.
Description
Technical field
The present disclosure relates to video communication and interaction, and, more particularly, to systems and methods for avatar generation, rendering and animation for use in video communication and interaction.
Background
The increasing variety of functionality available in mobile devices has created a desire among users to communicate via video in addition to simple calls. For example, users may initiate "video calls," "videoconferencing," etc., wherein a camera and microphone in a device transmit the user's audio and real-time video to one or more recipients such as other mobile devices, desktop computers, videoconferencing systems, etc. The communication of real-time video may involve the transmission of substantial amounts of data (e.g., depending on the technology of the camera, the particular video codec employed to process the real-time image information, etc.). Given the bandwidth limitations of existing 2G/3G wireless technologies and the still limited availability of emerging 4G wireless technology, the proposition of many device users conducting concurrent video calls places a large burden on the bandwidth of the existing wireless communication infrastructure, which may negatively impact the quality of the video calls.
Brief description of the drawings
Features and advantages of various embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the drawings, wherein like numerals designate like parts, and in which:
Fig. 1A illustrates an example device-to-device system consistent with various embodiments of the present disclosure;
Fig. 1B illustrates an example virtual space system consistent with various embodiments of the present disclosure;
Fig. 2 illustrates an example device consistent with various embodiments of the present disclosure;
Fig. 3 illustrates an example face detection module consistent with various embodiments of the present disclosure;
Figs. 4A-4C illustrate example facial marker parameters and generation of an avatar consistent with at least one embodiment of the present disclosure;
Fig. 5 illustrates an example avatar control module and avatar selection module consistent with various embodiments of the present disclosure;
Fig. 6 illustrates an example system implementation consistent with at least one embodiment of the present disclosure; and
Fig. 7 is a flowchart of example operations consistent with at least one embodiment of the present disclosure.
Although the following Detailed Description proceeds with reference to illustrative embodiments, many alternatives, modifications and variations thereof will be apparent to those skilled in the art.
Detailed description
Some systems and methods allow users to communicate and interact with one another, wherein a user may select an avatar to represent himself or herself. The avatar model, and the animation of such avatar models, may be critical to the user experience during communication. In particular, a fast animation response (real-time or near real-time) and an accurate and/or vivid representation of the user's face and facial expressions are preferred.
Some systems and/or methods allow the generation and rendering of three-dimensional (3-D) avatar models for use during communication. For example, some known methods include laser scanning, model-based photograph fitting, manual generation by a graphic designer or artist, etc. Known 3-D avatar generation systems and methods, however, may have drawbacks. In particular, a 3-D avatar model may generally include thousands of vertices and triangles in order to keep animation of the model smooth during communication, and rendering of the 3-D avatar model may require considerable computing input and power. Additionally, the generation of a 3-D avatar may require manual adjustment to improve the visual effect when used during communication and interaction, and ordinary users may find it difficult to create a relatively robust 3-D avatar model on their own.
Many users may utilize mobile computing devices, such as smartphones, to communicate and interact using avatars. However, mobile computing devices may have limited computing resources and/or storage, and, as such, may fail to fully provide satisfying avatar communication and interaction for users, particularly users of 3-D avatars.
By way of overview, the present disclosure is generally directed to systems and methods for communicating and interacting using interactive avatars. Systems and methods consistent with the present disclosure generally provide avatar generation and rendering for use in video communication and interaction between a local user at an associated local user device and a remote user at an associated remote user device. More specifically, the system allows generation, rendering and animation of a two-dimensional (2-D) avatar of a user's face, wherein the 2-D avatar represents the user's basic face shape and key facial characteristics, including, but not limited to, the position and shape of the eyes, nose, mouth, and face contour. The system is further configured to provide avatar animation based, at least in part, on key facial characteristics of the user detected in real time, or near real time, during active communication and interaction. The systems and methods further provide adaptive rendering for display, so as to display the 2-D avatar at different scales on the displays of the user devices during active communication and interaction. More specifically, the systems and methods may be configured to identify a scaling factor for the 2-D avatar corresponding to the different-sized displays of the user devices, thereby preventing distortion of the 2-D avatar when displayed on the various displays of the user devices.
In one embodiment, an application is activated in a device coupled to a camera. The application may be configured to allow a user to generate a 2-D avatar based on the user's face and facial characteristics for display on a remote device, in a virtual space, etc. The camera may be configured to start capturing images, after which face detection is performed on the captured images and facial characteristics are determined. Avatar selection is then performed, wherein the user may choose between a predefined 2-D avatar and the generation of a 2-D avatar based on the user's facial characteristics. Any detected face/head movements, including movement of one or more of the user's facial characteristics (including, but not limited to, eyes, nose and mouth), and/or changes in facial features, are then converted into parameters usable for animating the avatar on at least one other device, within a virtual space, etc.
The device may then be configured to initiate communication with at least one other device, a virtual space, etc. For example, the communication may be established over a 2G, 3G or 4G cellular connection. Alternatively, the communication may be established over the Internet via a WiFi connection. After the connection is established, a scaling factor is determined so as to allow proper display of the selected 2-D avatar on the at least one other device during communication and interaction between the devices. At least one of the avatar selection, the avatar parameters and the scaling factor may then be transmitted. In one embodiment, at least one of a remote avatar selection and remote avatar parameters is received. The remote avatar selection may cause the device to display an avatar, while the remote avatar parameters may cause the device to animate the displayed avatar. Audio communication accompanies the avatar animation via known methods.
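As a rough illustration of the information exchanged after connection setup, the following Python sketch models the avatar selection, avatar parameters and scaling factor as a single message structure. The field names and the JSON encoding are assumptions made for this example; the disclosure does not specify a wire format.

```python
from dataclasses import dataclass, field
import json

@dataclass
class AvatarMessage:
    """Hypothetical payload carrying avatar selection, scale and parameters."""
    avatar_selection: str          # e.g., identifier of a predefined avatar
    scale_factor: float            # display scaling factor negotiated per device
    avatar_parameters: dict = field(default_factory=dict)  # e.g., key point data

    def encode(self) -> bytes:
        # JSON is used here only for readability.
        return json.dumps(self.__dict__).encode("utf-8")

# Example: a remote avatar selection plus per-frame animation parameters.
msg = AvatarMessage(
    avatar_selection="predefined_avatar_3",
    scale_factor=0.75,
    avatar_parameters={"mouth_open": 0.4, "head_yaw": -5.0},
)
payload = msg.encode()
```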
Systems and methods consistent with the present disclosure may provide an improved experience for users communicating and interacting with other users via mobile computing devices such as smartphones. In particular, compared with known 3-D avatar systems and methods, the present system provides the advantage of utilizing a simpler 2-D avatar model generation and rendering approach, which requires far less computing input and power. Additionally, the present system provides real-time or near real-time animation of the 2-D avatar.
Fig. 1A illustrates a device-to-device system 100 consistent with various embodiments of the present disclosure. The system 100 may generally include devices 102 and 112 communicating via a network 122. The device 102 includes at least a camera 104, a microphone 106 and a display 108. The device 112 includes at least a camera 114, a microphone 116 and a display 118. The network 122 includes at least one server 124.
The devices 102 and 112 may include various hardware platforms capable of wired and/or wireless communication. For example, the devices 102 and 112 may include, but are not limited to, videoconferencing systems, desktop computers, laptop computers, tablet computers, smartphones (e.g., iPhones, Android-based phones, Blackberries, Symbian-based phones, Palm-based phones, etc.), cellular handsets, etc.
The cameras 104 and 114 include any device for capturing digital images representative of an environment that includes one or more persons, and may have adequate resolution for face analysis of the one or more persons in the environment as described herein. For example, the cameras 104 and 114 may include still cameras (e.g., cameras configured to capture still photographs) or video cameras (e.g., cameras configured to capture moving images comprised of multiple frames). The cameras 104 and 114 may be configured to operate using light in the visible spectrum, or with other portions of the electromagnetic spectrum not limited to the infrared spectrum, ultraviolet spectrum, etc. The cameras 104 and 114 may be incorporated within the devices 102 and 112, respectively, or may be separate devices configured to communicate with the devices 102 and 112 via wired or wireless communication. Specific examples of the cameras 104 and 114 may include wired (e.g., Universal Serial Bus (USB), Ethernet, Firewire, etc.) or wireless (e.g., WiFi, Bluetooth, etc.) web cameras as may be associated with computers, video monitors, etc., mobile device cameras (e.g., cell phone or smartphone cameras integrated in, for example, the previously discussed example devices), integrated laptop computer cameras, integrated tablet computer cameras (e.g., iPad, Galaxy Tab, and the like), etc.
The devices 102 and 112 may further include microphones 106 and 116. The microphones 106 and 116 include any device configured to sense sound. The microphones 106 and 116 may be integrated within the devices 102 and 112, respectively, or may interact with the devices 102, 112 via wired or wireless communication such as described in the above examples regarding the cameras 104 and 114. The displays 108 and 118 include any device configured to display text, still images, moving images (e.g., video), user interfaces, graphics, etc. The displays 108 and 118 may be integrated within the devices 102 and 112, respectively, or may interact with the devices via wired or wireless communication such as described in the above examples regarding the cameras 104 and 114.
In one embodiment, the displays 108 and 118 are configured to display avatars 110 and 120, respectively. As referenced herein, an avatar is defined as a graphical representation of a user in either two dimensions (2-D) or three dimensions (3-D). Avatars do not have to resemble the looks of the user, and thus, while avatars can be lifelike representations, they can also take the form of drawings, cartoons, sketches, etc. As shown, the device 102 may display an avatar 110 representing the user of the device 112 (e.g., a remote user), and likewise, the device 112 may display an avatar 120 representing the user of the device 102. In this way, users may view a representation of other users without having to exchange the large amounts of information generally involved with device-to-device communication employing live images.
The network 122 may include various second-generation (2G), third-generation (3G), fourth-generation (4G) cellular-based data communication technologies, Wi-Fi wireless data communication technology, etc. The network 122 includes at least one server 124 configured to establish and maintain communication connections when using these technologies. For example, the server 124 may be configured to support Internet-related communication protocols such as the Session Initiation Protocol (SIP) for creating, modifying and terminating two-party (unicast) and multi-party (multicast) sessions, the Interactive Connectivity Establishment (ICE) protocol for presenting a framework that allows protocols to be built on top of bytestream connections, the Session Traversal Utilities for NAT (STUN) protocol for allowing applications operating through a network address translator (NAT) to discover the presence of other NATs, and the IP addresses and ports allocated for an application's User Datagram Protocol (UDP) connections to connect to remote hosts, the Traversal Using Relays around NAT (TURN) protocol for allowing elements behind a NAT or firewall to receive data over Transmission Control Protocol (TCP) or UDP connections, etc.
Fig. 1B illustrates a virtual space system 126 consistent with various embodiments of the present disclosure. The system 126 may employ the devices 102, 112 and the server 124. The devices 102, 112 and the server 124 may continue to communicate in a manner similar to that illustrated in Fig. 1A, but user interaction may take place in a virtual space 128 instead of in a device-to-device format. As referenced herein, a virtual space may be defined as a digital simulation of a physical location. For example, the virtual space 128 may resemble an outdoor location such as a city, road, sidewalk, field, forest, island, etc., or an indoor location such as an office, house, school, mall, store, etc.
Users, represented by avatars, may appear to interact in the virtual space 128 as in the real world. The virtual space 128 may exist on one or more servers coupled to the Internet, and may be maintained by a third party. Examples of virtual spaces include virtual offices, virtual meeting rooms, virtual worlds like Second Life, massively multiplayer online role-playing games (MMORPGs) like World of Warcraft, massively multiplayer online real-life games (MMORLGs) like The Sims Online, etc. In the system 126, the virtual space 128 may contain a plurality of avatars corresponding to different users. Instead of displaying avatars, the displays 108 and 118 may display compressed (e.g., smaller) versions of the virtual space (VS) 128. For example, the display 108 may display a perspective view of what the avatar corresponding to the user of the device 102 "sees" in the virtual space 128. Similarly, the display 118 may display a perspective view of what the avatar corresponding to the user of the device 112 "sees" in the virtual space 128. Examples of what avatars might see in the virtual space 128 may include, but are not limited to, virtual structures (e.g., buildings), virtual vehicles, virtual objects, virtual animals, other avatars, etc.
Fig. 2 illustrates an example device 102 in accordance with various embodiments of the present disclosure. While only the device 102 is described, the device 112 (e.g., a remote device) may include resources configured to provide the same or similar functions. As previously discussed, the device 102 is shown including the camera 104, the microphone 106 and the display 108. The camera 104 and the microphone 106 may provide input to a camera and audio framework module 200. The camera and audio framework module 200 may include custom, proprietary, known and/or after-developed audio and video processing code (or instruction sets) that is generally well-defined and operable to control at least the camera 104 and the microphone 106. For example, the camera and audio framework module 200 may cause the camera 104 and the microphone 106 to record images and/or sounds, may process images and/or sounds, may cause images and/or sounds to be reproduced, etc. The camera and audio framework module 200 may vary depending on the device 102 and, more particularly, the operating system (OS) running in the device 102. Example operating systems include iOS, Android, Blackberry OS, Symbian, Palm OS, etc. A speaker 202 may receive audio information from the camera and audio framework module 200 and may be configured to reproduce local sounds (e.g., to provide audio feedback of the user's voice) and remote sounds (e.g., the sound of the other party or parties engaged in a telephone call, video call or interaction in a virtual place).
The device 102 may further include a face detection module 204 configured to identify and track a head, face and/or facial region within the image(s) provided by the camera 104, and to determine one or more facial characteristics of the user (i.e., facial characteristics 206). For example, the face detection module 204 may include custom, proprietary, known and/or after-developed face detection code (or instruction sets), hardware and/or firmware that is generally well-defined and operable to receive a standard format image (e.g., but not limited to, an RGB color image) and to identify, at least to a certain extent, a face in the image.
The face detection module 204 may also be configured to track the detected face through a series of images (e.g., video frames at 24 frames per second) and to determine a head position based on the detected face, as well as changes (e.g., movements) in the user's facial characteristics (e.g., facial characteristics 206). Known tracking systems that may be employed by the face detection module 204 may include particle filtering, mean shift, Kalman filtering, etc., each of which may utilize edge analysis, sum-of-square-difference analysis, feature point analysis, histogram analysis, skin tone analysis, etc.
The face detection module 204 may also include custom, proprietary, known and/or after-developed facial characteristics code (or instruction sets) that is generally well-defined and operable to receive a standard format image (e.g., but not limited to, an RGB color image) and to identify, at least to a certain extent, one or more facial characteristics 206 in the image. Such known facial characteristics systems include, but are not limited to, the CSU Face Identification Evaluation System by Colorado State University, and the standard Viola-Jones boosting cascade framework, which may be found in the public Open Source Computer Vision (OpenCV) package.
As discussed in greater detail herein, the facial characteristics 206 may include features of the face, including, but not limited to, the location and/or shape of facial landmarks such as the eyes, nose, mouth and face contour, as well as the movement of such landmarks. In one embodiment, avatar animation may be based on sensed facial actions (e.g., changes in the facial characteristics 206). The corresponding feature points on the avatar's face may follow or mimic the movements of the real person's face, which is known as "expression cloning" or "performance-driven facial animation."
The face detection module 204 may also be configured to recognize an expression associated with the detected features (e.g., identifying whether a previously detected face is happy, sad, smiling, frowning, surprised, excited, etc.). Thus, the face detection module 204 may further include custom, proprietary, known and/or after-developed facial expression detection and/or identification code (or instruction sets) that is generally well-defined and operable to detect and/or identify expressions in a face. For example, the face detection module 204 may determine the size and/or position of facial features (e.g., eyes, nose, mouth, etc.) and may compare these facial features to a facial feature database which includes a plurality of sample facial features with corresponding facial feature classifications (e.g., smiling, frowning, excited, sad, etc.).
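One simple way to realize such a comparison against a database of classified samples is a nearest-neighbor match over a feature vector; the vector layout, the sample values and the Euclidean distance metric below are illustrative assumptions only, as a minimal sketch.

```python
import numpy as np

# Hypothetical sample database: feature vectors (e.g., normalized sizes and
# positions of the mouth and eyes) paired with expression classifications.
SAMPLES = np.array([
    [0.30, 0.10, 0.45],   # mouth width, mouth openness, eye openness
    [0.22, 0.02, 0.40],
    [0.25, 0.30, 0.55],
])
LABELS = ["smiling", "frowning", "surprised"]

def classify_expression(features: np.ndarray) -> str:
    """Return the classification of the closest sample (Euclidean distance)."""
    distances = np.linalg.norm(SAMPLES - features, axis=1)
    return LABELS[int(np.argmin(distances))]

print(classify_expression(np.array([0.29, 0.12, 0.44])))  # -> "smiling"
```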
The device 102 may also include an avatar selection module 208 configured to allow a user of the device 102 to select an avatar for display on a remote device. The avatar selection module 208 may include custom, proprietary, known and/or after-developed user interface construction code (or instruction sets) that is generally well-defined and operable to present different avatars to a user so that the user may select one of the avatars.
In one embodiment, the avatar selection module 208 may be configured to allow a user of the device 102 to select one or more predefined avatars stored within the device 102, or to select the option of having an avatar generated based on the user's detected facial characteristics 206. Both the predefined and the generated avatars may be two-dimensional (2-D) avatars, wherein, as described in greater detail herein, a predefined avatar is model-based and a generated 2-D avatar is sketch-based.
Predefined avatars may allow all devices to have the same avatars, so that during interaction only the selection of an avatar (e.g., the identification of a predefined avatar) needs to be communicated to a remote device or virtual space, which reduces the amount of information that needs to be exchanged. Generated avatars may be stored within the device 102 for use during future communications. Avatars may be selected prior to establishing communication, but may also be changed during the course of an active communication. Thus, it may be possible to send or receive an avatar selection at any point during the communication, and the receiving device may change the displayed avatar in accordance with the received avatar selection.
The device 102 may further include an avatar control module 210 configured to generate an avatar in response to a selection input from the avatar selection module 208. The avatar control module 210 may include custom, proprietary, known and/or after-developed avatar generation processing code (or instruction sets) that is generally well-defined and operable to generate a 2-D avatar based on the face/head position and/or the facial characteristics 206 detected by the face detection module 204.
The avatar control module 210 may also be configured to generate parameters for animating an avatar. Animation, as referred to herein, may be defined as altering the appearance of an image/model. A single animation may alter the appearance of a 2-D still image, or multiple animations may occur in sequence to simulate motion in the image (e.g., head turn, nodding, talking, frowning, smiling, laughing, etc.). A change in the position of the detected face and/or a facial characteristic 206 may be converted into parameters that cause the avatar's features to resemble the features of the user's face.
In one embodiment, the general expression of the detected face may be converted into one or more parameters that cause the avatar to exhibit the same expression. The expression of the avatar may also be exaggerated to emphasize the expression. Knowledge of the selected avatar may not be necessary when the avatar parameters are generally applicable to all of the predefined avatars. However, in one embodiment, the avatar parameters may be specific to the selected avatar, and thus may be altered if another avatar is selected. For example, human avatars may require different parameter settings (e.g., different avatar features may be altered) than animal avatars, cartoon avatars, etc., to demonstrate emotions like happiness, sadness, anger, surprise, etc.
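A minimal sketch of such a conversion, assuming landmark coordinates normalized by the inter-ocular distance; the specific landmark names, parameter names and ratios are illustrative assumptions, not taken from the disclosure. Note how a single `exaggeration` multiplier realizes the emphasized-expression variant mentioned above.

```python
import numpy as np

def landmarks_to_avatar_parameters(lm: dict, exaggeration: float = 1.0) -> dict:
    """Convert detected facial landmark positions into animation parameters.

    `lm` maps hypothetical landmark names to (x, y) coordinates. Distances
    are normalized by the inter-ocular distance so the parameters do not
    depend on the size of the face in the image.
    """
    iod = np.linalg.norm(np.subtract(lm["left_eye"], lm["right_eye"]))
    mouth_open = np.linalg.norm(
        np.subtract(lm["upper_lip"], lm["lower_lip"])) / iod
    brow_raise = np.linalg.norm(
        np.subtract(lm["left_brow"], lm["left_eye"])) / iod
    return {
        "mouth_open": min(1.0, mouth_open * exaggeration),
        "brow_raise": min(1.0, brow_raise * exaggeration),
    }
```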
The avatar control module 210 may include custom, proprietary, known and/or after-developed graphics processing code (or instruction sets) that is generally well-defined and operable to generate the parameters for animating the avatar selected by the avatar selection module 208 based on the face/head position and/or the facial characteristics 206 detected by the face detection module 204. For facial-feature-based animation methods, 2-D avatar animation may be done with, for example, image warping or image morphing. Oddcast is an example of a software resource usable for 2-D avatar animation.
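As a concrete illustration of warping-based animation, one of the techniques named above, the following sketch uses scikit-image's piecewise affine transform to move a set of key points on a 2-D avatar image. Anchoring the image corners and the choice of library are implementation decisions of this example, not of the disclosure.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_avatar(image: np.ndarray,
                key_points: np.ndarray,
                moved_points: np.ndarray) -> np.ndarray:
    """Warp a 2-D avatar image so `key_points` appear at `moved_points`.

    skimage's `warp` treats the transform as the inverse (output -> input)
    mapping, so the transform is estimated from the moved positions back to
    the original key point positions.
    """
    # Anchor the image corners so regions far from the face stay put.
    h, w = image.shape[:2]
    corners = np.array([[0, 0], [0, h - 1], [w - 1, 0], [w - 1, h - 1]])
    src = np.vstack([moved_points, corners])   # output-space positions
    dst = np.vstack([key_points, corners])     # input-space positions
    tform = PiecewiseAffineTransform()
    tform.estimate(src, dst)
    return warp(image, tform)
```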
Additionally, in the system 100, the avatar control module 210 may receive a remote avatar selection and remote avatar parameters usable for displaying and animating an avatar corresponding to a user at a remote device. The avatar control module 210 may cause a display module 212 to display the avatar 110 on the display 108. The display module 212 may include custom, proprietary, known and/or after-developed graphics processing code (or instruction sets) that is generally well-defined and operable to display and animate an avatar on the display 108 in accordance with the example device-to-device embodiment.
For example, the avatar control module 210 may receive a remote avatar selection and may interpret the remote avatar selection to correspond to a predetermined avatar. The display module 212 may then display the avatar 110 on the display 108. Moreover, the remote avatar parameters received in the avatar control module 210 may be interpreted, and commands may be provided to the display module 212 to animate the avatar 110.
The avatar control module 210 may further be configured to provide adaptive rendering of the remote avatar selection based on the remote avatar parameters. More specifically, the avatar control module 210 may include custom, proprietary, known and/or after-developed graphics processing code (or instruction sets) that is generally well-defined and operable to adaptively render the avatar 110 so as to appropriately fit the display 108 and prevent distortion of the avatar 110 when displayed to the user.
In one embodiment, more than two users may engage in the video call. When more than two users are interacting in a video call, the display 108 may be divided or segmented to allow more than one avatar corresponding to the remote users to be displayed simultaneously. Alternatively, in the system 126, the avatar control module 210 may receive information causing the display module 212 to display what the avatar corresponding to the user of the device 102 "sees" in the virtual space 128 (e.g., from the visual perspective of the avatar). For example, the display 108 may display buildings, objects, animals, other avatars, etc., represented in the virtual space 128. In one embodiment, the avatar control module 210 may be configured to cause the display module 212 to display a "feedback" avatar 214. The feedback avatar 214 represents how the selected avatar appears on the remote device, in a virtual space, etc. In particular, the feedback avatar 214 appears as the avatar selected by the user and may be animated using the same parameters generated by the avatar control module 210. In this way, the user may confirm what the remote user is seeing during their interaction.
The device 102 may further include a communication module 216 configured to transmit and receive information for selecting avatars, displaying avatars, animating avatars, displaying virtual place perspectives, etc. The communication module 216 may include custom, proprietary, known and/or after-developed communication processing code (or instruction sets) that is generally well-defined and operable to transmit avatar selections and avatar parameters, and to receive remote avatar selections and remote avatar parameters. The communication module 216 may also transmit and receive audio information corresponding to avatar-based interactions. The communication module 216 may transmit and receive the above information via the network 122, as previously described.
The device 102 may further include one or more processors 218 configured to perform operations associated with the device 102 and one or more of the modules included therein.
Fig. 3 illustrates an example face detection module 204a consistent with various embodiments of the present disclosure. The face detection module 204a may be configured to receive one or more images from the camera 104 via the camera and audio framework module 200, and to identify, at least to a certain extent, a face (or optionally multiple faces) in the image. The face detection module 204a may also be configured to identify and determine, at least to a certain extent, one or more facial characteristics 206 in the image. The facial characteristics 206 may be generated based on one or more of the facial parameters identified by the face detection module 204a, as described herein. The facial characteristics 206 may include features of the face, including, but not limited to, the location and/or shape of facial landmarks such as the eyes, nose, mouth and face contour.
In the illustrated embodiment, the face detection module 204a may include a face detection/tracking module 300, a face normalization module 302, a landmark detection module 304, a facial pattern module 306, a facial parameter module 308, a face posture module 310 and a facial expression detection module 312. The face detection/tracking module 300 may include custom, proprietary, known and/or after-developed face tracking code (or instruction sets) that is generally well-defined and operable to detect and identify, at least to a certain extent, the size and location of human faces in a still image or video stream received from the camera 104. Such known face detection/tracking systems include, for example, the techniques of Viola and Jones, published as Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," Accepted Conference on Computer Vision and Pattern Recognition, 2001. These techniques use a cascade of Adaptive Boosting (AdaBoost) classifiers to detect a face by scanning a window exhaustively over an image. The face detection/tracking module 300 may also track a face or facial region across multiple images.
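A minimal sketch of Viola-Jones-style detection using the pretrained Haar cascade shipped with the OpenCV package mentioned above; the scale and neighbor parameters are common defaults chosen for illustration.

```python
import cv2

# Load OpenCV's pretrained frontal-face Haar cascade (Viola-Jones style).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

def detect_faces(frame):
    """Return bounding boxes (x, y, w, h) of faces found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The classifier scans windows over the image at multiple scales.
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```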
The face normalization module 302 may include custom, proprietary, known and/or after-developed face normalization code (or instruction sets) that is generally well-defined and operable to normalize the identified face within the image. For example, the face normalization module 302 may be configured to rotate the image to align the eyes (if the coordinates of the eyes are known), crop the image to a smaller size generally corresponding to the size of the face, scale the image to make the distance between the eyes, nose and/or mouth constant, apply a mask that zeroes out pixels not within an oval that contains a typical face, histogram-equalize the image to smooth the distribution of gray values for the non-masked pixels, and/or normalize the image so that the non-masked pixels have mean zero and standard deviation one.
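The steps just listed compose naturally into a pipeline. The sketch below strings the rotate, crop, scale, mask and equalize steps together with OpenCV; the crop margins and oval proportions are assumptions made for this example.

```python
import cv2
import numpy as np

def normalize_face(gray, left_eye, right_eye, out_size=128):
    """Rotate, crop, scale, mask and histogram-equalize a grayscale face.

    `left_eye`/`right_eye` are (x, y) pixel coordinates of the eye centers.
    """
    # Rotate so the line between the eyes is horizontal.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))
    cx = (left_eye[0] + right_eye[0]) / 2.0
    cy = (left_eye[1] + right_eye[1]) / 2.0
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    aligned = cv2.warpAffine(gray, rot, (gray.shape[1], gray.shape[0]))

    # Crop around the eye midpoint and rescale so the inter-ocular
    # distance is constant in the output (crop geometry is an assumption).
    eye_dist = max(1.0, float(np.hypot(dx, dy)))
    half = int(1.5 * eye_dist)
    x0, y0 = max(0, int(cx) - half), max(0, int(cy) - half)
    face = aligned[y0:int(cy) + 2 * half, x0:int(cx) + half]
    face = cv2.resize(face, (out_size, out_size))

    # Zero out pixels outside an oval containing a typical face,
    # then equalize to smooth the gray-value distribution.
    mask = np.zeros_like(face)
    cv2.ellipse(mask, (out_size // 2, out_size // 2),
                (int(out_size * 0.45), int(out_size * 0.55)),
                0, 0, 360, 255, -1)
    face = cv2.bitwise_and(face, mask)
    return cv2.equalizeHist(face)
```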
The landmark detection module 304 may include custom, proprietary, known and/or after-developed landmark detection code (or instruction sets) that is generally well-defined and operable to detect and identify, at least to a certain extent, the various facial features of the face in the image. Implicit in landmark detection is that the face has already been detected, at least to some extent. Optionally, some degree of localization may have been performed (e.g., by the face normalization module 302) to identify/focus on the zones/areas of the image where landmarks can potentially be found. For example, the landmark detection module 304 may be based on heuristic analysis, and may be configured to identify and/or analyze the relative position, size and/or shape of the forehead, eyes (and/or the corners of the eyes), nose (e.g., the tip of the nose), chin (e.g., the tip of the chin), eyebrows, cheekbones, jaw and face contour. The eye corners and mouth corners may also be detected using Viola-Jones-based classifiers.
The facial pattern module 306 may include custom, proprietary, known and/or after-developed facial pattern code (or instruction sets) that is generally well-defined and operable to identify and/or generate a facial pattern based on the identified facial landmarks in the image. As may be appreciated, the facial pattern module 306 may be considered a portion of the face detection/tracking module 300.
The facial pattern module 306 may include a facial parameter module 308 configured to generate facial parameters of the user's face based, at least in part, on the facial landmarks identified in the image. The facial parameter module 308 may include custom, proprietary, known and/or after-developed facial pattern and parameter code (or instruction sets) that is generally well-defined and operable to identify and/or generate key points and associated edges connecting at least some of the key points, based on the facial landmarks identified in the image.
As described in greater detail herein, the generation of a 2-D avatar by the avatar control module 210 may be based, at least in part, on the facial parameters generated by the facial parameter module 308, including the key points and the associated connecting edges defined between the key points. Similarly, the animation and rendering of a selected avatar, including both predefined and generated avatars, by the avatar control module 210 may be based, at least in part, on the facial parameters generated by the facial parameter module 308.
The face posture module 310 may include custom, proprietary, known and/or after-developed facial orientation detection code (or instruction sets) that is generally well-defined and operable to detect and identify, at least to a certain extent, the posture of the face in the image. For example, the face posture module 310 may be configured to establish the posture of the face in the image with respect to the display 108 of the device 102. More specifically, the face posture module 310 may be configured to determine whether the user's face is directed toward the display 108 of the device 102, thereby indicating whether the user is observing the content being displayed on the display 108.
The facial expression detection module 312 may include custom, proprietary, known and/or after-developed facial expression detection and/or identification code (or instruction sets) that is generally well-defined and operable to detect and/or identify facial expressions of the user in the image. For example, the facial expression detection module 312 may determine the size and/or position of facial features (e.g., forehead, chin, eyes, nose, mouth, cheeks, teeth, etc.) and compare the facial features to a facial feature database which includes a plurality of sample facial features with corresponding facial feature classifications.
Figs. 4A-4C illustrate example facial marker parameters and generation of an avatar consistent with at least one embodiment of the present disclosure. As shown in Fig. 4A, face detection and tracking is performed on an image 400 of a user. As previously described, the face detection module 204 (including the face detection/tracking module 300, the face normalization module 302 and/or the landmark detection module 304, etc.) may be configured to detect and identify the size and location of the user's face, normalize the identified face, and/or detect and identify, at least to a certain extent, the various facial features of the face in the image. More specifically, the relative position, size and/or shape of the forehead, eyes (and/or the corners of the eyes), nose (e.g., the tip of the nose), chin (e.g., the tip of the chin), eyebrows, cheekbones, jaw and face contour may be identified and/or analyzed.
As shown in Fig. 4B, a facial pattern of the user's face, including facial parameters, may be identified in an image 402. More specifically, the facial parameter module 308 may be configured to generate facial parameters of the user's face based, at least in part, on the facial landmarks identified in the image. As shown, the facial parameters may include one or more key points 404 and associated edges 406 connecting one or more of the key points 404 to one another. For example, in the illustrated embodiment, the edge 406(1) may connect adjacent key points 404(1) and 404(2) to one another. The key points 404 and associated edges 406 form an overall facial pattern of the user based on the identified facial landmarks.
In one embodiment, the facial parameter module 308 may include custom, proprietary, known and/or after-developed facial parameter code (or instruction sets) that is generally well-defined and operable to generate the key points 404 and connecting edges 406 based on the identified facial landmarks (e.g., forehead, eyes, nose, mouth, chin, face contour, etc.) according to statistical geometric relations between one identified facial landmark, such as the forehead, and at least one other identified facial landmark, such as the eyes.
For example, in one embodiment, the key points 404 and associated edges 406 may be defined in a two-dimensional Cartesian coordinate system (the avatar being 2-D). More specifically, a key point 404 may be defined (e.g., coded) as {point, id, x, y}, where "point" represents the node name, "id" represents the index, and "x" and "y" are the coordinates. An edge 406 may be defined (e.g., coded) as {edge, id, n, p1, p2, ..., pn}, where "edge" represents the node name, "id" represents the edge index, "n" represents the number of key points contained (e.g., connected) by the edge 406, and p1-pn represent the point indices of the edge 406. For example, the code set {edge, 0, 5, 0, 2, 1, 3, 0} may be understood to represent that edge-0 contains (connects) 5 key points, wherein the connecting order of the key points is key point 0 to key point 2 to key point 1 to key point 3 and back to key point 0.
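This encoding can be captured directly in a few lines. The container types below are an illustrative Python rendering of the {point, ...} and {edge, ...} tuples described above, not a format mandated by the disclosure; the decoder reproduces the edge-0 example.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class KeyPoint:
    """{point, id, x, y}: an indexed node with 2-D Cartesian coordinates."""
    id: int
    x: float
    y: float

@dataclass
class Edge:
    """{edge, id, n, p1..pn}: an indexed edge connecting n key points."""
    id: int
    point_ids: List[int]  # connection order; first == last closes the loop

def decode_edge(code: Tuple) -> Edge:
    """Decode a tuple such as ('edge', 0, 5, 0, 2, 1, 3, 0)."""
    name, edge_id, n, *points = code
    assert name == "edge" and len(points) == n
    return Edge(id=edge_id, point_ids=list(points))

p0 = KeyPoint(id=0, x=120.0, y=96.0)
# Edge-0 connects 5 key points in the order 0 -> 2 -> 1 -> 3 -> 0.
edge0 = decode_edge(("edge", 0, 5, 0, 2, 1, 3, 0))
```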
Fig. 4C illustrates an example 2-D avatar 408 generated based on the identified facial landmarks and the facial parameters, including the key points 404 and edges 406. As shown, the 2-D avatar 408 may include sketch lines generally outlining the user's face shape, as well as key facial characteristics such as the eyes, nose, mouth, eyebrows and face contour.
Fig. 5 illustrates an example avatar control module 210a and avatar selection module 208a consistent with various embodiments of the present disclosure. The avatar selection module 208a may be configured to allow a user of the device 102 to select an avatar for display on a remote device. The avatar selection module 208a may include custom, proprietary, known and/or after-developed user interface construction code (or instruction sets) that is generally well-defined and operable to present different avatars to a user so that the user may select one of the avatars. In one embodiment, the avatar selection module 208a may be configured to allow a user of the device 102 to select one or more predefined 2-D avatars stored within an avatar database 500. As generally shown and described with reference to Figs. 4A-4C, the avatar selection module 208a may further be configured to allow the user to select the generation of a 2-D avatar. A generated 2-D avatar may be referred to as a sketch-based 2-D avatar, wherein, as opposed to having predefined key points, the key points and edges are generated from the user's face. In contrast, a predefined 2-D avatar may be referred to as a model-based 2-D avatar, wherein the key points are predefined and the 2-D avatar is not "customized" to the specific user's face.
As shown, the avatar control module 210a may include an avatar generation module 502 configured to generate a 2-D avatar in response to a user selection from the avatar selection module 208a indicating avatar generation. The avatar generation module 502 may include custom, proprietary, known and/or after-developed avatar generation processing code (or instruction sets) that is generally well-defined and operable to generate a 2-D avatar based on the facial characteristics 206 detected by the face detection module 204. More specifically, the avatar generation module 502 may generate the 2-D avatar 408 (shown in Fig. 4C) based on the identified facial landmarks and the facial parameters, including the key points 404 and edges 406. Upon generation of the 2-D avatar, the avatar control module 210a may further be configured to transmit a copy of the generated 2-D avatar to the avatar selection module 208a for storage in the avatar database 500, for use during future communications.
As may be appreciated, the avatar generation module 502 may be configured to receive a remote avatar selection and to generate an avatar based on remote avatar parameters. For example, the remote avatar parameters may include facial characteristics, including facial parameters (e.g., key points) of the remote user's face, wherein the avatar generation module 502 may be configured to generate a sketch-based avatar model. More specifically, the avatar generation module 502 may be configured to generate an avatar of the remote user based, at least in part, on the key points and the edges connecting one or more of the key points. The generated avatar of the remote user may then be displayed on the device 102.
The avatar control module 210a may further include an avatar rendering module 504 configured to provide adaptive rendering of a remote avatar selection based on the remote avatar parameters. More specifically, the avatar control module 210 may include custom, proprietary, known and/or after-developed graphics processing code (or instruction sets) that is generally well-defined and operable to adaptively render the avatar 110 so as to appropriately fit the display 108 and prevent distortion of the avatar 110 when displayed to the user.
In one embodiment, the avatar rendering module 504 may be configured to receive a remote avatar selection and the associated remote avatar parameters. The remote avatar parameters may include facial characteristics of the remote avatar selection, including facial parameters. The avatar rendering module 504 may be configured to identify display parameters of the remote avatar selection based, at least in part, on the remote avatar parameters. The display parameters may define a bounding box of the remote avatar selection, wherein the bounding box may be understood to refer to a default display size of the remote avatar 110. The avatar rendering module 504 may further be configured to identify display parameters (e.g., height and width) of the display 108 of the device 102, or of the display window in which the remote avatar 110 is to be displayed. The avatar rendering module 504 may further be configured to determine an avatar scaling factor based on the identified display parameters of the remote avatar selection and the identified display parameters of the display 108. The avatar scaling factor may allow the remote avatar 110 to be displayed on the display 108 at the appropriate scale (i.e., with no or little distortion) and position (i.e., the remote avatar 110 may be centered on the display 108).
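A minimal sketch of one way to compute such a scaling factor from the avatar's bounding box and the display (or window) dimensions; the uniform-scale-and-center policy is a natural reading of the paragraph above, though the disclosure does not spell out a formula. Applying one scale to both axes is what preserves the avatar's aspect ratio and thus prevents distortion.

```python
from typing import Tuple

def avatar_scaling(bbox_w: float, bbox_h: float,
                   disp_w: float, disp_h: float) -> Tuple[float, float, float]:
    """Return (scale, offset_x, offset_y) to fit and center the avatar.

    A single uniform scale is applied to both axes so the avatar's
    aspect ratio, and hence its shape, is preserved.
    """
    scale = min(disp_w / bbox_w, disp_h / bbox_h)
    offset_x = (disp_w - bbox_w * scale) / 2.0
    offset_y = (disp_h - bbox_h * scale) / 2.0
    return scale, offset_x, offset_y

# E.g., a 300x400 avatar bounding box on a 1080x1920 portrait display:
print(avatar_scaling(300, 400, 1080, 1920))  # -> (3.6, 0.0, 240.0)
```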
As may be appreciated, if the display parameters of the display 108 change (e.g., the user manipulates the device 102 so as to change the viewing orientation from portrait to landscape, or otherwise changes the dimensions of the display 108), the avatar rendering module 504 may be configured to determine a new scaling factor based on the new display parameters of the display 108, and the display module 212 may be configured to display the remote avatar 110 on the display 108 based, at least in part, on the new scaling factor. Similarly, if the remote user switches avatars during communication, the avatar rendering module 504 may be configured to determine a new scaling factor based on the display parameters of the new remote avatar selection, and the display module 212 may be configured to display the remote avatar 110 on the display 108 based, at least in part, on the new scaling factor.
Fig. 6 illustrates an example system implementation in accordance with at least one embodiment. A device 102' is configured to communicate wirelessly via a WiFi connection 600 (e.g., at work), a server 124' is configured to negotiate a connection between devices 102' and 112' via the Internet 602, and a device 112' is configured to communicate wirelessly via another WiFi connection 604 (e.g., at home). In one embodiment, a device-to-device avatar-based video call application is activated in the device 102'. Following avatar selection, the application may allow at least one remote device (e.g., the device 112') to be selected. The application may then cause the device 102' to initiate communication with the device 112'. Communication may be initiated with the device 102' transmitting a connection establishment request to the device 112' via an enterprise access point (AP) 606. The enterprise AP 606 may be an AP usable in a business setting, and may therefore support higher data throughput and more concurrent wireless clients than a home AP 614. The enterprise AP 606 may receive the wireless signal from the device 102' and may proceed to transmit the connection establishment request through various business networks via a gateway 608. The connection establishment request may then pass through a firewall 610, which may be configured to control information flowing into and out of the WiFi network 600.
The connection establishment request of the device 102' may then be processed by the server 124'. The server 124' may be configured for registration of IP addresses, authentication of destination addresses and NAT traversal, so that the connection establishment request may be directed to the correct destination on the Internet 602. For example, the server 124' may resolve the intended destination (e.g., the remote device 112') from the information in the connection establishment request received from the device 102', and may route the signal through the correct NATs and ports, and to the destination IP address, accordingly. Depending on the network configuration, these operations may only have to be performed during connection establishment.
In some instances, the operations may be repeated during the video call in order to provide notification to the NATs to keep the connection alive. After the connection has been established, a media and signal path 612 may carry the video (e.g., avatar selection and/or avatar parameters) and audio information to the home AP 614. The device 112' may then receive the connection establishment request and may be configured to determine whether to accept the request. Determining whether to accept the request may include, for example, presenting a visual narrative to the user of the device 112' inquiring as to whether to accept the connection request from the device 102'. Should the user of the device 112' accept the connection (e.g., accept the video call), the connection may be established. The cameras 104' and 114' may then be configured to start capturing images of the respective users of the devices 102' and 112', for use in animating the avatar selected by each user. The microphones 106' and 116' may then be configured to start recording audio from each user. As information exchange commences between the devices 102' and 112', the displays 108' and 118' may display and animate the avatars corresponding to the users of the devices 102' and 112'.
Fig. 7 is a flowchart of example operations in accordance with at least one embodiment. In operation 702, an application (e.g., an avatar-based voice call application) may be activated in a device. Activation of the application may be followed by selection of an avatar in operation 704. Selection of an avatar may include an interface being presented to the user by the application, the interface allowing the user to browse and select from the predefined avatar files stored in an avatar database. The interface may also allow the user to select to have an avatar generated. Whether the user has chosen to have an avatar generated may be determined in operation 706. If it is determined that the user has chosen to generate an avatar, as opposed to selecting a predefined avatar, a camera in the device may begin capturing images in operation 708. The images may be still images or live video (e.g., multiple images captured in sequence). In operation 710, image analysis may occur, starting with the detection/tracking of a face/head in the images. The detected face may then be analyzed in order to extract facial characteristics (e.g., facial landmarks, facial parameters, facial expression, etc.). In operation 712, an avatar is generated based, at least in part, on the detected face/head position and/or facial characteristics.
Following avatar selection, communication may be configured in operation 714. Communication configuration includes the identification of at least one remote device or a virtual space for participation in the video call. For example, the user may select from a list of remote users/devices stored within the application, stored in association with another system in the device (e.g., a contacts list in a smartphone, cell phone, etc.), or stored remotely, such as on the Internet (e.g., in a social media website like Facebook, LinkedIn, Yahoo, Google+, MSN, etc.). Alternatively, the user may choose to go online in a virtual space like Second Life.
In operation 716, communication may be initiated between the device and the at least one remote device or virtual space. For example, a connection establishment request may be transmitted to the remote device or virtual space. For the sake of explanation herein, it is assumed that the connection establishment request is accepted by the remote device or virtual space. In operation 718, the camera in the device may then begin capturing images. The images may be still images or live video (e.g., multiple images captured in sequence). In operation 720, image analysis may occur, starting with the detection/tracking of a face/head in the images. The detected face may then be analyzed in order to extract facial characteristics (e.g., facial landmarks, facial parameters, facial expression, etc.). In operation 722, the detected face/head position and/or facial characteristics are converted into avatar parameters. The avatar parameters are used to animate and render the selected avatar on the remote device or in the virtual space. In operation 724, at least one of the avatar selection or the avatar parameters may be transmitted.
In operation 726, avatars may be displayed and animated. In the instance of device-to-device communication (e.g., the system 100), at least one of a remote avatar selection or remote avatar parameters may be received from the remote device. An avatar corresponding to the remote user may then be displayed based on the received remote avatar selection, and may be animated and/or rendered based on the received remote avatar parameters. In the instance of virtual place interaction (e.g., the system 126), information may be received allowing the device to display what the avatar corresponding to the device user is seeing in the virtual space.
A determination may then be made in operation 728 as to whether the current communication is complete. If it is determined in operation 728 that the communication is not complete, operations 718-726 may repeat in order to continue to display and animate an avatar on the remote device based on the analysis of the user's face. Otherwise, in operation 730, the communication may be terminated. The video call application may also be terminated if, for example, no further video calls are to be made.
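Taken together, operations 716-730 amount to a capture-analyze-transmit-display loop. The sketch below is purely schematic: every callable is an injected stand-in for the modules described above, with invented names, so that the loop structure itself is the only claim being illustrated.

```python
def video_call_loop(capture_frame, analyze_face, to_avatar_parameters,
                    send, receive, render, call_complete):
    """Operations 716-730 as a loop; all callables are hypothetical stand-ins."""
    while not call_complete():                          # operation 728
        frame = capture_frame()                         # operation 718
        characteristics = analyze_face(frame)           # operation 720
        params = to_avatar_parameters(characteristics)  # operation 722
        send("avatar_parameters", params)               # operation 724
        remote_params = receive()                       # remote side's data
        render(remote_params)                           # operation 726
    # Falling out of the loop corresponds to termination in operation 730.
```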
While Fig. 7 illustrates various operations according to an embodiment, it is to be understood that not all of the operations depicted in Fig. 7 are necessary for other embodiments. Indeed, it is fully contemplated herein that, in other embodiments of the present disclosure, the operations depicted in Fig. 7, and/or other operations described herein, may be combined in a manner not specifically shown in any of the drawings, but still fully consistent with the present disclosure. Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure.
Various features, aspects and embodiments have been described herein. The features, aspects and embodiments are susceptible to combination with one another, as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations and modifications. Thus, the breadth and scope of the present invention should not be limited by any of the above-described example embodiments, but should be defined only in accordance with the following claims and their equivalents.
As used in any embodiment herein, the term "module" may refer to software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer-readable storage media. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., non-volatile) in memory devices. "Circuitry," as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), a system-on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smartphones, etc.
Any of the operations described herein may be implemented in a system that includes one or more storage media having stored thereon, individually or in combination, instructions that, when executed by one or more processors, perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU and/or other programmable circuitry. Thus, it is intended that the operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. The storage media may include any type of tangible medium, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs) and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, solid state disks (SSDs), magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software modules executed by a programmable control device. The storage media may be non-transitory.
The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. As will be understood by those skilled in the art, the features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification. The present disclosure should therefore be considered to encompass such combinations, variations, and modifications.
As described herein, various embodiments may be implemented using hardware elements, software elements, or any combination thereof. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASICs), programmable logic devices (PLDs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth.
Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
According to one aspect, there is provided a system for avatar generation, rendering, and animation during communications between a first user device and a remote user device. The system includes a camera configured to capture images, a communication module configured to initiate and establish communications between the first and remote user devices and to transmit and receive information between the first and remote user devices, and one or more storage media having stored thereon, individually or in combination, instructions that when executed by one or more processors result in one or more operations. The operations include selecting at least one of a model-based two-dimensional (2-D) avatar and a sketch-based 2-D avatar for use during the communication, initiating the communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, converting the facial characteristics to avatar parameters, and transmitting at least one of the avatar selection and the avatar parameters.
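As one way to picture what "transmitting at least one of the avatar selection and the avatar parameters" might carry over the wire, the following minimal sketch defines an assumed message structure. The enum, the field names, and the avatar_id index are illustrative assumptions, not formats defined by this disclosure.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional, Tuple

class AvatarType(Enum):
    MODEL_BASED = auto()   # a predefined 2-D avatar model, e.g., chosen from a database
    SKETCH_BASED = auto()  # a 2-D avatar sketched from the user's own facial parameters

@dataclass
class AvatarMessage:
    """What a device might transmit: the avatar selection, the avatar
    parameters derived from the detected face, or both."""
    selection: Optional[AvatarType] = None
    avatar_id: Optional[int] = None                          # index into an avatar store
    parameters: Optional[List[Tuple[float, float]]] = None   # e.g., key-point positions
```

Either field may be omitted, matching the "at least one of" language of the operations above.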
Another example system includes the foregoing components, and determining facial characteristics from the face includes detecting and identifying facial landmarks in the face. The facial landmarks include at least one of a forehead, chin, eyes, nose, mouth, and facial contour of the face in the image. Determining facial characteristics from the face further includes generating facial parameters based, at least in part, on the identified facial landmarks. The facial parameters include one or more key points and connecting edges formed between at least two key points of the one or more key points.
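The facial parameters described here, one or more key points plus connecting edges formed between pairs of them, map naturally onto a small graph-like structure. The layout below is an assumed sketch, not a format prescribed by this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FacialParameters:
    """Key points derived from identified facial landmarks, plus the
    connecting edges formed between pairs of those key points."""
    key_points: List[Tuple[float, float]] = field(default_factory=list)  # (x, y) image coordinates
    edges: List[Tuple[int, int]] = field(default_factory=list)           # index pairs into key_points

    def add_landmark_chain(self, points: List[Tuple[float, float]]) -> None:
        """Append a polyline of landmark points (e.g., a facial contour),
        connecting consecutive points of the chain with edges."""
        start = len(self.key_points)
        self.key_points.extend(points)
        for i in range(start, len(self.key_points) - 1):
            self.edges.append((i, i + 1))
```

Under this reading, each identified landmark, such as the mouth or the facial contour, would contribute one such chain of key points.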
Another example system includes the foregoing components, and the avatar selection and avatar parameters are used to generate an avatar on a remote device, the avatar being based on the facial characteristics.
Another example system includes the foregoing components, and the avatar selection and avatar parameters are used to generate an avatar in a virtual space, the avatar being based on the facial characteristics.
Another example system includes the foregoing components and instructions that when executed by one or more processors result in the additional operation of receiving at least one of a remote avatar selection and remote avatar parameters.
Another example system includes the foregoing components and further includes a display, wherein the instructions, when executed by one or more processors, result in the following additional operations: rendering, based on the remote avatar parameters, the remote avatar selection to allow an undistorted or less distorted display of the avatar based on the remote avatar selection, and displaying the avatar based on the rendered remote avatar selection.
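One plausible reading of rendering "to allow an undistorted or less distorted display", consistent with the adaptive rendering for differently sized displays described in this disclosure, is uniform scaling by a single factor. The helper below is a sketch under that assumption only.

```python
from typing import Tuple

def fit_avatar_to_display(avatar_w: float, avatar_h: float,
                          disp_w: float, disp_h: float) -> Tuple[float, float]:
    """Scale the rendered avatar by one factor on both axes so that it fits
    the target display; a single factor preserves the aspect ratio and so
    avoids stretching on displays of different sizes."""
    scale = min(disp_w / avatar_w, disp_h / avatar_h)
    return avatar_w * scale, avatar_h * scale
```

For example, a 100x100 avatar fit to a 320x240 display would scale uniformly to 240x240 rather than stretching to 320x240.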
Another example system includes the foregoing components and instructions that when executed by one or more processors result in the additional operation of animating the displayed avatar based on the remote avatar parameters.
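Animating the displayed avatar from a stream of remote avatar parameters could, for instance, blend each displayed key point toward its newly received position. The linear interpolation below is an assumed smoothing technique; the disclosure does not prescribe one.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def step_animation(shown: List[Point], received: List[Point],
                   alpha: float = 0.5) -> List[Point]:
    """Move each displayed key point a fraction alpha of the way toward the
    corresponding remote avatar parameter, smoothing frame-to-frame motion."""
    return [((1 - alpha) * x0 + alpha * x1, (1 - alpha) * y0 + alpha * y1)
            for (x0, y0), (x1, y1) in zip(shown, received)]
```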
According to another aspect, there is provided an apparatus for avatar generation, rendering, and animation during communications between a first user device and a remote user device. The apparatus includes: a communication module configured to initiate and establish communications between the first and remote user devices, and to transmit and receive information between the first and remote user devices; an avatar selection module configured to allow a user to select at least one of a model-based two-dimensional (2-D) avatar and a sketch-based 2-D avatar for use during the communication; a face detection module configured to detect a facial region in an image of the user, and to detect and identify one or more facial characteristics of the face; and an avatar control module configured to convert the facial characteristics to avatar parameters. The communication module is configured to transmit at least one of the avatar selection and the avatar parameters.
Another example apparatus includes the foregoing components, and the face detection module includes a landmark detection module configured to identify facial landmarks of the facial region in the image, the facial landmarks including at least one of a forehead, chin, eyes, nose, mouth, and facial contour of the face. The face detection module further includes a facial parameter module configured to generate facial parameters based, at least in part, on the identified facial landmarks, the facial parameters including one or more key points and connecting edges formed between at least two key points of the one or more key points.
Another example apparatus includes the foregoing components, and the avatar control module is configured to generate the sketch-based 2-D avatar based, at least in part, on the facial parameters.
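Generating the sketch-based 2-D avatar from the facial parameters can be pictured as tracing the connecting edges between key points. In the sketch below, draw_line stands in for any 2-D drawing primitive; both it and the key-point/edge layout (as sketched earlier) are assumptions for illustration.

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float]

def render_sketch_avatar(key_points: List[Point],
                         edges: List[Tuple[int, int]],
                         draw_line: Callable[[Point, Point], None]) -> None:
    """Trace a sketch-based 2-D avatar by drawing each connecting edge
    between its two key points with the supplied drawing primitive."""
    for i, j in edges:
        draw_line(key_points[i], key_points[j])
```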
Another example apparatus includes the foregoing components, and the avatar selection and avatar parameters are used to generate an avatar on the remote device, the avatar being based on the facial characteristics.
Another example apparatus includes the foregoing components, and the communication module is configured to receive at least one of a remote avatar selection and remote avatar parameters.
Another example apparatus includes the foregoing components and further includes a display configured to display an avatar based on the remote avatar selection.
Another example apparatus includes the foregoing components and further includes an avatar rendering module configured to render, based on the remote avatar parameters, the remote avatar selection to allow an undistorted or less distorted display of the avatar based on the remote avatar selection.
Another example apparatus includes the foregoing components, and the avatar control module is configured to animate the displayed avatar based on the remote avatar parameters.
According to another aspect, there is provided a method for avatar generation, rendering, and animation. The method includes selecting at least one of a model-based two-dimensional (2-D) avatar and a sketch-based 2-D avatar for use during communication, initiating the communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, converting the facial characteristics to avatar parameters, and transmitting at least one of the avatar selection and the avatar parameters.
Another example method includes the foregoing operations, and determining facial characteristics from the face includes detecting and identifying facial landmarks in the face. The facial landmarks include at least one of a forehead, chin, eyes, nose, mouth, and facial contour of the face in the image. Determining facial characteristics from the face further includes generating facial parameters based, at least in part, on the identified facial landmarks. The facial parameters include one or more key points and connecting edges formed between at least two key points of the one or more key points.
Another example method includes the foregoing operations, and the avatar selection and avatar parameters are used to generate an avatar on a remote device, the avatar being based on the facial characteristics.
Another example method includes the foregoing operations, and the avatar selection and avatar parameters are used to generate an avatar in a virtual space, the avatar being based on the facial characteristics.
Another example method includes the foregoing operations and further includes receiving at least one of a remote avatar selection and remote avatar parameters.
Another example method includes the foregoing operations and further includes rendering, based on the remote avatar parameters, the remote avatar selection to allow an undistorted or less distorted display of the avatar based on the remote avatar selection, and displaying the avatar based on the rendered remote avatar selection.
Another example method includes the foregoing operations and further includes animating the displayed avatar based on the remote avatar parameters.
According to another aspect, there is provided at least one computer accessible medium storing instructions which, when executed by one or more processors, may cause a computer system to perform operations for avatar generation, rendering, and animation. The operations include selecting at least one of a model-based two-dimensional (2-D) avatar and a sketch-based 2-D avatar for use during communication, initiating the communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, converting the facial characteristics to avatar parameters, and transmitting at least one of the avatar selection and the avatar parameters.
Another example computer accessible medium includes the foregoing operations, and determining facial characteristics from the face includes detecting and identifying facial landmarks in the face. The facial landmarks include at least one of a forehead, chin, eyes, nose, mouth, and facial contour of the face in the image. Determining facial characteristics from the face further includes generating facial parameters based, at least in part, on the identified facial landmarks. The facial parameters include one or more key points and connecting edges formed between at least two key points of the one or more key points.
Another example computer accessible medium includes the foregoing operations, and the avatar selection and avatar parameters are used to generate an avatar on a remote device, the avatar being based on the facial characteristics.
Another example computer accessible medium includes the foregoing operations, and the avatar selection and avatar parameters are used to generate an avatar in a virtual space, the avatar being based on the facial characteristics.
Another example computer accessible medium includes the foregoing operations and further includes receiving at least one of a remote avatar selection and remote avatar parameters.
Another example computer accessible medium includes the foregoing operations and further includes rendering, based on the remote avatar parameters, the remote avatar selection to allow an undistorted or less distorted display of the avatar based on the remote avatar selection, and displaying the avatar based on the rendered remote avatar selection.
Another example computer accessible medium includes the foregoing operations and further includes animating the displayed avatar based on the remote avatar parameters.
The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents.
Claims (23)
1. A system for avatar generation, rendering, and animation during communications between a first user device and a remote user device, the system comprising:
a camera configured to capture images;
a communication module configured to initiate and establish communications between the first and the remote user devices, and to transmit and receive information between the first and the remote user devices; and
one or more storage media having stored thereon, individually or in combination, instructions that when executed by one or more processors result in the following operations, comprising:
selecting at least one of a model-based two-dimensional (2-D) avatar and a sketch-based 2-D avatar for use during the communication;
initiating the communication;
capturing an image;
detecting a face in the image;
determining facial characteristics from the face;
converting the facial characteristics to avatar parameters; and
transmitting at least one of the avatar selection and the avatar parameters.
2. The system of claim 1, wherein determining facial characteristics from the face comprises:
detecting and identifying facial landmarks in the face, the facial landmarks comprising at least one of a forehead, chin, eyes, nose, mouth, and facial contour of the face in the image; and
generating facial parameters based, at least in part, on the identified facial landmarks, the facial parameters comprising one or more key points and connecting edges formed between at least two key points of the one or more key points.
3. The system of claim 1, wherein the avatar selection and avatar parameters are used to generate an avatar on a remote device, the avatar being based on the facial characteristics.
4. The system of claim 1, wherein the avatar selection and avatar parameters are used to generate an avatar in a virtual space, the avatar being based on the facial characteristics.
5. The system of claim 1, wherein the instructions, when executed by one or more processors, result in the following additional operation:
receiving at least one of a remote avatar selection and remote avatar parameters.
6. The system of claim 5, further comprising a display, wherein the instructions, when executed by one or more processors, result in the following additional operations:
rendering, based on the remote avatar parameters, the remote avatar selection to allow an undistorted or less distorted display of the avatar based on the remote avatar selection; and
displaying the avatar based on the rendered remote avatar selection.
7. The system of claim 6, wherein the instructions, when executed by one or more processors, result in the following additional operation:
animating the displayed avatar based on the remote avatar selection.
8. An apparatus for avatar generation, rendering, and animation during communications between a first user device and a remote user device, the apparatus comprising:
a communication module configured to initiate and establish communications between the first and the remote user devices;
an avatar selection module configured to allow a user to select at least one of a model-based two-dimensional (2-D) avatar and a sketch-based 2-D avatar for use during the communication;
a face detection module configured to detect a facial region in an image of the user, and to detect and identify one or more facial characteristics of the face; and
an avatar control module configured to convert the facial characteristics to avatar parameters;
wherein the communication module is configured to transmit at least one of the avatar selection and the avatar parameters.
9. The apparatus of claim 8, wherein the face detection module comprises:
a landmark detection module configured to identify facial landmarks of the facial region in the image, the facial landmarks comprising at least one of a forehead, chin, eyes, nose, mouth, and facial contour of the face; and
a facial parameter module configured to generate facial parameters based, at least in part, on the identified facial landmarks, the facial parameters comprising one or more key points and connecting edges formed between at least two key points of the one or more key points.
10. The apparatus of claim 9, wherein the avatar control module is configured to generate the sketch-based 2-D avatar based, at least in part, on the facial parameters.
11. The apparatus of claim 8, wherein the avatar selection and avatar parameters are used to generate an avatar on the remote device, the avatar being based on the facial characteristics.
12. The apparatus of claim 8, wherein the communication module is configured to receive at least one of a remote avatar selection and remote avatar parameters.
13. The apparatus of claim 12, further comprising a display configured to display an avatar based on the remote avatar selection.
14. The apparatus of claim 12, further comprising an avatar rendering module configured to render, based on the remote avatar parameters, the remote avatar selection to allow an undistorted or less distorted display of the avatar based on the remote avatar selection.
15. The apparatus of claim 13, wherein the avatar control module is configured to animate the displayed avatar based on the remote avatar parameters.
16. A method for avatar generation, rendering, and animation, the method comprising:
selecting at least one of a model-based two-dimensional (2-D) avatar and a sketch-based 2-D avatar for use during communication;
initiating the communication;
capturing an image;
detecting a face in the image;
determining facial characteristics from the face;
converting the facial characteristics to avatar parameters; and
transmitting at least one of the avatar selection and the avatar parameters.
17. The method of claim 16, wherein determining facial characteristics from the face comprises:
detecting and identifying facial landmarks in the face, the facial landmarks comprising at least one of a forehead, chin, eyes, nose, mouth, and facial contour of the face in the image; and
generating facial parameters based, at least in part, on the identified facial landmarks, the facial parameters comprising key points and connecting edges formed between one or more of the key points.
18. The method of claim 16, wherein the avatar selection and avatar parameters are used to generate an avatar on a remote device, the avatar being based on the facial characteristics.
19. The method of claim 16, wherein the avatar selection and avatar parameters are used to generate an avatar in a virtual space, the avatar being based on the facial characteristics.
20. The method of claim 16, further comprising receiving at least one of a remote avatar selection and remote avatar parameters.
21. The method of claim 20, further comprising:
rendering, based on the remote avatar parameters, the remote avatar selection to allow an undistorted or less distorted display of the avatar based on the remote avatar selection; and
displaying the avatar based on the rendered remote avatar selection.
22. The method of claim 21, further comprising animating the displayed avatar based on the remote avatar parameters.
23. At least one computer accessible medium having instructions stored thereon which, when executed by a machine, cause the machine to perform the method of any one of claims 16 to 22.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010021750.2A CN111275795A (en) | 2012-04-09 | 2012-04-09 | System and method for avatar generation, rendering and animation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2012/000460 WO2013152455A1 (en) | 2012-04-09 | 2012-04-09 | System and method for avatar generation, rendering and animation |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010021750.2A Division CN111275795A (en) | 2012-04-09 | 2012-04-09 | System and method for avatar generation, rendering and animation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104205171A true CN104205171A (en) | 2014-12-10 |
Family
ID=49326983
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201280071879.8A Pending CN104205171A (en) | 2012-04-09 | 2012-04-09 | System and method for avatar generation, rendering and animation |
CN202010021750.2A Pending CN111275795A (en) | 2012-04-09 | 2012-04-09 | System and method for avatar generation, rendering and animation |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010021750.2A Pending CN111275795A (en) | 2012-04-09 | 2012-04-09 | System and method for avatar generation, rendering and animation |
Country Status (4)
Country | Link |
---|---|
US (1) | US20140198121A1 (en) |
CN (2) | CN104205171A (en) |
TW (1) | TWI642306B (en) |
WO (1) | WO2013152455A1 (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104618721A (en) * | 2015-01-28 | 2015-05-13 | 山东大学 | Ultra-low code rate face video coding and decoding method based on feature modeling |
CN105120165A (en) * | 2015-08-31 | 2015-12-02 | 联想(北京)有限公司 | Image acquisition control method and device |
CN105577517A (en) * | 2015-12-17 | 2016-05-11 | 掌赢信息科技(上海)有限公司 | Sending method of short video message and electronic device |
CN108335345A (en) * | 2018-02-12 | 2018-07-27 | 北京奇虎科技有限公司 | The control method and device of FA Facial Animation model, computing device |
CN109919016A (en) * | 2019-01-28 | 2019-06-21 | 武汉恩特拉信息技术有限公司 | A kind of method and device generating human face expression on the object of no face's organ |
CN111614925A (en) * | 2020-05-20 | 2020-09-01 | 广州视源电子科技股份有限公司 | Figure image processing method and device, corresponding terminal and storage medium |
CN111656406A (en) * | 2017-12-14 | 2020-09-11 | 奇跃公司 | Context-based rendering of virtual avatars |
US11295502B2 (en) | 2014-12-23 | 2022-04-05 | Intel Corporation | Augmented facial animation |
US11303850B2 (en) | 2012-04-09 | 2022-04-12 | Intel Corporation | Communication using interactive avatars |
CN114527881A (en) * | 2015-04-07 | 2022-05-24 | 英特尔公司 | Avatar keyboard |
CN115039401A (en) * | 2020-01-30 | 2022-09-09 | 斯纳普公司 | Video generation system for rendering frames on demand |
US11831937B2 (en) | 2020-01-30 | 2023-11-28 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUS |
US11887231B2 (en) | 2015-12-18 | 2024-01-30 | Tahoe Research, Ltd. | Avatar animation system |
US11991419B2 (en) | 2020-01-30 | 2024-05-21 | Snap Inc. | Selecting avatars to be included in the video being generated on demand |
US12111863B2 (en) | 2020-01-30 | 2024-10-08 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8584031B2 (en) | 2008-11-19 | 2013-11-12 | Apple Inc. | Portable touch screen device, method, and graphical user interface for using emoji characters |
TWI439960B (en) | 2010-04-07 | 2014-06-01 | Apple Inc | Avatar editing environment |
US9357174B2 (en) | 2012-04-09 | 2016-05-31 | Intel Corporation | System and method for avatar management and selection |
US9886622B2 (en) | 2013-03-14 | 2018-02-06 | Intel Corporation | Adaptive facial expression calibration |
WO2014139142A1 (en) | 2013-03-15 | 2014-09-18 | Intel Corporation | Scalable avatar messaging |
US20160042224A1 (en) * | 2013-04-03 | 2016-02-11 | Nokia Technologies Oy | An Apparatus and Associated Methods |
GB2516241A (en) * | 2013-07-15 | 2015-01-21 | Michael James Levy | Avatar creation system and method |
WO2016011654A1 (en) * | 2014-07-25 | 2016-01-28 | Intel Corporation | Avatar facial expression animations with head rotation |
WO2016068581A1 (en) | 2014-10-31 | 2016-05-06 | Samsung Electronics Co., Ltd. | Device and method of managing user information based on image |
EP3241187A4 (en) | 2014-12-23 | 2018-11-21 | Intel Corporation | Sketch selection for rendering 3d model avatar |
US9799133B2 (en) | 2014-12-23 | 2017-10-24 | Intel Corporation | Facial gesture driven animation of non-facial features |
KR101937850B1 (en) * | 2015-03-02 | 2019-01-14 | 네이버 주식회사 | Apparatus, method, and computer program for generating catoon data, and apparatus for viewing catoon data |
KR101620050B1 (en) * | 2015-03-03 | 2016-05-12 | 주식회사 카카오 | Display method of scenario emoticon using instant message service and user device therefor |
KR101726844B1 (en) * | 2015-03-25 | 2017-04-13 | 네이버 주식회사 | System and method for generating cartoon data |
WO2016161553A1 (en) * | 2015-04-07 | 2016-10-13 | Intel Corporation | Avatar generation and animations |
US9940637B2 (en) | 2015-06-05 | 2018-04-10 | Apple Inc. | User interface for loyalty accounts and private label accounts |
US11580608B2 (en) | 2016-06-12 | 2023-02-14 | Apple Inc. | Managing contact information for communication applications |
DK179978B1 (en) | 2016-09-23 | 2019-11-27 | Apple Inc. | Image data for enhanced user interactions |
US10504268B1 (en) * | 2017-04-18 | 2019-12-10 | Educational Testing Service | Systems and methods for generating facial expressions in a user interface |
CN110490093B (en) * | 2017-05-16 | 2020-10-16 | 苹果公司 | Emoticon recording and transmission |
KR102435337B1 (en) * | 2017-05-16 | 2022-08-22 | 애플 인크. | Emoji recording and sending |
DK179867B1 (en) | 2017-05-16 | 2019-08-06 | Apple Inc. | RECORDING AND SENDING EMOJI |
US11368351B1 (en) * | 2017-09-19 | 2022-06-21 | Lockheed Martin Corporation | Simulation view network streamer |
US11722764B2 (en) | 2018-05-07 | 2023-08-08 | Apple Inc. | Creative camera |
DK179992B1 (en) | 2018-05-07 | 2020-01-14 | Apple Inc. | Display of user interfaces associated with physical activities |
DK180078B1 (en) | 2018-05-07 | 2020-03-31 | Apple Inc. | USER INTERFACE FOR AVATAR CREATION |
US10375313B1 (en) | 2018-05-07 | 2019-08-06 | Apple Inc. | Creative camera |
US10681310B2 (en) | 2018-05-07 | 2020-06-09 | Apple Inc. | Modifying video streams with supplemental content for video conferencing |
US12033296B2 (en) | 2018-05-07 | 2024-07-09 | Apple Inc. | Avatar creation user interface |
CN108717719A (en) * | 2018-05-23 | 2018-10-30 | 腾讯科技(深圳)有限公司 | Generation method, device and the computer storage media of cartoon human face image |
US11087520B2 (en) | 2018-09-19 | 2021-08-10 | XRSpace CO., LTD. | Avatar facial expression generating system and method of avatar facial expression generation for facial model |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
DK201970530A1 (en) | 2019-05-06 | 2021-01-28 | Apple Inc | Avatar integration with multiple applications |
CN113223128B (en) * | 2020-02-04 | 2022-09-13 | 北京百度网讯科技有限公司 | Method and apparatus for generating image |
DK181103B1 (en) | 2020-05-11 | 2022-12-15 | Apple Inc | User interfaces related to time |
US11921998B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Editing features of an avatar |
US11652959B2 (en) * | 2020-05-12 | 2023-05-16 | True Meeting Inc. | Generating a 3D visual representation of the 3D object using a neural network selected out of multiple neural networks |
CN111667553A (en) * | 2020-06-08 | 2020-09-15 | 北京有竹居网络技术有限公司 | Head-pixelized face color filling method and device and electronic equipment |
AU2021290132C1 (en) | 2020-06-08 | 2024-04-18 | Apple Inc. | Presenting avatars in three-dimensional environments |
CN112115823A (en) * | 2020-09-07 | 2020-12-22 | 江苏瑞科科技有限公司 | Mixed reality cooperative system based on emotion avatar |
EP4216167A4 (en) * | 2021-01-13 | 2024-05-01 | Samsung Electronics Co., Ltd. | Electronic device and method for operating avatar video service |
CN112601047B (en) * | 2021-02-22 | 2021-06-22 | 深圳平安智汇企业信息管理有限公司 | Projection method and device based on virtual meeting scene terminal and computer equipment |
TWI792845B (en) | 2021-03-09 | 2023-02-11 | 香港商數字王國企業集團有限公司 | Animation generation method for tracking facial expressions and neural network training method thereof |
CN113240778B (en) * | 2021-04-26 | 2024-04-12 | 北京百度网讯科技有限公司 | Method, device, electronic equipment and storage medium for generating virtual image |
US11776190B2 (en) | 2021-06-04 | 2023-10-03 | Apple Inc. | Techniques for managing an avatar on a lock screen |
US20230273714A1 (en) | 2022-02-25 | 2023-08-31 | ShredMetrix LLC | Systems And Methods For Visualizing Sporting Equipment |
EP4273669A1 (en) * | 2022-05-06 | 2023-11-08 | Nokia Technologies Oy | Monitoring of facial characteristics |
US11972526B1 (en) * | 2023-03-31 | 2024-04-30 | Apple Inc. | Rendering of enrolled user's face for external display |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030206171A1 (en) * | 2002-05-03 | 2003-11-06 | Samsung Electronics Co., Ltd. | Apparatus and method for creating three-dimensional caricature |
CN1532775A (en) * | 2003-03-19 | 2004-09-29 | Matsushita Electric Industrial Co., Ltd. | Visuable telephone terminal |
CN1832604A (en) * | 2005-03-07 | 2006-09-13 | 乐金电子(中国)研究开发中心有限公司 | Mobile communication terminal possessing cartoon generating function and cartoon generating method thereof |
EP2431936A2 (en) * | 2009-05-08 | 2012-03-21 | Samsung Electronics Co., Ltd. | System, method, and recording medium for controlling an object in virtual world |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7196733B2 (en) * | 2002-01-28 | 2007-03-27 | Canon Kabushiki Kaisha | Apparatus for receiving broadcast data, method for displaying broadcast program, and computer program |
US7386799B1 (en) * | 2002-11-21 | 2008-06-10 | Forterra Systems, Inc. | Cinematic techniques in avatar-centric communication during a multi-user online simulation |
GB0311208D0 (en) * | 2003-05-15 | 2003-06-18 | British Telecomm | Feature based caricaturing |
KR100983745B1 (en) * | 2003-09-27 | 2010-09-24 | 엘지전자 주식회사 | Avatar generation service method for mobile communication device |
US7809172B2 (en) * | 2005-11-07 | 2010-10-05 | International Barcode Corporation | Method and system for generating and linking composite images |
US8386918B2 (en) * | 2007-12-06 | 2013-02-26 | International Business Machines Corporation | Rendering of real world objects and interactions into a virtual universe |
US20090315893A1 (en) * | 2008-06-18 | 2009-12-24 | Microsoft Corporation | User avatar available across computing applications and devices |
US8819244B2 (en) * | 2010-04-07 | 2014-08-26 | Apple Inc. | Apparatus and method for establishing and utilizing backup communication channels |
WO2011129907A1 (en) * | 2010-04-13 | 2011-10-20 | Sony Computer Entertainment America Llc | Calibration of portable devices in a shared virtual space |
US8854397B2 (en) * | 2011-12-13 | 2014-10-07 | Facebook, Inc. | Photo selection for mobile devices |
2012
- 2012-04-09 WO PCT/CN2012/000460 patent/WO2013152455A1/en active Application Filing
- 2012-04-09 CN CN201280071879.8A patent/CN104205171A/en active Pending
- 2012-04-09 CN CN202010021750.2A patent/CN111275795A/en active Pending
- 2012-04-09 US US13/997,265 patent/US20140198121A1/en not_active Abandoned
2013
- 2013-04-09 TW TW102112511A patent/TWI642306B/en active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030206171A1 (en) * | 2002-05-03 | 2003-11-06 | Samsung Electronics Co., Ltd. | Apparatus and method for creating three-dimensional caricature |
CN1532775A (en) * | 2003-03-19 | 2004-09-29 | Matsushita Electric Industrial Co., Ltd. | Visuable telephone terminal |
CN1832604A (en) * | 2005-03-07 | 2006-09-13 | 乐金电子(中国)研究开发中心有限公司 | Mobile communication terminal possessing cartoon generating function and cartoon generating method thereof |
EP2431936A2 (en) * | 2009-05-08 | 2012-03-21 | Samsung Electronics Co., Ltd. | System, method, and recording medium for controlling an object in virtual world |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11595617B2 (en) | 2012-04-09 | 2023-02-28 | Intel Corporation | Communication using interactive avatars |
US11303850B2 (en) | 2012-04-09 | 2022-04-12 | Intel Corporation | Communication using interactive avatars |
US11295502B2 (en) | 2014-12-23 | 2022-04-05 | Intel Corporation | Augmented facial animation |
CN104618721A (en) * | 2015-01-28 | 2015-05-13 | 山东大学 | Ultra-low code rate face video coding and decoding method based on feature modeling |
CN104618721B (en) * | 2015-01-28 | 2018-01-26 | 山东大学 | Ultra-low code rate face video coding and decoding method based on feature modeling |
CN114527881B (en) * | 2015-04-07 | 2023-09-26 | 英特尔公司 | avatar keyboard |
CN114527881A (en) * | 2015-04-07 | 2022-05-24 | 英特尔公司 | Avatar keyboard |
CN105120165A (en) * | 2015-08-31 | 2015-12-02 | 联想(北京)有限公司 | Image acquisition control method and device |
CN105577517A (en) * | 2015-12-17 | 2016-05-11 | 掌赢信息科技(上海)有限公司 | Sending method of short video message and electronic device |
US11887231B2 (en) | 2015-12-18 | 2024-01-30 | Tahoe Research, Ltd. | Avatar animation system |
CN111656406A (en) * | 2017-12-14 | 2020-09-11 | 奇跃公司 | Context-based rendering of virtual avatars |
CN108335345A (en) * | 2018-02-12 | 2018-07-27 | 北京奇虎科技有限公司 | The control method and device of FA Facial Animation model, computing device |
CN108335345B (en) * | 2018-02-12 | 2021-08-24 | 北京奇虎科技有限公司 | Control method and device of facial animation model and computing equipment |
CN109919016B (en) * | 2019-01-28 | 2020-11-03 | 武汉恩特拉信息技术有限公司 | Method and device for generating facial expression on object without facial organs |
CN109919016A (en) * | 2019-01-28 | 2019-06-21 | 武汉恩特拉信息技术有限公司 | A kind of method and device generating human face expression on the object of no face's organ |
CN115039401A (en) * | 2020-01-30 | 2022-09-09 | 斯纳普公司 | Video generation system for rendering frames on demand |
US11831937B2 (en) | 2020-01-30 | 2023-11-28 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUS |
CN115039401B (en) * | 2020-01-30 | 2024-01-26 | 斯纳普公司 | Video generation system for on-demand rendering of frames |
US11991419B2 (en) | 2020-01-30 | 2024-05-21 | Snap Inc. | Selecting avatars to be included in the video being generated on demand |
US12111863B2 (en) | 2020-01-30 | 2024-10-08 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
CN111614925A (en) * | 2020-05-20 | 2020-09-01 | 广州视源电子科技股份有限公司 | Figure image processing method and device, corresponding terminal and storage medium |
Also Published As
Publication number | Publication date |
---|---|
TW201352003A (en) | 2013-12-16 |
WO2013152455A1 (en) | 2013-10-17 |
CN111275795A (en) | 2020-06-12 |
US20140198121A1 (en) | 2014-07-17 |
TWI642306B (en) | 2018-11-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104205171A (en) | System and method for avatar generation, rendering and animation | |
CN104170358B (en) | For the system and method for incarnation management and selection | |
US11595617B2 (en) | Communication using interactive avatars | |
CN104011738A (en) | System and method for communication using interactive avatar | |
US9936165B2 (en) | System and method for avatar creation and synchronization | |
TWI583198B (en) | Communication using interactive avatars | |
TWI682669B (en) | Communication using interactive avatars | |
TW202107250A (en) | Communication using interactive avatars |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | | |
| PB01 | Publication | | |
| C10 | Entry into substantive examination | | |
| SE01 | Entry into force of request for substantive examination | | |
| RJ01 | Rejection of invention patent application after publication | | Application publication date: 20141210 |
| RJ01 | Rejection of invention patent application after publication | | |