WO2013027893A1 - Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching emotional content using the same
- Publication number
- WO2013027893A1 (PCT/KR2011/008399)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- emotion
- image
- face
- user
- video call
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
- G06Q10/101—Collaborative creation, e.g. joint development of products or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/176—Dynamic expression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/04—Protocols specially adapted for terminals or networks with limited capabilities; specially adapted for terminal portability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
Definitions
- The present invention relates to an apparatus and method for an emotional content service that analyzes the emotions and facial expressions recognized by the imaging device of a transmitting communication terminal and mixes a virtual object onto the screen of the receiving communication terminal in real time so as to effectively convey the analyzed emotion or communication; to an emotion recognition apparatus and method for serving the emotional content; to an apparatus and method for generating and matching the emotional content through the emotion recognition; and to an apparatus and method for generating the emotional content.
- Augmented reality, the second key keyword in the IT field, is a technology derived from virtual reality that combines the real world with a virtual experience. Regarded as one of the top ten innovations that will lead the future, it gives the user a heightened sense of reality by letting virtual objects interact with the real world.
- Augmented reality is a branch of virtual reality: a computer graphics technique that synthesizes virtual objects into the real environment so that they look like objects existing in that environment.
- Unlike virtual reality, an existing approach that targets only virtual spaces and virtual objects, augmented reality synthesizes virtual objects on the basis of the real world, reinforcing it with additional information that is difficult to obtain from the real world alone.
- Augmented reality technology is being actively used in various forms in broadcasting, advertising, exhibitions, games, theme parks, the military, education, and promotion.
- Augmented reality differs from virtual reality technology, which excludes interaction with the real world and processes interactions only in a pre-established virtual space, in that it is based on real-time processing.
- Because information is overlaid on the image of the real world input through the terminal, augmented reality is distinguished from virtual reality, which provides only computer-generated images, in that it enables interaction with the real world.
- Marker-based mobile augmented reality recognizes a specific building by photographing a specific marker corresponding to that building and recognizing the marker. Sensor-based mobile augmented reality infers the current position of the terminal and its viewing direction using the built-in GPS and digital compass, and overlays POI (Point of Interest) information corresponding to the image in the inferred direction.
- 3D video content, the third key keyword in the IT field, is exploding in related industries thanks to James Cameron's 'Avatar', and the time when video calls are enjoyed with 3D content is expected to come.
- Android phones such as Samsung's Galaxy S II support video calls, and since Android version 2.3 (Gingerbread) officially supports video calls, Android phones from mid-2011 onward are expected to be equipped with a video call function.
- Android-based tablet computers and the second-generation iPad are also expected to provide video call service through their front and rear cameras, so video calls are becoming more common.
- 1:1 or multi-party video calls may grow into core services in addition to voice and data services.
- interest in video calls through mobile terminals is gradually increasing.
- An object of the present invention is to provide an emotional content service apparatus and method that can make a video call fun by presenting, together with the video during the call, a virtual object representing the emotional state of both callers, and to provide the emotional content itself.
- Another object of the present invention is to provide an emotional content service apparatus and method that superimpose the caller's emotional state onto the caller's video through a virtual object so that the callers experience augmented reality with a more realistic feeling; an emotion recognition apparatus and method for serving the emotional content; an apparatus and method for generating the emotional content through the emotion recognition; and the emotional content generated by that apparatus and method.
- The present invention provides an apparatus and method for an emotional content service that analyzes the emotions and facial expressions recognized from the imaging device of the transmitting communication terminal and mixes a virtual object in real time onto the screen of the receiving communication terminal so as to effectively convey the analyzed emotion or communication.
- The present invention also provides an emotion recognition apparatus and method for providing that content service.
- The present invention further provides an apparatus and method for generating the emotional content through the emotion recognition apparatus and method for providing the content service.
- The present invention further provides the emotional content generated by that apparatus and method for generating emotional content.
- To achieve the above objects, the present invention has at least one of the following features: avatar matching, which analyzes an emotion and substitutes an avatar acting out the specific emotion, and emoticon matching, which analyzes a facial expression and adds an effect to the specific expression by exaggerating the part of the face or body that represents it.
- The present invention proposes core technologies for face detection, face recognition, and emotion recognition to recognize a user's emotions and expressions, and, based on them, a technology for generating and matching avatars and emoticons that maximize the recognized emotions and expressions, applied to a video call service on a smartphone.
- The present invention can increase the effect of communication by recognizing changes in facial expression and matching the corresponding content (an avatar) onto a person's real face, enabling expressions of emotion through face recognition that are impossible in the real world.
- The apparatus and method for the emotional content service of a communication terminal device, the emotion recognition apparatus and method therefor, and the apparatus and method for generating and matching the emotional content using them are based on: voice recognition; the face area detection, face area normalization, and in-face-area feature extraction technologies of face recognition; the facial component (expression analysis) relation technology of emotion recognition; and the recognition of objects, hand gestures, and behavior. On this basis, a real-time matching technology between the real picture and the virtual image matches mixed virtual objects (including characters) onto both the face and the body through gesture and facial expression analysis, so that mixed reality is implemented through the video call.
- In the emotional content service of the communication terminal device, expression analysis relation functions for specific expressions and gestures of the voice, face, and body are registered in advance; when a similar voice, facial expression, or gesture is transmitted through the image, a virtual object responding to it is matched in real time onto the face and body on the output video screen so that the video call can be enjoyed.
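The pre-registration and real-time matching described above can be pictured as a lookup from registered (expression, gesture) relations to virtual objects. This is an illustrative sketch, not the patent's implementation; all labels and object names are assumptions.

```python
# Sketch (assumed labels): a pre-registered table maps recognized
# expression/gesture pairs to virtual objects, and each frame's
# recognition result selects the overlay in real time.
REGISTERED_RELATIONS = {
    ("smile", "wave"): {"object": "sparkle_avatar", "anchor": "face"},
    ("angry", "fist"): {"object": "flame_emoticon", "anchor": "head"},
    ("sad", None): {"object": "tear_overlay", "anchor": "eyes"},
}

def match_virtual_object(expression, gesture):
    """Return the virtual object registered for an (expression, gesture)
    pair, falling back to an expression-only relation, else None."""
    hit = REGISTERED_RELATIONS.get((expression, gesture))
    if hit is None:
        hit = REGISTERED_RELATIONS.get((expression, None))
    return hit

print(match_virtual_object("smile", "wave"))  # exact pair match
print(match_virtual_object("sad", "shrug"))   # expression-only fallback
```

In a real service the table values would reference avatar or emoticon assets, and the lookup would run once per recognized frame before compositing.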
- The emotional content service of the communication terminal apparatus for achieving the above objects extracts the emotional state of the user from at least one of a gesture and a facial expression of the user photographed through the imaging means of a communication terminal device having at least an imaging means and a display means.
- the virtual object may further include a character.
- the virtual object may be changed by the user.
- the virtual object is changed in real time corresponding to the emotional state.
- The position at which the virtual object is superimposed on the user's body and face may be changed.
- The communication terminal device further comprises voice input means and voice output means, and the emotional state of the user is further extracted from the user's voice input through the voice input means.
- The emotional content service of the communication terminal apparatus for achieving the above object is provided on a video call service providing terminal having at least an imaging means and a display means, and uses the gesture and facial expression of the user photographed through the imaging means.
- The emotional content service method of the communication terminal device for achieving the above object comprises: inputting a face image of the user into the communication terminal device; extracting face components from the input face image; preprocessing the extracted face components; extracting facial features from the preprocessed face components; registering the extracted facial features in a face database; and recognizing an emotion by comparing the features registered in the face database with the face components extracted in the feature extraction process.
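The register-then-compare flow above can be illustrated with a minimal nearest-neighbour sketch. The feature set here (normalized facial-landmark distances) and the emotion labels are assumptions for illustration; the patent does not fix a particular feature or classifier.

```python
# Minimal sketch of the claimed flow: preprocess face components into a
# normalized feature vector, register it in a face database, then
# recognize an emotion by comparing a new vector against registered ones.
import math

face_db = []  # list of (feature_vector, emotion_label)

def preprocess(landmarks, face_width):
    # Normalize landmark distances by face width for scale invariance.
    return [d / face_width for d in landmarks]

def register(landmarks, face_width, emotion):
    face_db.append((preprocess(landmarks, face_width), emotion))

def recognize(landmarks, face_width):
    feat = preprocess(landmarks, face_width)
    # Nearest registered entry wins (Euclidean distance).
    best = min(face_db, key=lambda entry: math.dist(entry[0], feat))
    return best[1]

# Toy features: (mouth_width, mouth_openness, brow_height) in pixels.
register([60, 5, 20], 100, "neutral")
register([80, 25, 22], 100, "happy")
register([55, 2, 12], 100, "angry")

print(recognize([78, 24, 21], 100))  # nearest to the "happy" entry
```

A production system would replace the toy vectors with features from a face-analysis pipeline (the description elsewhere mentions PCA and neural networks), but the register/compare structure is the same.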
- The emotion recognition method for the emotional content service comprises: receiving the face image of the user from the camera module of the communication terminal device; preprocessing the input face image; detecting only valid data in the preprocessed face image; estimating the position of the face and the camera information from the detected valid data; and generating a 3D image from the camera information and the face position information and matching the generated 3D image with the face image of the user.
- The method for generating and matching the emotional content through the emotion recognition method for the emotional content service of the communication terminal device comprises: outputting the 3D image matched with the face image of the user on the screen; and transmitting the 3D image matched with the face image of the user to the counterpart communication terminal through a network.
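One hedged way to picture the pose-estimation and matching step above: from a detected face bounding box, approximate the face position and camera distance with a pinhole-style proportion, then place and scale the generated overlay to match. The reference constants and geometry here are assumptions, not the patent's algorithm.

```python
# Assumed calibration: face width in pixels at a reference distance.
REFERENCE_FACE_WIDTH_PX = 100
REFERENCE_DISTANCE = 1.0  # metres (illustrative)

def estimate_pose(bbox):
    """bbox = (x, y, w, h) of the detected face in the frame."""
    x, y, w, h = bbox
    center = (x + w / 2, y + h / 2)
    # Pinhole-style approximation: apparent size is inversely
    # proportional to distance from the camera.
    distance = REFERENCE_DISTANCE * REFERENCE_FACE_WIDTH_PX / w
    return center, distance

def place_overlay(bbox, overlay_base_size=50):
    """Match the 3D overlay to the face: same center, scaled by distance."""
    center, distance = estimate_pose(bbox)
    scale = REFERENCE_DISTANCE / distance  # nearer face -> larger overlay
    return {"center": center, "size": overlay_base_size * scale}

print(place_overlay((120, 80, 200, 200)))  # a large (near) face
```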
- The emotional content service method of the communication terminal device for achieving the above object comprises: capturing a frame image to be analyzed from the video data source of the camera module of the communication terminal device; preprocessing the captured image into a state that is easy to analyze; detecting a face in the preprocessed image; estimating the posture and recognizing the facial expression based on the recognition information extracted through the face detection process, and selecting the avatar's posture and facial expression for the posture and expression assumed by the face; determining the position coordinates of the selected avatar in 3D space from the analyzed information, selecting an avatar animation for the corresponding expression and emotion, and transmitting a control signal to the 3D engine; composing the 3D space in which the avatar image and the video are to be represented, and placing the analyzed avatar at the corresponding position; matching the avatar and the video source represented through the 3D space into a single image source; performing video encoding on the matched image together with the voice source; and transmitting the matched image to the counterpart terminal through the network.
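The per-frame pipeline enumerated above can be sketched structurally. Every stage below is a stub standing in for a real module (actual detection, 3D composition, and encoding are outside this sketch); the point is only the order in which the stages chain together.

```python
# Stub pipeline (all function bodies are placeholders, names assumed).
def capture_frame(source):      return {"frame": source}
def preprocess(img):            return {**img, "preprocessed": True}
def detect_face(img):           return {"pose": "frontal", "expression": "smile"}
def select_avatar(face):        return {"animation": f"avatar_{face['expression']}"}
def compose_3d(avatar, img):    return {"scene": (avatar, img)}
def match_sources(scene, img):  return {"matched": scene}
def encode(matched, voice):     return {"payload": (matched, voice)}

def process_frame(source, voice):
    """One pass of the claimed pipeline: capture -> preprocess -> detect
    -> avatar selection -> 3D composition -> matching -> encoding."""
    img = preprocess(capture_frame(source))
    face = detect_face(img)
    avatar = select_avatar(face)
    scene = compose_3d(avatar, img)
    matched = match_sources(scene, img)
    return encode(matched, voice)

packet = process_frame("camera0", "mic0")
print(packet["payload"][1])  # the voice source travels with the matched video
```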
- In the process of detecting a face in the preprocessed image, a learning algorithm or the like is applied to analyze the facial feature points, and the position and relationship data of the facial components on the image are extracted.
- The step of transmitting the matched image to the counterpart terminal through the network establishes a session via SIP and transmits the image to the Internet via RTP/RTCP.
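As a rough illustration of the session setup mentioned here, a minimal SIP INVITE request can be built as plain text in the shape defined by RFC 3261. The addresses and Call-ID below are made up, and the SDP body that would describe the RTP media streams is omitted.

```python
# Builds only the text of a minimal SIP INVITE (headers, no SDP body).
def build_invite(caller, callee, call_id, cseq=1):
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        f"From: <sip:{caller}>",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        f"CSeq: {cseq} INVITE",
        "Content-Type: application/sdp",
        "",  # blank line separates headers from the (omitted) body
    ]
    return "\r\n".join(lines)

msg = build_invite("alice@example.com", "bob@example.com", "a84b4c76e66710")
print(msg.splitlines()[0])
```

After the INVITE/200 OK/ACK exchange completes, the matched video and voice would flow as RTP packets, with RTCP reports alongside; none of that media path is shown here.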
- The emotional content service apparatus of the communication terminal device for achieving the above object comprises: a server communication unit interworking with the video call service providing terminal; and a unit that recognizes the emotional state of the user from at least one of a gesture and an expression in the image information received from the video call service providing terminal and compares the recognized emotional state with previously stored object-related information.
- The emotional content service apparatus of the communication terminal device further comprises a server storage unit for storing the object-related data corresponding to the emotional state.
- An emotion recognition apparatus for the emotional content service of a communication terminal device for achieving the above objects comprises: a display unit that displays the image of the other party in the video call and an object overlapping that image; a communication unit interworking with a video call service providing server; an imaging unit that acquires image information of the user during the video call; and a controller that recognizes the emotional state of the user from the image information obtained by the imaging unit, extracts emotion information related to the recognized emotional state, transmits the extracted emotion information to the video call service providing server, receives from the server an object corresponding to the other party's emotion information during the call, superimposes the received object on its associated position in the other party's image, and outputs the result to the display unit.
- The apparatus for generating and matching the emotional content through the emotion recognition method for the emotional content service of the communication terminal device for achieving the above object comprises: a display unit that displays the image of the other party in the video call and an object overlapping that image; a communication unit interworking with a video call service providing server; an imaging unit that acquires image information of the user during the video call; and a controller that recognizes the emotional state of the user from the image information obtained by the imaging unit, extracts emotion information related to the recognized emotional state, transmits the extracted emotion information to the video call service providing server, receives from the server an object corresponding to the other party's emotion information during the call, superimposes the received object on its associated position in the other party's image, and outputs the result to the display unit.
- The display unit of the apparatus for generating and matching the emotional content through the emotion recognition method for the emotional content service of the communication terminal device according to the present invention further displays the image of the user during the video call.
- The apparatus for generating and matching the emotional content through the emotion recognition method for the emotional content service of the communication terminal device further includes a key input unit for determining whether to apply the object.
- The apparatus for generating and matching the emotional content through the emotion recognition method for the emotional content service of the communication terminal device according to the present invention may further include a storage unit for storing the object.
- The emotional content service of the communication terminal apparatus for achieving the above objects extracts the emotional state of the user from at least one of a gesture and a facial expression of the user photographed through the imaging means of a communication terminal device having at least an imaging means and a display means, exaggerates at least one of the body and the face of the user representing the gesture and expression corresponding to the extracted emotional state, and displays the result on the display means of the communication terminal device of the other party making the video call with the user.
- The present invention can provide abundant sights during a video call by implementing through the call various mixed realities not seen in the real world.
- The present invention can make a video call fun by presenting, together with the call video, a virtual object representing the emotional state of both callers.
- By superimposing the caller's emotional state onto the caller's video through a virtual object, the present invention lets callers experience augmented reality with a more realistic feeling and delivers the caller's emotional state in a fresh way.
- The present invention lets the user experience both the virtual and the real by shaping the voices, faces, and bodies of specific expressions and gestures into virtual objects on the video call screen.
- FIG. 1 is a view showing a concept of avatar matching according to the present invention
- FIG. 2 is a view showing a concept of emoticon matching according to the present invention
- FIG. 3 is a view showing composite data matched to a change in facial expression of a user and standard data thereof according to an embodiment of the present invention
- FIG. 4 is a control flow diagram illustrating an emotion and facial expression recognition and matching procedure of a 3D avatar
- FIG. 6 is a diagram illustrating a service movement scenario according to user movement between access networks to which the emotion content service method according to the present invention is applied;
- FIG. 7 is a control flowchart for face detection in an emotion recognition method for an emotion content service of a communication terminal device according to the present invention.
- FIG. 9 is a diagram illustrating a basic message and status code scheme of a SIP
- FIG. 10 is a diagram illustrating a SIP protocol stack
- FIG. 11 is a diagram showing a basic procedure of call setup of a SIP protocol
- FIG. 12 is a conceptual diagram of a content matching system according to the present invention.
- FIG. 14 is a schematic diagram of an avatar video communication operation procedure through emotion recognition and image registration according to the present invention.
- FIG. 15 is a view illustrating an avatar video communication operation procedure through emotion recognition and image registration of FIG. 14 using actual content
- FIG. 16 is a view showing various fields to which the present invention is applicable.
- The present invention has at least one of the features of avatar matching, which analyzes an emotion so that an avatar expresses the specific emotion, and emoticon matching, which analyzes an expression and adds an effect to the specific expression.
- FIG. 1 is a view showing the concept of avatar matching according to the present invention
- Figure 2 is a view showing the concept of emoticon matching according to the present invention.
- the avatar matching according to the present invention recognizes an emotion expressed in an actual image and replaces the actual image with an avatar expressed in augmented reality.
- the entire screen of the actual image may be replaced or only a specific part of the face may be expressed in augmented reality.
- Emoticon matching according to the present invention recognizes the emotion expressed in the actual image and uses a variety of emoticons to increase the delivery effect for the recognized expression.
- The emotional content according to the present invention may be composed of, for example, ten male emotion recognition reactions, ten female emotion recognition reactions, and ten animal emotion recognition reactions.
- The emotional content can express characters and motions linked with motion expression scripts through 3D modeling that can be matched to the standard data.
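The link between a recognized emotion and a motion expression script can be pictured as a simple table; the emotion labels and motion names below are illustrative assumptions, not the patent's asset list.

```python
# Assumed mapping from a recognized emotion to the motion expression
# script that drives the 3D-modeled character.
MOTION_SCRIPTS = {
    "happy": ["raise_arms", "jump"],
    "sad": ["lower_head", "slump"],
    "surprised": ["step_back", "widen_eyes"],
}

def motions_for(emotion):
    """Return the motion script for an emotion, or an idle fallback."""
    return MOTION_SCRIPTS.get(emotion, ["idle"])

print(motions_for("happy"))
print(motions_for("bored"))  # unknown emotions fall back to idle
```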
- FIG. 3 is a diagram showing synthetic data and standard data thereof matched to a change in facial expression of a user according to an exemplary embodiment of the present invention.
- The standard data can be used to make video calls between users more interesting and pleasant by substituting an avatar for the user or for an emotion that is difficult to express, through avatar creation and its application to video calls.
- The present invention can be implemented so as to be applicable to video calls by producing male, female, and anthropomorphic characters acting out each type of emotion and facial expression change.
- FIG. 4 is a control flowchart illustrating the process of recognizing and matching the emotions and facial expressions of a 3D avatar. As illustrated in FIG. 4, the source of an avatar produced for each emotion and facial expression is reflected in the user application through the following recognition and matching steps.
- the procedure indicated in blue is a procedure implemented by a commercial library
- the procedure indicated in green is a content implementation procedure
- gray indicates a system area
- red indicates the user's emotion and facial expression during a video call, which are handled by smartphone application content in which an avatar acting out the corresponding emotion and an emoticon adding an effect to the corresponding expression are matched.
- the present invention develops the core technologies for face recognition, emotion recognition, image registration, and video communication needed for a video call service supporting emotion and facial expression matching, together with the application content and an interworking server.
- avatars and emoticons for the user are produced and applied to the smartphone application.
- the emotion content generated by the apparatus and method for generating emotion content through the emotion recognition apparatus and method utilizes the creative, future-oriented value and smart image of the character as a new theme and a new trend, and is used as content in mobile video calls.
- the emotional content according to the present invention enhances the educational utility as a differentiated theme using advanced smart devices that will change the future life, and maximizes artistic value through differentiated content storytelling and the shaping of universal values.
- it maximizes the differentiation factor that 3D animation is possible not only through video player but also through video call with others.
- the emotional content according to the present invention can be helpful for the emotional development of our children by conveying the didactic value that our lives are happy and precious, and by inducing various facial expression changes, through smart video calls that accompany us and enable communication in everyday life.
- the visual image of the character according to the present invention is constructed as a character image that anyone can like and accept, using a three- to four-head body proportion.
- FIG. 5 is a table summarizing embodiments of the main storytelling characters according to the present invention, showing the main characters, their images, and their personalities and roles in conveying the importance of living and breathing together with me, universal values, and communication for imagination, adventure, friendship, and family.
- the background story of the embodiment of FIG. 5 deals with the amusing episodes that occur when the main character 'Ava' (the user) meets 'Bata', a race living in a virtual space inside his mobile phone.
- the protagonist is a user who uses a video call
- the character 'Bata', personified in the virtual space, is a friend whom the protagonist can meet at any time by making a video call with the other party.
- when the emotion content according to the present invention is rendered as a 3D object in a video communication terminal such as a smartphone, the processing time for image processing from face detection through recognition and emotion recognition during a video call is expected to be large, and delays are also expected in matching the video with 3D objects on the basis of the recognized information. Therefore, in terms of 3D matching, a method of minimizing the matching delay time by using a 3D engine can be considered. For example, development speed can be improved by using the Unity3D engine rather than representing 3D objects directly with OpenGL ES.
- FIG. 6 is a diagram illustrating a service movement scenario for user movement between access networks to which the emotion content service method according to the present invention is applied; a method for solving the communication network access problem in a mobile smartphone environment may thus be sought.
- face detection means finding the location of a face in an image.
- a person's face appears differently depending on the frontal or side angle determined by the gazing direction, the degree to which the head is tilted from side to side, various expressions, and the distance from the camera.
- the image may also vary with external changes such as morphological changes in the size of the face image, differences in brightness within the face due to lighting, complex backgrounds, or other objects whose color is indistinguishable from the face, so face detection research on multimedia images involves many difficulties.
- face detection is a preprocessing step before face recognition, and its methods are divided into knowledge-based methods, feature-based methods, template-matching methods, and appearance-based methods, as summarized in Table 1 below.
- Knowledge-based face detection methods use the fact that facial components such as the eyebrows, eyes, nose, and mouth maintain constant distances and positional relationships with one another. Since contrast is partially concentrated in the center area of the face, a face is detected by comparing the contrast distribution of the face image with that of the input image, mainly using a top-down approach.
- the knowledge-based detection method has the disadvantage that it is difficult to detect a face in an image with many variations, such as the tilt of the face, the angle at which the face looks at the camera, and the expression, so it can be applied only in special cases.
- Feature-based face detection methods detect a face using the size and shape of the facial feature components (eyes, nose, mouth, outline, and contrast), the correlations between them, and the color, texture, and shape information of the face and its components.
- a bottom-up approach is used: partial features of the face are found first, and the candidate regions (face-specific components) are then integrated to find the face.
- feature-based face detection methods have the advantages of short processing time, since features can be found quickly and easily, and of being insensitive to pose or face orientation. However, a background or object similar to skin color can be mistaken for a face, the color and texture information of the face can be lost as the brightness of the lighting changes, and the feature components of the face may not be detected depending on the degree of inclination of the face.
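The skin-color cue discussed above can be illustrated with a small sketch. The patent does not specify a particular classifier; the rule below is a classic per-pixel RGB heuristic for bright, reddish skin-like pixels, and the thresholds are illustrative assumptions rather than values from the invention. It also shows the stated weakness: any reddish background pixel passes the same test.

```python
import numpy as np

def skin_mask(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of skin-like pixels for an (H, W, 3) RGB image,
    using a simple rule-based RGB classifier: bright, reddish pixels
    whose red channel dominates green and blue."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = (np.maximum(np.maximum(r, g), b)
              - np.minimum(np.minimum(r, g), b))
    return ((r > 95) & (g > 40) & (b > 20)      # bright enough
            & (spread > 15)                     # not grayish
            & (np.abs(r - g) > 15)              # red separated from green
            & (r > g) & (r > b))                # red dominates
```

Candidate regions found this way would still need the component-integration step described above before they count as faces.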
- Template-matching-based face detection methods create a standard template for the target faces and then detect a face by comparing its similarity with the input image; they include predefined template algorithms and deformable template algorithms.
- the template-matching-based face detection method generates information using partial regions or outlines from the prepared image data, and then transforms the generated information through algorithms to increase the amount of similar information and use it for face detection.
- the template-matching-based face detection method is sensitive to changes in face size with distance and to the rotation angle and tilt of the face according to the gaze direction, and, like the knowledge-based method, it is difficult to define templates for different poses.
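As a minimal illustration of the template-matching idea (not the patent's implementation), the sketch below slides a standard template over an image and scores each window with normalized cross-correlation; because the template has a fixed size and orientation, the scale and rotation sensitivity noted above follows directly.

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray):
    """Exhaustively slide `template` over `image` and return the
    (row, col) of the best window under normalized cross-correlation,
    together with the score (1.0 is a perfect match)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue            # flat window: undefined correlation
            score = (wz * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

A deformable-template variant would additionally adjust the template parameters per window instead of using one fixed pattern.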
- An appearance-based method is a method of detecting a face using a model trained by a set of training images using pattern recognition.
- Appearance-based methods are among the most widely used in the face detection field, and include the eigenface approach generated by principal component analysis (PCA), linear discriminant analysis (LDA), neural networks (NN), AdaBoost, and support vector machines (SVM).
- Appearance-based methods use the existing face and non-face learning data groups to detect face regions in complex images and generate learned eigenvectors to find faces. This method has the advantage of high recognition rate because the constraints mentioned in other detection methods are overcome by learning.
- appearance-based methods such as PCA, NN, and SVM require a lot of time to learn the database, and have the further disadvantage that they must be retrained whenever the database changes.
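The "learned eigenvectors" mentioned above can be sketched with the eigenface idea: learn a low-dimensional "face space" by PCA and score a pattern by its reconstruction error, which is small for face-like inputs and large otherwise. This is an illustrative sketch on synthetic vectors, not the invention's trained detector.

```python
import numpy as np

def fit_eigenfaces(faces: np.ndarray, k: int):
    """faces: (n_samples, n_pixels) matrix of flattened face images.
    Returns the mean face and the top-k principal components."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # The right singular vectors of the centered data are the eigenvectors
    # of its covariance matrix, i.e. the "eigenfaces".
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def reconstruction_error(x: np.ndarray, mean: np.ndarray,
                         basis: np.ndarray) -> float:
    """Distance of x from the learned face space: project onto the
    eigenface basis and measure what the projection fails to explain."""
    z = x - mean
    proj = basis.T @ (basis @ z)
    return float(np.linalg.norm(z - proj))
```

Thresholding this error over sliding windows is one classic way eigenfaces were used for detection, which also makes the retraining cost above concrete: changing the database changes the basis.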
- face recognition technology is used to identify a face after the face has been detected in a multimedia image. Face recognition technology can be classified as shown in Table 2 below, and is used in the present invention to identify the components of the face.
- in holistic methods, the input to the face recognition system is the entire face area.
- holistic face recognition methods have the advantage of being easy to implement, but they do not capture enough facial detail, so it can be difficult to obtain sufficient results.
- Holistic face recognition methods include principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), tensor face and probabilistic decision-based neural networks (PDBNN).
- Feature-based methods first extract spatial features (eyes, nose and mouth), and then the location and spatial characteristics (geometry and appearance) of the spatial features are input to the recognition system.
- the feature-based method is quite complicated because there is a wide variety of feature information on the face, so it is necessary to determine how to select the best features in order to improve face recognition performance.
- typical feature-based methods such as pure geometry, dynamic link architecture, and hidden Markov models perform much better than the holistic matching methods above and are widely utilized.
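A pure-geometry descriptor of the kind named above can be sketched as ratios of landmark distances normalized by the inter-eye distance, which makes it invariant to translation and uniform scaling. The landmark names and the particular ratios below are illustrative assumptions, not the patent's feature set.

```python
import numpy as np

def geometry_features(landmarks: dict) -> np.ndarray:
    """Build a scale- and translation-invariant feature vector from 2D
    facial landmarks. All distances are divided by the inter-eye
    distance, so uniformly scaling or shifting the face leaves the
    vector unchanged."""
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    eye_dist = np.linalg.norm(p["left_eye"] - p["right_eye"])
    eye_mid = (p["left_eye"] + p["right_eye"]) / 2
    return np.array([
        np.linalg.norm(eye_mid - p["nose"]) / eye_dist,   # eyes-to-nose
        np.linalg.norm(p["nose"] - p["mouth"]) / eye_dist,  # nose-to-mouth
        np.linalg.norm(eye_mid - p["mouth"]) / eye_dist,  # eyes-to-mouth
    ])
```

Two faces would then be compared by the distance between their feature vectors; choosing which ratios to include is exactly the feature-selection problem noted above.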
- Hybrid methods are very complicated because they use the entire face area together with the location characteristics of its features to recognize a face, but their recognition rate is far superior to that of the holistic matching and feature-based matching methods.
- Hybrid methods include linear feature analysis (LFA), shape-normalization, and component-based methods.
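The holistic PCA route listed above can be sketched end to end: project the gallery of known faces into a low-dimensional eigenspace and identify a probe by its nearest neighbor there. This is a schematic example on synthetic vectors under that assumption, not the patent's recognizer.

```python
import numpy as np

def pca_project(gallery: np.ndarray, k: int):
    """Fit a k-dimensional eigenspace to the gallery (n_faces, n_pixels)
    and return the mean, the basis, and the gallery coordinates."""
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    basis = vt[:k]
    return mean, basis, (gallery - mean) @ basis.T

def recognize(probe, mean, basis, gallery_coords, labels):
    """Return the label of the nearest gallery face in eigenspace."""
    q = (probe - mean) @ basis.T
    dists = np.linalg.norm(gallery_coords - q, axis=1)
    return labels[int(np.argmin(dists))]
```

Because the whole face area is compressed into a few coordinates, fine detail is discarded, which is the weakness of holistic matching noted above.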
- FIG. 8 is a diagram illustrating the configuration and operation (Req/Resp) sequence of a SIP service.
- FIG. 9 is a diagram illustrating the basic message and status code scheme of SIP.
- FIGS. 10 and 11 show the basic SIP protocol stack and call setup.
- SIP is a protocol for managing sessions or calls in multimedia communication, and is a technique that focuses on multimedia communication management through signaling rather than multimedia data transmission itself.
- Table 3 summarizes the components of SIP service and its main functions.
- a caller sends an INVITE request message for creating a session to a callee. These messages go through several SIP servers to be delivered to the receiver.
- the receiving proxy server parses the message to identify the recipient and delivers the message to the appropriate proxy server or to the recipient's user agent (UA).
- the receiver receiving the INVITE message sends a response message to the INVITE message.
- the response message has a status code indicating the result of processing. If the receiver receives and processes the message correctly, it sends a “200 OK” response message to the sender.
- the sender who receives the response sends an ACK request message back to the receiver to confirm that the response message was received correctly.
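The INVITE / 200 OK / ACK exchange described above can be made concrete with a small sketch that composes a minimal INVITE request and extracts the status code of a response. The host names, tag, and branch values are placeholder assumptions; a real SIP stack adds an SDP body and many more headers.

```python
def build_invite(caller: str, callee: str, call_id: str,
                 cseq: int = 1) -> str:
    """Compose a minimal SIP INVITE request (headers only, no SDP body,
    CRLF line endings as required by SIP)."""
    return (
        f"INVITE sip:{callee} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK-1\r\n"
        f"From: <sip:{caller}>;tag=a1\r\n"
        f"To: <sip:{callee}>\r\n"
        f"Call-ID: {call_id}\r\n"
        f"CSeq: {cseq} INVITE\r\n"
        f"Content-Length: 0\r\n"
        f"\r\n"
    )

def parse_status(response: str) -> int:
    """Extract the numeric status code from a SIP response start line,
    e.g. 'SIP/2.0 200 OK' -> 200."""
    start_line = response.split("\r\n", 1)[0]
    version, code, _reason = start_line.split(" ", 2)
    assert version == "SIP/2.0"
    return int(code)
```

On receiving a parsed status of 200, the caller would then emit the ACK request that completes the three-way handshake.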
- the wired/wireless convergence service environment is one in which terminal mobility is generalized: rather than simply attaching to a single access network as in existing communication, terminals are evolving toward a mobile environment between heterogeneous networks, in which access networks are selected and connected on the basis of criteria such as service quality and user preference. Therefore, for a terminal to connect across heterogeneous networks, mobility support technology between heterogeneous networks is required, and a function for this technology must be mounted in the terminal.
- a multi-mode terminal having a plurality of communication interfaces for the access networks to which it can connect is therefore required, and the need for such multi-mode terminals is growing.
- the current approach takes the form of changing the communication mode to connect to a heterogeneous network, which requires resetting the terminal's power and services.
- an automatic access control technique between heterogeneous networks is therefore required for multi-mode terminals, in which handover between heterogeneous networks is controlled automatically without user configuration or service disconnection.
- FIG. 12 is a conceptual diagram of a content matching system according to an embodiment of the present invention.
- through an avatar video call program included in each user's terminal, the user's face and emotion are recognized during a video call, and an avatar is presented to the counterpart.
- the matching facial expressions and emoticons are composited on the screen and transmitted.
- face and emotion recognition is performed continuously during the call, enabling a more effective and enjoyable video call with the other party through avatar matching and emoticon matching based on the recognized emotion or facial expression.
- the basic video call is established as a SIP-based video conference and transmits and receives data using RTP/RTCP.
- a switch to HTTP streaming may also be considered.
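Media transport over RTP/RTCP, as mentioned above, rides on the 12-byte fixed RTP header defined in RFC 3550; the sketch below packs and unpacks that header. Payload type 96 in the usage is a typical dynamic-range value chosen here only for illustration.

```python
import struct

def pack_rtp_header(payload_type: int, seq: int, timestamp: int,
                    ssrc: int, marker: bool = False) -> bytes:
    """Pack the 12-byte RTP fixed header (RFC 3550): version 2,
    no padding, no extension, zero CSRC entries."""
    byte0 = 2 << 6                                  # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)

def unpack_rtp_header(data: bytes) -> dict:
    """Inverse of pack_rtp_header for the fixed 12-byte header."""
    byte0, byte1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {"version": byte0 >> 6,
            "marker": bool(byte1 >> 7),
            "payload_type": byte1 & 0x7F,
            "seq": seq, "timestamp": ts, "ssrc": ssrc}
```

The encoded video frames from the matching pipeline would be carried as the payload following this header, with RTCP reports flowing on a companion channel.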
- FIG. 13 is a diagram illustrating a basic operation procedure for image registration according to the present invention.
- an avatar and a video are matched.
- an image is input through the camera module of the smartphone (terminal), and image preprocessing is performed for face recognition, facial expression recognition, and emotion recognition.
- it extracts possible face candidates and analyzes the components of the face to extract information for posture estimation and emotion recognition.
- the facial expression and motion of the avatar are selected, its position in 3D space is calculated, and it is matched with the video and displayed on the screen. This image is also encoded and sent over the network to the remote video call smartphone.
- step (1) captures a frame image to be analyzed from the video data source of the camera module.
- step (2) preprocesses the captured image into a state that is easy to analyze, so that the boundaries between objects in the image can be grasped by an edge detection algorithm or the like.
- step (3) detects a face from the preprocessed image, analyzes facial feature points by applying a learning algorithm, and extracts the position and relationship data of the facial components on the image. Step (4) estimates the pose and recognizes the facial expression based on the recognition information extracted in the face detection step, and selects the avatar pose and facial expression corresponding to those of the detected face. Next, step (5) determines the avatar (face) position coordinates in 3D space from the analyzed information, selects an avatar animation for the expression and emotion, and transmits a control signal (message) to the 3D engine.
- in step (6), the 3D space in which the avatar image and video are represented is composed, and the analyzed avatar is placed in the corresponding position (the avatar and the 3D space are controlled through the 3D engine API).
- in step (7), the avatar and the video source to be represented are matched into a single image source, and in step (8) the video is encoded together with the voice source.
- the audio source is extracted from the video source and processed.
- the encoded stream is sent over the network to the other terminal configured for the video call.
- the session is set up through SIP, and the media is transmitted over the Internet through RTP/RTCP.
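Steps (1) through (8) above can be summarized as a schematic pipeline. Every stage below is a stub with toy logic (the "preprocessing" is a one-line gradient edge map and the "face detector" merely averages edge positions); the sketch shows the data flow of the procedure, not the invention's actual recognition, matching, or encoding algorithms.

```python
import numpy as np

def capture_frame(camera):                      # step (1): grab a frame
    return camera()

def preprocess(frame):                          # step (2): toy edge map
    gx = np.abs(np.diff(frame.astype(float), axis=1))
    return gx > gx.mean()

def detect_face(edges):                         # step (3): stub detector
    ys, xs = np.nonzero(edges)
    return {"center": (int(ys.mean()), int(xs.mean()))} if len(xs) else None

def estimate_pose_expression(face):             # step (4): stub estimate
    return {"pose": "frontal", "expression": "smile"}

def select_avatar_animation(state):             # step (5): control message
    return {"anim": f"{state['expression']}_{state['pose']}", "pos": None}

def compose_and_match(frame, face, command):    # steps (6)-(7): place avatar
    command["pos"] = face["center"]
    return {"frame": frame, "overlay": command}

def encode(matched):                            # step (8): stub encoder
    return ("encoded", matched["overlay"]["anim"], matched["overlay"]["pos"])

def pipeline(camera):
    """Run steps (1)-(8) for one frame; returns None if no face found."""
    frame = capture_frame(camera)
    face = detect_face(preprocess(frame))
    if face is None:
        return None
    state = estimate_pose_expression(face)
    command = select_avatar_animation(state)
    return encode(compose_and_match(frame, face, command))
```

The tuple returned by the final stage stands in for the stream that would then be handed to the SIP/RTP transport described above.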
- FIG. 15 illustrates an avatar video communication operation process through emotion recognition and image registration of FIG. 14 using actual content.
- face tracking and recognition of the eyes, nose, and mouth are performed on the captured image; standard analysis and relationship technology, emotional inference, and real-time matching are then applied to match a virtual model to the real-world face.
- in this way, a real-time avatar is implemented on the user's face.
- as described above, the apparatus and method for the emotion content service of a communication terminal device, the apparatus and method for emotion recognition therefor, and the apparatus and method for generating and matching emotion content using the same are based on voice recognition; on the face region detection, face region normalization, and face region feature extraction technologies of object face recognition; and on the facial component (expression analysis) relationship, hand gesture recognition, and behavior recognition technologies of object emotion recognition.
- on this basis, real-time matching technology for real and virtual images matches mixed virtual objects (including characters) onto the faces and bodies of both parties making a video call, through gesture and facial expression analysis.
- in this way, a mixed reality that cannot be seen in the real world is realized through the video call.
- in the apparatus and method for the emotion content service of a communication terminal device, specific voices, facial expressions, and body gestures are registered, and when similar voices, facial expressions, or gestures are transmitted through the video, the virtual objects that respond to them are matched onto the face and body in real time, making video calls more enjoyable.
- an apparatus and method for emotion content service of a communication terminal device may be expressed through a mobile device.
- as a system that recognizes gestures and expresses them through avatars, the invention can become a foundation for a leap forward in cutting-edge video industries such as domestic film, animation, and cyber characters; by making more natural the process of conveying human emotions and facial expressions to a third party (an avatar) and expressing 3D virtual objects on real-world faces, it will greatly contribute to the competitiveness of the domestic mobile content and video content industries.
- FIG. 16 is a diagram illustrating various fields to which the present invention is applicable. With the spread of smart devices, the protagonist of a movie and the viewer may communicate in the future: by recognizing the viewer through a camera installed on the upper part of the device and showing appropriate responses and gestures, the invention can be applied to a high-tech cultural industry that provides diversity, in which people watch the same video but have different experiences.
- the present invention can be applied to script-based expression and gesture technology used in movies, animation, and cyber characters; interface design using users' emotional reactions; face recognition for security and surveillance; and measurement of consumers' emotional responses to products and designs, so its fields of application are endless. Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined by the scope of the following claims and their equivalents.
Abstract
The present invention relates to augmented reality, which provides a user with data mixed from a real environment and a virtual environment. More particularly, the present invention relates to an apparatus and method for emotional content services on telecommunication devices, an apparatus and method for emotion recognition therefor, and an apparatus and method for generating and matching emotional content using the same, which can use a real-time matching technique for matching real images and virtual images on the basis of voice recognition, a facial feature extraction technique, a facial normalization technique, a facial detection technique for a facial recognition technique for object recognition, a facial feature relationship (expression analysis) technique for an emotion recognition technique for object recognition, a hand motion recognition technique for object recognition, and a motion and behavior recognition technique for object recognition, and which can match mixed virtual objects (including characters) with the faces and bodies of both parties carrying out video communication using gesture and expression analyses, so as to implement various mixed realities, which cannot be seen in the real world, for video communication.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2011-0083435 | 2011-08-22 | ||
KR1020110083435A KR20130022434A (ko) | 2011-08-22 | 2011-08-22 | 통신단말장치의 감정 컨텐츠 서비스 장치 및 방법, 이를 위한 감정 인지 장치 및 방법, 이를 이용한 감정 컨텐츠를 생성하고 정합하는 장치 및 방법 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013027893A1 true WO2013027893A1 (fr) | 2013-02-28 |
Family
ID=47746615
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2011/008399 WO2013027893A1 (fr) | 2011-08-22 | 2011-11-07 | Appareil et procédé pour des services de contenu émotionnel sur des dispositifs de télécommunication, appareil et procédé pour une reconnaissance d'émotion pour ceux-ci, et appareil et procédé pour générer et mettre en correspondance le contenu émotionnel à l'aide de ceux-ci |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20130022434A (fr) |
WO (1) | WO2013027893A1 (fr) |
Cited By (179)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014182052A1 (fr) * | 2013-05-09 | 2014-11-13 | Samsung Electronics Co., Ltd. | Procédé et appareil de fourniture de contenus comprenant des informations de réalité augmentée |
US9269374B1 (en) | 2014-10-27 | 2016-02-23 | Mattersight Corporation | Predictive video analytics system and methods |
WO2016077578A1 (fr) * | 2014-11-13 | 2016-05-19 | Intel Corporation | Système et procédé pour authentification par traits distinctifs |
KR101652486B1 (ko) * | 2015-04-05 | 2016-08-30 | 주식회사 큐버 | 멀티모달 다중 에이전트 기반의 감정 통신 시스템 |
CN106127828A (zh) * | 2016-06-28 | 2016-11-16 | 广东欧珀移动通信有限公司 | 一种增强现实的处理方法、装置及移动终端 |
CN106127829A (zh) * | 2016-06-28 | 2016-11-16 | 广东欧珀移动通信有限公司 | 一种增强现实的处理方法、装置及终端 |
CN106157262A (zh) * | 2016-06-28 | 2016-11-23 | 广东欧珀移动通信有限公司 | 一种增强现实的处理方法、装置和移动终端 |
CN106157363A (zh) * | 2016-06-28 | 2016-11-23 | 广东欧珀移动通信有限公司 | 一种基于增强现实的拍照方法、装置和移动终端 |
KR101743763B1 (ko) * | 2015-06-29 | 2017-06-05 | (주)참빛솔루션 | 감성 아바타 이모티콘 기반의 스마트 러닝 학습 제공 방법, 그리고 이를 구현하기 위한 스마트 러닝 학습 단말장치 |
GB2529037B (en) * | 2014-06-10 | 2018-05-23 | 2Mee Ltd | Augmented reality apparatus and method |
US10043406B1 (en) | 2017-03-10 | 2018-08-07 | Intel Corporation | Augmented emotion display for austistic persons |
CN108830917A (zh) * | 2018-05-29 | 2018-11-16 | 努比亚技术有限公司 | 一种信息生成方法、终端及计算机可读存储介质 |
CN108961431A (zh) * | 2018-07-03 | 2018-12-07 | 百度在线网络技术(北京)有限公司 | 人物表情的生成方法、装置及终端设备 |
CN109727303A (zh) * | 2018-12-29 | 2019-05-07 | 广州华多网络科技有限公司 | 视频展示方法、系统、计算机设备、存储介质和终端 |
CN109840009A (zh) * | 2017-11-28 | 2019-06-04 | 浙江思考者科技有限公司 | 一种智能真人广告屏交互系统及实现方法 |
CN110046336A (zh) * | 2019-04-15 | 2019-07-23 | 南京孜博汇信息科技有限公司 | 位置编码表单处理方法及系统 |
WO2019204464A1 (fr) * | 2018-04-18 | 2019-10-24 | Snap Inc. | Système d'expression augmentée |
CN110431838A (zh) * | 2017-03-22 | 2019-11-08 | 韩国斯诺有限公司 | 提供人脸识别摄像机的动态内容的方法及系统 |
CN110705356A (zh) * | 2019-08-31 | 2020-01-17 | 深圳市大拿科技有限公司 | 功能控制方法及相关设备 |
US10554698B2 (en) | 2017-12-28 | 2020-02-04 | Hyperconnect, Inc. | Terminal and server providing video call service |
CN110874137A (zh) * | 2018-08-31 | 2020-03-10 | 阿里巴巴集团控股有限公司 | 一种交互方法以及装置 |
CN111183455A (zh) * | 2017-08-29 | 2020-05-19 | 互曼人工智能科技(上海)有限公司 | 图像数据处理系统与方法 |
CN111191564A (zh) * | 2019-12-26 | 2020-05-22 | 三盟科技股份有限公司 | 基于多角度神经网络的多姿态人脸情绪识别方法及系统 |
CN111353842A (zh) * | 2018-12-24 | 2020-06-30 | 阿里巴巴集团控股有限公司 | 推送信息的处理方法和系统 |
CN111773676A (zh) * | 2020-07-23 | 2020-10-16 | 网易(杭州)网络有限公司 | 确定虚拟角色动作的方法及装置 |
CN111918015A (zh) * | 2019-05-07 | 2020-11-10 | 阿瓦亚公司 | 基于人工智能确定的面部情绪的视频通话路由和管理 |
US10848446B1 (en) | 2016-07-19 | 2020-11-24 | Snap Inc. | Displaying customized electronic messaging graphics |
US10852918B1 (en) | 2019-03-08 | 2020-12-01 | Snap Inc. | Contextual information in chat |
US10861170B1 (en) | 2018-11-30 | 2020-12-08 | Snap Inc. | Efficient human pose tracking in videos |
US10872451B2 (en) | 2018-10-31 | 2020-12-22 | Snap Inc. | 3D avatar rendering |
US10880246B2 (en) | 2016-10-24 | 2020-12-29 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
CN112215929A (zh) * | 2020-10-10 | 2021-01-12 | 珠海格力电器股份有限公司 | 一种虚拟社交的数据处理方法、装置及系统 |
US10893385B1 (en) | 2019-06-07 | 2021-01-12 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US10895964B1 (en) | 2018-09-25 | 2021-01-19 | Snap Inc. | Interface to display shared user groups |
US10896534B1 (en) | 2018-09-19 | 2021-01-19 | Snap Inc. | Avatar style transformation using neural networks |
US10902661B1 (en) | 2018-11-28 | 2021-01-26 | Snap Inc. | Dynamic composite user identifier |
US10904181B2 (en) | 2018-09-28 | 2021-01-26 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US10911387B1 (en) | 2019-08-12 | 2021-02-02 | Snap Inc. | Message reminder interface |
US10936066B1 (en) | 2019-02-13 | 2021-03-02 | Snap Inc. | Sleep detection in a location sharing system |
US10936157B2 (en) | 2017-11-29 | 2021-03-02 | Snap Inc. | Selectable item including a customized graphic for an electronic messaging application |
US10939246B1 (en) | 2019-01-16 | 2021-03-02 | Snap Inc. | Location-based context information sharing in a messaging system |
US10949648B1 (en) | 2018-01-23 | 2021-03-16 | Snap Inc. | Region-based stabilized face tracking |
US10952013B1 (en) | 2017-04-27 | 2021-03-16 | Snap Inc. | Selective location-based identity communication |
US10951562B2 (en) | 2017-01-18 | 2021-03-16 | Snap. Inc. | Customized contextual media content item generation |
US10964082B2 (en) | 2019-02-26 | 2021-03-30 | Snap Inc. | Avatar based on weather |
US10963529B1 (en) | 2017-04-27 | 2021-03-30 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US10979752B1 (en) | 2018-02-28 | 2021-04-13 | Snap Inc. | Generating media content items based on location information |
USD916872S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
US10984569B2 (en) | 2016-06-30 | 2021-04-20 | Snap Inc. | Avatar based ideogram generation |
US10984575B2 (en) | 2019-02-06 | 2021-04-20 | Snap Inc. | Body pose estimation |
USD916871S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916811S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916810S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916809S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
US10992619B2 (en) | 2019-04-30 | 2021-04-27 | Snap Inc. | Messaging system with avatar generation |
US10991395B1 (en) | 2014-02-05 | 2021-04-27 | Snap Inc. | Method for real time video processing involving changing a color of an object on a human face in a video |
US11010022B2 (en) | 2019-02-06 | 2021-05-18 | Snap Inc. | Global event-based avatar |
US11032670B1 (en) | 2019-01-14 | 2021-06-08 | Snap Inc. | Destination sharing in location sharing system |
US11030813B2 (en) | 2018-08-30 | 2021-06-08 | Snap Inc. | Video clip object tracking |
US11030789B2 (en) | 2017-10-30 | 2021-06-08 | Snap Inc. | Animated chat presence |
US11036781B1 (en) | 2020-01-30 | 2021-06-15 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11039270B2 (en) | 2019-03-28 | 2021-06-15 | Snap Inc. | Points of interest in a location sharing system |
US11036989B1 (en) | 2019-12-11 | 2021-06-15 | Snap Inc. | Skeletal tracking using previous frames |
WO2021114710A1 (fr) * | 2019-12-09 | 2021-06-17 | 上海幻电信息科技有限公司 | Procédé et appareil d'interaction vidéo de diffusion en continu en direct, et dispositif informatique |
CN113014471A (zh) * | 2021-01-18 | 2021-06-22 | 腾讯科技(深圳)有限公司 | 会话处理方法,装置、终端和存储介质 |
US11048916B2 (en) | 2016-03-31 | 2021-06-29 | Snap Inc. | Automated avatar generation |
US11055514B1 (en) | 2018-12-14 | 2021-07-06 | Snap Inc. | Image face manipulation |
CN113099150A (zh) * | 2020-01-08 | 2021-07-09 | 华为技术有限公司 | 图像处理的方法、设备及系统 |
US11063891B2 (en) | 2019-12-03 | 2021-07-13 | Snap Inc. | Personalized avatar notification |
US11069103B1 (en) | 2017-04-20 | 2021-07-20 | Snap Inc. | Customized user interface for electronic communications |
US11074675B2 (en) | 2018-07-31 | 2021-07-27 | Snap Inc. | Eye texture inpainting |
US11080917B2 (en) | 2019-09-30 | 2021-08-03 | Snap Inc. | Dynamic parameterized user avatar stories |
US11100311B2 (en) | 2016-10-19 | 2021-08-24 | Snap Inc. | Neural networks for facial modeling |
US11103795B1 (en) | 2018-10-31 | 2021-08-31 | Snap Inc. | Game drawer |
US11122094B2 (en) | 2017-07-28 | 2021-09-14 | Snap Inc. | Software application manager for messaging applications |
US11120597B2 (en) | 2017-10-26 | 2021-09-14 | Snap Inc. | Joint audio-video facial animation system |
US11120601B2 (en) | 2018-02-28 | 2021-09-14 | Snap Inc. | Animated expressive icon |
US11128715B1 (en) | 2019-12-30 | 2021-09-21 | Snap Inc. | Physical friend proximity in chat |
US11128586B2 (en) | 2019-12-09 | 2021-09-21 | Snap Inc. | Context sensitive avatar captions |
US11140515B1 (en) | 2019-12-30 | 2021-10-05 | Snap Inc. | Interfaces for relative device positioning |
US11166123B1 (en) | 2019-03-28 | 2021-11-02 | Snap Inc. | Grouped transmission of location data in a location sharing system |
US11169658B2 (en) | 2019-12-31 | 2021-11-09 | Snap Inc. | Combined map icon with action indicator |
US11176737B2 (en) | 2018-11-27 | 2021-11-16 | Snap Inc. | Textured mesh building |
US11189070B2 (en) | 2018-09-28 | 2021-11-30 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
US11188190B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | Generating animation overlays in a communication session |
US11189098B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | 3D object camera customization system |
US11199957B1 (en) | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11218838B2 (en) | 2019-10-31 | 2022-01-04 | Snap Inc. | Focused map-based context information surfacing |
US11217020B2 (en) | 2020-03-16 | 2022-01-04 | Snap Inc. | 3D cutout image modification |
US11227442B1 (en) | 2019-12-19 | 2022-01-18 | Snap Inc. | 3D captions with semantic graphical elements |
US11229849B2 (en) | 2012-05-08 | 2022-01-25 | Snap Inc. | System and method for generating and displaying avatars |
US11245658B2 (en) | 2018-09-28 | 2022-02-08 | Snap Inc. | System and method of generating private notifications between users in a communication session |
US11263817B1 (en) | 2019-12-19 | 2022-03-01 | Snap Inc. | 3D captions with face tracking |
US11284144B2 (en) | 2020-01-30 | 2022-03-22 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUs |
US11294936B1 (en) | 2019-01-30 | 2022-04-05 | Snap Inc. | Adaptive spatial density based clustering |
US11310176B2 (en) | 2018-04-13 | 2022-04-19 | Snap Inc. | Content suggestion system |
US11307747B2 (en) | 2019-07-11 | 2022-04-19 | Snap Inc. | Edge gesture interface with smart interactions |
US11320969B2 (en) | 2019-09-16 | 2022-05-03 | Snap Inc. | Messaging system with battery level sharing |
US11356720B2 (en) | 2020-01-30 | 2022-06-07 | Snap Inc. | Video generation system to render frames on demand |
CN114630135A (zh) * | 2020-12-11 | 2022-06-14 | 北京字跳网络技术有限公司 | Live streaming interaction method and apparatus |
US11360733B2 (en) | 2020-09-10 | 2022-06-14 | Snap Inc. | Colocated shared augmented reality without shared backend |
WO2022143128A1 (fr) * | 2020-12-29 | 2022-07-07 | 华为技术有限公司 | Avatar-based video call method and apparatus, and terminal |
US11411895B2 (en) | 2017-11-29 | 2022-08-09 | Snap Inc. | Generating aggregated media content items for a group of users in an electronic messaging application |
US11425062B2 (en) | 2019-09-27 | 2022-08-23 | Snap Inc. | Recommended content viewed by friends |
US11425068B2 (en) | 2009-02-03 | 2022-08-23 | Snap Inc. | Interactive avatar in messaging environment |
US11438341B1 (en) | 2016-10-10 | 2022-09-06 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
US11450051B2 (en) | 2020-11-18 | 2022-09-20 | Snap Inc. | Personalized avatar real-time motion capture |
US11455082B2 (en) | 2018-09-28 | 2022-09-27 | Snap Inc. | Collaborative achievement interface |
US11455081B2 (en) | 2019-08-05 | 2022-09-27 | Snap Inc. | Message thread prioritization interface |
US11452939B2 (en) | 2020-09-21 | 2022-09-27 | Snap Inc. | Graphical marker generation system for synchronizing users |
US11460974B1 (en) | 2017-11-28 | 2022-10-04 | Snap Inc. | Content discovery refresh |
US11516173B1 (en) | 2018-12-26 | 2022-11-29 | Snap Inc. | Message composition interface |
US11543939B2 (en) | 2020-06-08 | 2023-01-03 | Snap Inc. | Encoded image based messaging system |
US11544885B2 (en) | 2021-03-19 | 2023-01-03 | Snap Inc. | Augmented reality experience based on physical items |
US11544883B1 (en) | 2017-01-16 | 2023-01-03 | Snap Inc. | Coded vision system |
US11562548B2 (en) | 2021-03-22 | 2023-01-24 | Snap Inc. | True size eyewear in real time |
US11580700B2 (en) | 2016-10-24 | 2023-02-14 | Snap Inc. | Augmented reality object manipulation |
US11580682B1 (en) | 2020-06-30 | 2023-02-14 | Snap Inc. | Messaging system with augmented reality makeup |
US11615592B2 (en) | 2020-10-27 | 2023-03-28 | Snap Inc. | Side-by-side character animation from realtime 3D body motion capture |
US11616745B2 (en) | 2017-01-09 | 2023-03-28 | Snap Inc. | Contextual generation and selection of customized media content |
US11619501B2 (en) | 2020-03-11 | 2023-04-04 | Snap Inc. | Avatar based on trip |
US11625873B2 (en) | 2020-03-30 | 2023-04-11 | Snap Inc. | Personalized media overlay recommendation |
US11636662B2 (en) | 2021-09-30 | 2023-04-25 | Snap Inc. | Body normal network light and rendering control |
US11636654B2 (en) | 2021-05-19 | 2023-04-25 | Snap Inc. | AR-based connected portal shopping |
US11651572B2 (en) | 2021-10-11 | 2023-05-16 | Snap Inc. | Light and rendering of garments |
US11651539B2 (en) | 2020-01-30 | 2023-05-16 | Snap Inc. | System for generating media content items on demand |
US11663792B2 (en) | 2021-09-08 | 2023-05-30 | Snap Inc. | Body fitted accessory with physics simulation |
US11660022B2 (en) | 2020-10-27 | 2023-05-30 | Snap Inc. | Adaptive skeletal joint smoothing |
US11662900B2 (en) | 2016-05-31 | 2023-05-30 | Snap Inc. | Application control using a gesture based trigger |
US11670059B2 (en) | 2021-09-01 | 2023-06-06 | Snap Inc. | Controlling interactive fashion based on body gestures |
US11673054B2 (en) | 2021-09-07 | 2023-06-13 | Snap Inc. | Controlling AR games on fashion items |
US11676199B2 (en) | 2019-06-28 | 2023-06-13 | Snap Inc. | Generating customizable avatar outfits |
US11683280B2 (en) | 2020-06-10 | 2023-06-20 | Snap Inc. | Messaging system including an external-resource dock and drawer |
US11704878B2 (en) | 2017-01-09 | 2023-07-18 | Snap Inc. | Surface aware lens |
US11734894B2 (en) | 2020-11-18 | 2023-08-22 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
US11734866B2 (en) | 2021-09-13 | 2023-08-22 | Snap Inc. | Controlling interactive fashion based on voice |
US11734959B2 (en) | 2021-03-16 | 2023-08-22 | Snap Inc. | Activating hands-free mode on mirroring device |
US11748958B2 (en) | 2021-12-07 | 2023-09-05 | Snap Inc. | Augmented reality unboxing experience |
US11748931B2 (en) | 2020-11-18 | 2023-09-05 | Snap Inc. | Body animation sharing and remixing |
US11763481B2 (en) | 2021-10-20 | 2023-09-19 | Snap Inc. | Mirror-based augmented reality experience |
US11790531B2 (en) | 2021-02-24 | 2023-10-17 | Snap Inc. | Whole body segmentation |
US11790614B2 (en) | 2021-10-11 | 2023-10-17 | Snap Inc. | Inferring intent from pose and speech input |
US11798238B2 (en) | 2021-09-14 | 2023-10-24 | Snap Inc. | Blending body mesh into external mesh |
US11798201B2 (en) | 2021-03-16 | 2023-10-24 | Snap Inc. | Mirroring device with whole-body outfits |
US11809633B2 (en) | 2021-03-16 | 2023-11-07 | Snap Inc. | Mirroring device with pointing based navigation |
US11818286B2 (en) | 2020-03-30 | 2023-11-14 | Snap Inc. | Avatar recommendation and reply |
US11823346B2 (en) | 2022-01-17 | 2023-11-21 | Snap Inc. | AR body part tracking system |
US11830209B2 (en) | 2017-05-26 | 2023-11-28 | Snap Inc. | Neural network-based image stream modification |
US11836862B2 (en) | 2021-10-11 | 2023-12-05 | Snap Inc. | External mesh with vertex attributes |
US11836866B2 (en) | 2021-09-20 | 2023-12-05 | Snap Inc. | Deforming real-world object using an external mesh |
US11842411B2 (en) | 2017-04-27 | 2023-12-12 | Snap Inc. | Location-based virtual avatars |
US11854069B2 (en) | 2021-07-16 | 2023-12-26 | Snap Inc. | Personalized try-on ads |
US11852554B1 (en) | 2019-03-21 | 2023-12-26 | Snap Inc. | Barometer calibration in a location sharing system |
US11863513B2 (en) | 2020-08-31 | 2024-01-02 | Snap Inc. | Media content playback and comments management |
US11870743B1 (en) | 2017-01-23 | 2024-01-09 | Snap Inc. | Customized digital avatar accessories |
US11868414B1 (en) | 2019-03-14 | 2024-01-09 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
US11870745B1 (en) | 2022-06-28 | 2024-01-09 | Snap Inc. | Media gallery sharing and management |
US11880947B2 (en) | 2021-12-21 | 2024-01-23 | Snap Inc. | Real-time upper-body garment exchange |
US11887260B2 (en) | 2021-12-30 | 2024-01-30 | Snap Inc. | AR position indicator |
US11888795B2 (en) | 2020-09-21 | 2024-01-30 | Snap Inc. | Chats with micro sound clips |
US11893166B1 (en) | 2022-11-08 | 2024-02-06 | Snap Inc. | User avatar movement control using an augmented reality eyewear device |
US11900506B2 (en) | 2021-09-09 | 2024-02-13 | Snap Inc. | Controlling interactive fashion based on facial expressions |
US11908083B2 (en) | 2021-08-31 | 2024-02-20 | Snap Inc. | Deforming custom mesh based on body mesh |
US11910269B2 (en) | 2020-09-25 | 2024-02-20 | Snap Inc. | Augmented reality content items including user avatar to share location |
US11908243B2 (en) | 2021-03-16 | 2024-02-20 | Snap Inc. | Menu hierarchy navigation on electronic mirroring devices |
US11922010B2 (en) | 2020-06-08 | 2024-03-05 | Snap Inc. | Providing contextual information with keyboard interface for messaging system |
US11928783B2 (en) | 2021-12-30 | 2024-03-12 | Snap Inc. | AR position and orientation along a plane |
US11941227B2 (en) | 2021-06-30 | 2024-03-26 | Snap Inc. | Hybrid search system for customizable media |
US11956190B2 (en) | 2020-05-08 | 2024-04-09 | Snap Inc. | Messaging system with a carousel of related entities |
US11954762B2 (en) | 2022-01-19 | 2024-04-09 | Snap Inc. | Object replacement system |
US11960784B2 (en) | 2021-12-07 | 2024-04-16 | Snap Inc. | Shared augmented reality unboxing experience |
US11969075B2 (en) | 2020-03-31 | 2024-04-30 | Snap Inc. | Augmented reality beauty product tutorials |
US11978283B2 (en) | 2021-03-16 | 2024-05-07 | Snap Inc. | Mirroring device with a hands-free mode |
US11983826B2 (en) | 2021-09-30 | 2024-05-14 | Snap Inc. | 3D upper garment tracking |
US11983462B2 (en) | 2021-08-31 | 2024-05-14 | Snap Inc. | Conversation guided augmented reality experience |
US11991419B2 (en) | 2020-01-30 | 2024-05-21 | Snap Inc. | Selecting avatars to be included in the video being generated on demand |
US11996113B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Voice notes with changing effects |
US11995757B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Customized animation from video |
US12002175B2 (en) | 2023-06-30 | 2024-06-04 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9516259B2 (en) * | 2013-10-22 | 2016-12-06 | Google Inc. | Capturing media content in accordance with a viewer expression |
CN103945161B (zh) * | 2014-04-14 | 2017-06-27 | 联想(北京)有限公司 | Information processing method and electronic device |
KR101535574B1 (ko) * | 2014-07-18 | 2015-07-10 | 오용운 | System and method for providing social network emoticons using 3D characters |
KR101681501B1 (ko) * | 2016-06-28 | 2016-12-01 | (주) 키글 | Avatar face generation system and method |
KR102616172B1 (ko) * | 2016-08-12 | 2023-12-19 | 주식회사 케이티 | Character providing system and information collection method using the same |
KR102120871B1 (ko) * | 2017-11-08 | 2020-06-09 | 주식회사 하이퍼커넥트 | Terminal and server providing video call service |
WO2019103484A1 (fr) | 2017-11-24 | 2019-05-31 | 주식회사 제네시스랩 | Multimodal emotion recognition device, method, and storage medium using artificial intelligence |
US10681310B2 (en) * | 2018-05-07 | 2020-06-09 | Apple Inc. | Modifying video streams with supplemental content for video conferencing |
US11012389B2 (en) | 2018-05-07 | 2021-05-18 | Apple Inc. | Modifying images with supplemental content for messaging |
KR102647656B1 (ko) * | 2018-09-04 | 2024-03-15 | 삼성전자주식회사 | Electronic device for displaying an additional object in an augmented reality image, and method for driving the electronic device |
KR102611458B1 (ko) * | 2018-09-06 | 2023-12-11 | 주식회사 아이앤나 | Method for augmenting a baby's emotional state using the area around the baby |
KR102648993B1 (ko) * | 2018-12-21 | 2024-03-20 | 삼성전자주식회사 | Electronic device for providing an avatar based on a user's emotional state, and method therefor |
CN109831638B (zh) * | 2019-01-23 | 2021-01-08 | 广州视源电子科技股份有限公司 | Video image transmission method and apparatus, interactive smart tablet, and storage medium |
JP6581742B1 (ja) * | 2019-03-27 | 2019-09-25 | 株式会社ドワンゴ | VR live broadcast distribution system, distribution server, distribution server control method, distribution server program, and data structure of VR live photo data |
KR102236718B1 (ko) * | 2019-07-25 | 2021-04-06 | 주식회사 모두커뮤니케이션 | Service providing apparatus and method for generating personalized objects reflecting emotion |
KR102114457B1 (ko) * | 2019-10-21 | 2020-05-22 | (주)부즈 | Method and apparatus for processing real-time character streaming content |
KR102260022B1 (ko) * | 2020-05-25 | 2021-06-02 | 전남대학교산학협력단 | Deep-learning-based system and method for classifying objects in images |
KR102637373B1 (ko) * | 2021-01-26 | 2024-02-19 | 주식회사 플랫팜 | Emoticon generation apparatus and method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008004844A1 (fr) * | 2006-07-06 | 2008-01-10 | Ktfreetel Co., Ltd. | Method and system for providing a voice analysis service, and apparatus therefor |
KR20080057030A (ko) * | 2006-12-19 | 2008-06-24 | 엘지전자 주식회사 | Video call apparatus and method using emoticons |
KR100868638B1 (ko) * | 2007-08-07 | 2008-11-12 | 에스케이 텔레콤주식회사 | System and method for providing speech bubbles during a video call |
KR20110025721A (ko) * | 2009-09-05 | 2011-03-11 | 에스케이텔레콤 주식회사 | System and method for conveying emotion during a video call |
US20110122219A1 (en) * | 2009-11-23 | 2011-05-26 | Samsung Electronics Co. Ltd. | Method and apparatus for video call in a mobile terminal |
2011
- 2011-08-22 KR KR1020110083435A patent/KR20130022434A/ko not_active Application Discontinuation
- 2011-11-07 WO PCT/KR2011/008399 patent/WO2013027893A1/fr active Application Filing
Cited By (292)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11425068B2 (en) | 2009-02-03 | 2022-08-23 | Snap Inc. | Interactive avatar in messaging environment |
US11925869B2 (en) | 2012-05-08 | 2024-03-12 | Snap Inc. | System and method for generating and displaying avatars |
US11607616B2 (en) | 2012-05-08 | 2023-03-21 | Snap Inc. | System and method for generating and displaying avatars |
US11229849B2 (en) | 2012-05-08 | 2022-01-25 | Snap Inc. | System and method for generating and displaying avatars |
US9710970B2 (en) | 2013-05-09 | 2017-07-18 | Samsung Electronics Co., Ltd. | Method and apparatus for providing contents including augmented reality information |
WO2014182052A1 (fr) * | 2013-05-09 | 2014-11-13 | Samsung Electronics Co., Ltd. | Method and apparatus for providing contents including augmented reality information |
US11651797B2 (en) | 2014-02-05 | 2023-05-16 | Snap Inc. | Real time video processing for changing proportions of an object in the video |
US10991395B1 (en) | 2014-02-05 | 2021-04-27 | Snap Inc. | Method for real time video processing involving changing a color of an object on a human face in a video |
US11443772B2 (en) | 2014-02-05 | 2022-09-13 | Snap Inc. | Method for triggering events in a video |
GB2529037B (en) * | 2014-06-10 | 2018-05-23 | 2Mee Ltd | Augmented reality apparatus and method |
US10262195B2 (en) | 2014-10-27 | 2019-04-16 | Mattersight Corporation | Predictive and responsive video analytics system and methods |
US9437215B2 (en) | 2014-10-27 | 2016-09-06 | Mattersight Corporation | Predictive video analytics system and methods |
US9269374B1 (en) | 2014-10-27 | 2016-02-23 | Mattersight Corporation | Predictive video analytics system and methods |
WO2016077578A1 (fr) * | 2014-11-13 | 2016-05-19 | Intel Corporation | System and method for feature-based authentication |
US9811649B2 (en) | 2014-11-13 | 2017-11-07 | Intel Corporation | System and method for feature-based authentication |
KR101652486B1 (ko) * | 2015-04-05 | 2016-08-30 | 주식회사 큐버 | Multimodal multi-agent based emotion communication system |
WO2016163565A1 (fr) * | 2015-04-05 | 2016-10-13 | 한신대학교 산학협력단 | Emotional communication system based on multiple multimodal agents |
KR101743763B1 (ko) * | 2015-06-29 | 2017-06-05 | (주)참빛솔루션 | Method for providing smart learning based on emotional avatar emoticons, and smart learning terminal device for implementing the same |
US11048916B2 (en) | 2016-03-31 | 2021-06-29 | Snap Inc. | Automated avatar generation |
US11631276B2 (en) | 2016-03-31 | 2023-04-18 | Snap Inc. | Automated avatar generation |
US11662900B2 (en) | 2016-05-31 | 2023-05-30 | Snap Inc. | Application control using a gesture based trigger |
CN106157363A (zh) * | 2016-06-28 | 2016-11-23 | 广东欧珀移动通信有限公司 | Augmented-reality-based photographing method, apparatus, and mobile terminal |
CN106157262A (zh) * | 2016-06-28 | 2016-11-23 | 广东欧珀移动通信有限公司 | Augmented reality processing method, apparatus, and mobile terminal |
CN106127829A (zh) * | 2016-06-28 | 2016-11-16 | 广东欧珀移动通信有限公司 | Augmented reality processing method, apparatus, and terminal |
CN106157262B (zh) * | 2016-06-28 | 2020-04-17 | Oppo广东移动通信有限公司 | Augmented reality processing method, apparatus, and mobile terminal |
CN106127829B (zh) * | 2016-06-28 | 2020-06-30 | Oppo广东移动通信有限公司 | Augmented reality processing method, apparatus, and terminal |
CN106127828A (zh) * | 2016-06-28 | 2016-11-16 | 广东欧珀移动通信有限公司 | Augmented reality processing method, apparatus, and mobile terminal |
US10984569B2 (en) | 2016-06-30 | 2021-04-20 | Snap Inc. | Avatar based ideogram generation |
US10855632B2 (en) | 2016-07-19 | 2020-12-01 | Snap Inc. | Displaying customized electronic messaging graphics |
US11418470B2 (en) | 2016-07-19 | 2022-08-16 | Snap Inc. | Displaying customized electronic messaging graphics |
US11509615B2 (en) | 2016-07-19 | 2022-11-22 | Snap Inc. | Generating customized electronic messaging graphics |
US11438288B2 (en) | 2016-07-19 | 2022-09-06 | Snap Inc. | Displaying customized electronic messaging graphics |
US10848446B1 (en) | 2016-07-19 | 2020-11-24 | Snap Inc. | Displaying customized electronic messaging graphics |
US11962598B2 (en) | 2016-10-10 | 2024-04-16 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
US11438341B1 (en) | 2016-10-10 | 2022-09-06 | Snap Inc. | Social media post subscribe requests for buffer user accounts |
US11100311B2 (en) | 2016-10-19 | 2021-08-24 | Snap Inc. | Neural networks for facial modeling |
US11218433B2 (en) | 2016-10-24 | 2022-01-04 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
US11580700B2 (en) | 2016-10-24 | 2023-02-14 | Snap Inc. | Augmented reality object manipulation |
US10880246B2 (en) | 2016-10-24 | 2020-12-29 | Snap Inc. | Generating and displaying customized avatars in electronic messages |
US11876762B1 (en) | 2016-10-24 | 2024-01-16 | Snap Inc. | Generating and displaying customized avatars in media overlays |
US11843456B2 (en) | 2016-10-24 | 2023-12-12 | Snap Inc. | Generating and displaying customized avatars in media overlays |
US10938758B2 (en) | 2016-10-24 | 2021-03-02 | Snap Inc. | Generating and displaying customized avatars in media overlays |
US11704878B2 (en) | 2017-01-09 | 2023-07-18 | Snap Inc. | Surface aware lens |
US11616745B2 (en) | 2017-01-09 | 2023-03-28 | Snap Inc. | Contextual generation and selection of customized media content |
US11989809B2 (en) | 2017-01-16 | 2024-05-21 | Snap Inc. | Coded vision system |
US11544883B1 (en) | 2017-01-16 | 2023-01-03 | Snap Inc. | Coded vision system |
US11991130B2 (en) | 2017-01-18 | 2024-05-21 | Snap Inc. | Customized contextual media content item generation |
US10951562B2 (en) | 2017-01-18 | 2021-03-16 | Snap Inc. | Customized contextual media content item generation |
US11870743B1 (en) | 2017-01-23 | 2024-01-09 | Snap Inc. | Customized digital avatar accessories |
US10043406B1 (en) | 2017-03-10 | 2018-08-07 | Intel Corporation | Augmented emotion display for austistic persons |
CN110431838A (zh) * | 2017-03-22 | 2019-11-08 | 韩国斯诺有限公司 | Method and system for providing dynamic content of a face recognition camera |
CN110431838B (zh) * | 2017-03-22 | 2022-03-29 | 韩国斯诺有限公司 | Method and system for providing dynamic content of a face recognition camera |
US11069103B1 (en) | 2017-04-20 | 2021-07-20 | Snap Inc. | Customized user interface for electronic communications |
US11593980B2 (en) | 2017-04-20 | 2023-02-28 | Snap Inc. | Customized user interface for electronic communications |
US10963529B1 (en) | 2017-04-27 | 2021-03-30 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US11385763B2 (en) | 2017-04-27 | 2022-07-12 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
US11893647B2 (en) | 2017-04-27 | 2024-02-06 | Snap Inc. | Location-based virtual avatars |
US10952013B1 (en) | 2017-04-27 | 2021-03-16 | Snap Inc. | Selective location-based identity communication |
US11842411B2 (en) | 2017-04-27 | 2023-12-12 | Snap Inc. | Location-based virtual avatars |
US11995288B2 (en) | 2017-04-27 | 2024-05-28 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US11451956B1 (en) | 2017-04-27 | 2022-09-20 | Snap Inc. | Location privacy management on map-based social media platforms |
US11418906B2 (en) | 2017-04-27 | 2022-08-16 | Snap Inc. | Selective location-based identity communication |
US11474663B2 (en) | 2017-04-27 | 2022-10-18 | Snap Inc. | Location-based search mechanism in a graphical user interface |
US11392264B1 (en) | 2017-04-27 | 2022-07-19 | Snap Inc. | Map-based graphical user interface for multi-type social media galleries |
US11782574B2 (en) | 2017-04-27 | 2023-10-10 | Snap Inc. | Map-based graphical user interface indicating geospatial activity metrics |
US11830209B2 (en) | 2017-05-26 | 2023-11-28 | Snap Inc. | Neural network-based image stream modification |
US11122094B2 (en) | 2017-07-28 | 2021-09-14 | Snap Inc. | Software application manager for messaging applications |
US11882162B2 (en) | 2017-07-28 | 2024-01-23 | Snap Inc. | Software application manager for messaging applications |
US11659014B2 (en) | 2017-07-28 | 2023-05-23 | Snap Inc. | Software application manager for messaging applications |
CN111183455A (zh) * | 2017-08-29 | 2020-05-19 | 互曼人工智能科技(上海)有限公司 | Image data processing system and method |
US11120597B2 (en) | 2017-10-26 | 2021-09-14 | Snap Inc. | Joint audio-video facial animation system |
US11610354B2 (en) | 2017-10-26 | 2023-03-21 | Snap Inc. | Joint audio-video facial animation system |
US11030789B2 (en) | 2017-10-30 | 2021-06-08 | Snap Inc. | Animated chat presence |
US11354843B2 (en) | 2017-10-30 | 2022-06-07 | Snap Inc. | Animated chat presence |
US11930055B2 (en) | 2017-10-30 | 2024-03-12 | Snap Inc. | Animated chat presence |
US11706267B2 (en) | 2017-10-30 | 2023-07-18 | Snap Inc. | Animated chat presence |
CN109840009A (zh) * | 2017-11-28 | 2019-06-04 | 浙江思考者科技有限公司 | Intelligent live-person advertising screen interaction system and implementation method |
US11460974B1 (en) | 2017-11-28 | 2022-10-04 | Snap Inc. | Content discovery refresh |
US11411895B2 (en) | 2017-11-29 | 2022-08-09 | Snap Inc. | Generating aggregated media content items for a group of users in an electronic messaging application |
US10936157B2 (en) | 2017-11-29 | 2021-03-02 | Snap Inc. | Selectable item including a customized graphic for an electronic messaging application |
US10554698B2 (en) | 2017-12-28 | 2020-02-04 | Hyperconnect, Inc. | Terminal and server providing video call service |
US11769259B2 (en) | 2018-01-23 | 2023-09-26 | Snap Inc. | Region-based stabilized face tracking |
US10949648B1 (en) | 2018-01-23 | 2021-03-16 | Snap Inc. | Region-based stabilized face tracking |
US11468618B2 (en) | 2018-02-28 | 2022-10-11 | Snap Inc. | Animated expressive icon |
US11523159B2 (en) | 2018-02-28 | 2022-12-06 | Snap Inc. | Generating media content items based on location information |
US11688119B2 (en) | 2018-02-28 | 2023-06-27 | Snap Inc. | Animated expressive icon |
US11880923B2 (en) | 2018-02-28 | 2024-01-23 | Snap Inc. | Animated expressive icon |
US11120601B2 (en) | 2018-02-28 | 2021-09-14 | Snap Inc. | Animated expressive icon |
US10979752B1 (en) | 2018-02-28 | 2021-04-13 | Snap Inc. | Generating media content items based on location information |
US11310176B2 (en) | 2018-04-13 | 2022-04-19 | Snap Inc. | Content suggestion system |
KR20240027845 (ko) | 2018-04-18 | Augmented expression system |
US10719968B2 (en) | 2018-04-18 | 2020-07-21 | Snap Inc. | Augmented expression system |
US11875439B2 (en) | 2018-04-18 | 2024-01-16 | Snap Inc. | Augmented expression system |
WO2019204464A1 (fr) * | 2018-04-18 | 2019-10-24 | Snap Inc. | Augmented expression system |
CN108830917B (zh) * | 2018-05-29 | 2023-04-18 | 努比亚技术有限公司 | Information generation method, terminal, and computer-readable storage medium |
CN108830917A (zh) * | 2018-05-29 | 2018-11-16 | 努比亚技术有限公司 | Information generation method, terminal, and computer-readable storage medium |
CN108961431A (zh) * | 2018-07-03 | 2018-12-07 | 百度在线网络技术(北京)有限公司 | Method, apparatus, and terminal device for generating character facial expressions |
US11074675B2 (en) | 2018-07-31 | 2021-07-27 | Snap Inc. | Eye texture inpainting |
US11030813B2 (en) | 2018-08-30 | 2021-06-08 | Snap Inc. | Video clip object tracking |
US11715268B2 (en) | 2018-08-30 | 2023-08-01 | Snap Inc. | Video clip object tracking |
CN110874137A (zh) * | 2018-08-31 | 2020-03-10 | 阿里巴巴集团控股有限公司 | Interaction method and apparatus |
CN110874137B (zh) * | 2018-08-31 | 2023-06-13 | 阿里巴巴集团控股有限公司 | Interaction method and apparatus |
US10896534B1 (en) | 2018-09-19 | 2021-01-19 | Snap Inc. | Avatar style transformation using neural networks |
US11348301B2 (en) | 2018-09-19 | 2022-05-31 | Snap Inc. | Avatar style transformation using neural networks |
US10895964B1 (en) | 2018-09-25 | 2021-01-19 | Snap Inc. | Interface to display shared user groups |
US11868590B2 (en) | 2018-09-25 | 2024-01-09 | Snap Inc. | Interface to display shared user groups |
US11294545B2 (en) | 2018-09-25 | 2022-04-05 | Snap Inc. | Interface to display shared user groups |
US11824822B2 (en) | 2018-09-28 | 2023-11-21 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US11610357B2 (en) | 2018-09-28 | 2023-03-21 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
US11171902B2 (en) | 2018-09-28 | 2021-11-09 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US11189070B2 (en) | 2018-09-28 | 2021-11-30 | Snap Inc. | System and method of generating targeted user lists using customizable avatar characteristics |
US11455082B2 (en) | 2018-09-28 | 2022-09-27 | Snap Inc. | Collaborative achievement interface |
US11477149B2 (en) | 2018-09-28 | 2022-10-18 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US11704005B2 (en) | 2018-09-28 | 2023-07-18 | Snap Inc. | Collaborative achievement interface |
US11245658B2 (en) | 2018-09-28 | 2022-02-08 | Snap Inc. | System and method of generating private notifications between users in a communication session |
US10904181B2 (en) | 2018-09-28 | 2021-01-26 | Snap Inc. | Generating customized graphics having reactions to electronic message content |
US11321896B2 (en) | 2018-10-31 | 2022-05-03 | Snap Inc. | 3D avatar rendering |
US10872451B2 (en) | 2018-10-31 | 2020-12-22 | Snap Inc. | 3D avatar rendering |
US11103795B1 (en) | 2018-10-31 | 2021-08-31 | Snap Inc. | Game drawer |
US11836859B2 (en) | 2018-11-27 | 2023-12-05 | Snap Inc. | Textured mesh building |
US11176737B2 (en) | 2018-11-27 | 2021-11-16 | Snap Inc. | Textured mesh building |
US20220044479A1 (en) | 2018-11-27 | 2022-02-10 | Snap Inc. | Textured mesh building |
US11620791B2 (en) | 2018-11-27 | 2023-04-04 | Snap Inc. | Rendering 3D captions within real-world environments |
US10902661B1 (en) | 2018-11-28 | 2021-01-26 | Snap Inc. | Dynamic composite user identifier |
US11887237B2 (en) | 2018-11-28 | 2024-01-30 | Snap Inc. | Dynamic composite user identifier |
US11698722B2 (en) | 2018-11-30 | 2023-07-11 | Snap Inc. | Generating customized avatars based on location information |
US10861170B1 (en) | 2018-11-30 | 2020-12-08 | Snap Inc. | Efficient human pose tracking in videos |
US11783494B2 (en) | 2018-11-30 | 2023-10-10 | Snap Inc. | Efficient human pose tracking in videos |
US11199957B1 (en) | 2018-11-30 | 2021-12-14 | Snap Inc. | Generating customized avatars based on location information |
US11315259B2 (en) | 2018-11-30 | 2022-04-26 | Snap Inc. | Efficient human pose tracking in videos |
US11055514B1 (en) | 2018-12-14 | 2021-07-06 | Snap Inc. | Image face manipulation |
US11798261B2 (en) | 2018-12-14 | 2023-10-24 | Snap Inc. | Image face manipulation |
CN111353842A (zh) * | 2018-12-24 | 2020-06-30 | 阿里巴巴集团控股有限公司 | Method and system for processing push information |
US11516173B1 (en) | 2018-12-26 | 2022-11-29 | Snap Inc. | Message composition interface |
CN109727303B (zh) * | 2018-12-29 | 2023-07-25 | 广州方硅信息技术有限公司 | Video display method, system, computer device, storage medium, and terminal |
CN109727303A (zh) * | 2018-12-29 | 2019-05-07 | 广州华多网络科技有限公司 | Video display method, system, computer device, storage medium, and terminal |
US11877211B2 (en) | 2019-01-14 | 2024-01-16 | Snap Inc. | Destination sharing in location sharing system |
US11032670B1 (en) | 2019-01-14 | 2021-06-08 | Snap Inc. | Destination sharing in location sharing system |
US10939246B1 (en) | 2019-01-16 | 2021-03-02 | Snap Inc. | Location-based context information sharing in a messaging system |
US10945098B2 (en) | 2019-01-16 | 2021-03-09 | Snap Inc. | Location-based context information sharing in a messaging system |
US11751015B2 (en) | 2019-01-16 | 2023-09-05 | Snap Inc. | Location-based context information sharing in a messaging system |
US11294936B1 (en) | 2019-01-30 | 2022-04-05 | Snap Inc. | Adaptive spatial density based clustering |
US11693887B2 (en) | 2019-01-30 | 2023-07-04 | Snap Inc. | Adaptive spatial density based clustering |
US11714524B2 (en) | 2019-02-06 | 2023-08-01 | Snap Inc. | Global event-based avatar |
US11557075B2 (en) | 2019-02-06 | 2023-01-17 | Snap Inc. | Body pose estimation |
US10984575B2 (en) | 2019-02-06 | 2021-04-20 | Snap Inc. | Body pose estimation |
US11010022B2 (en) | 2019-02-06 | 2021-05-18 | Snap Inc. | Global event-based avatar |
US11275439B2 (en) | 2019-02-13 | 2022-03-15 | Snap Inc. | Sleep detection in a location sharing system |
US11809624B2 (en) | 2019-02-13 | 2023-11-07 | Snap Inc. | Sleep detection in a location sharing system |
US10936066B1 (en) | 2019-02-13 | 2021-03-02 | Snap Inc. | Sleep detection in a location sharing system |
US10964082B2 (en) | 2019-02-26 | 2021-03-30 | Snap Inc. | Avatar based on weather |
US11574431B2 (en) | 2019-02-26 | 2023-02-07 | Snap Inc. | Avatar based on weather |
US11301117B2 (en) | 2019-03-08 | 2022-04-12 | Snap Inc. | Contextual information in chat |
US10852918B1 (en) | 2019-03-08 | 2020-12-01 | Snap Inc. | Contextual information in chat |
US11868414B1 (en) | 2019-03-14 | 2024-01-09 | Snap Inc. | Graph-based prediction for contact suggestion in a location sharing system |
US11852554B1 (en) | 2019-03-21 | 2023-12-26 | Snap Inc. | Barometer calibration in a location sharing system |
US11166123B1 (en) | 2019-03-28 | 2021-11-02 | Snap Inc. | Grouped transmission of location data in a location sharing system |
US11039270B2 (en) | 2019-03-28 | 2021-06-15 | Snap Inc. | Points of interest in a location sharing system |
US11638115B2 (en) | 2019-03-28 | 2023-04-25 | Snap Inc. | Points of interest in a location sharing system |
CN110046336A (zh) * | 2019-04-15 | 2019-07-23 | 南京孜博汇信息科技有限公司 | Method and system for processing position-encoded forms |
US11973732B2 (en) | 2019-04-30 | 2024-04-30 | Snap Inc. | Messaging system with avatar generation |
US10992619B2 (en) | 2019-04-30 | 2021-04-27 | Snap Inc. | Messaging system with avatar generation |
CN111918015A (zh) * | 2019-05-07 | 2020-11-10 | 阿瓦亚公司 | Video call routing and management based on facial emotion determined by artificial intelligence |
USD916811S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916810S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
USD916871S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916809S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a transitional graphical user interface |
USD916872S1 (en) | 2019-05-28 | 2021-04-20 | Snap Inc. | Display screen or portion thereof with a graphical user interface |
US10893385B1 (en) | 2019-06-07 | 2021-01-12 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11601783B2 (en) | 2019-06-07 | 2023-03-07 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11917495B2 (en) | 2019-06-07 | 2024-02-27 | Snap Inc. | Detection of a physical collision between two client devices in a location sharing system |
US11188190B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | Generating animation overlays in a communication session |
US11443491B2 (en) | 2019-06-28 | 2022-09-13 | Snap Inc. | 3D object camera customization system |
US11823341B2 (en) | 2019-06-28 | 2023-11-21 | Snap Inc. | 3D object camera customization system |
US11676199B2 (en) | 2019-06-28 | 2023-06-13 | Snap Inc. | Generating customizable avatar outfits |
US11189098B2 (en) | 2019-06-28 | 2021-11-30 | Snap Inc. | 3D object camera customization system |
US11307747B2 (en) | 2019-07-11 | 2022-04-19 | Snap Inc. | Edge gesture interface with smart interactions |
US11714535B2 (en) | 2019-07-11 | 2023-08-01 | Snap Inc. | Edge gesture interface with smart interactions |
US11455081B2 (en) | 2019-08-05 | 2022-09-27 | Snap Inc. | Message thread prioritization interface |
US11956192B2 (en) | 2019-08-12 | 2024-04-09 | Snap Inc. | Message reminder interface |
US11588772B2 (en) | 2019-08-12 | 2023-02-21 | Snap Inc. | Message reminder interface |
US10911387B1 (en) | 2019-08-12 | 2021-02-02 | Snap Inc. | Message reminder interface |
CN110705356A (zh) * | 2019-08-31 | 2020-01-17 | 深圳市大拿科技有限公司 | Function control method and related device |
CN110705356B (zh) * | 2019-08-31 | 2023-12-29 | 深圳市大拿科技有限公司 | Function control method and related device |
US11662890B2 (en) | 2019-09-16 | 2023-05-30 | Snap Inc. | Messaging system with battery level sharing |
US11320969B2 (en) | 2019-09-16 | 2022-05-03 | Snap Inc. | Messaging system with battery level sharing |
US11822774B2 (en) | 2019-09-16 | 2023-11-21 | Snap Inc. | Messaging system with battery level sharing |
US11425062B2 (en) | 2019-09-27 | 2022-08-23 | Snap Inc. | Recommended content viewed by friends |
US11270491B2 (en) | 2019-09-30 | 2022-03-08 | Snap Inc. | Dynamic parameterized user avatar stories |
US11676320B2 (en) | 2019-09-30 | 2023-06-13 | Snap Inc. | Dynamic media collection generation |
US11080917B2 (en) | 2019-09-30 | 2021-08-03 | Snap Inc. | Dynamic parameterized user avatar stories |
US11218838B2 (en) | 2019-10-31 | 2022-01-04 | Snap Inc. | Focused map-based context information surfacing |
US11563702B2 (en) | 2019-12-03 | 2023-01-24 | Snap Inc. | Personalized avatar notification |
US11063891B2 (en) | 2019-12-03 | 2021-07-13 | Snap Inc. | Personalized avatar notification |
WO2021114710A1 (fr) * | 2019-12-09 | 2021-06-17 | 上海幻电信息科技有限公司 | Live streaming video interaction method and apparatus, and computer device |
US11128586B2 (en) | 2019-12-09 | 2021-09-21 | Snap Inc. | Context sensitive avatar captions |
US11778263B2 (en) | 2019-12-09 | 2023-10-03 | Shanghai Hode Information Technology Co., Ltd. | Live streaming video interaction method and apparatus, and computer device |
US11582176B2 (en) | 2019-12-09 | 2023-02-14 | Snap Inc. | Context sensitive avatar captions |
US11036989B1 (en) | 2019-12-11 | 2021-06-15 | Snap Inc. | Skeletal tracking using previous frames |
US11594025B2 (en) | 2019-12-11 | 2023-02-28 | Snap Inc. | Skeletal tracking using previous frames |
US11263817B1 (en) | 2019-12-19 | 2022-03-01 | Snap Inc. | 3D captions with face tracking |
US11227442B1 (en) | 2019-12-19 | 2022-01-18 | Snap Inc. | 3D captions with semantic graphical elements |
US11908093B2 (en) | 2019-12-19 | 2024-02-20 | Snap Inc. | 3D captions with semantic graphical elements |
US11810220B2 (en) | 2019-12-19 | 2023-11-07 | Snap Inc. | 3D captions with face tracking |
US11636657B2 (en) | 2019-12-19 | 2023-04-25 | Snap Inc. | 3D captions with semantic graphical elements |
CN111191564A (zh) * | 2019-12-26 | 2020-05-22 | 三盟科技股份有限公司 | Multi-pose facial emotion recognition method and system based on a multi-angle neural network |
US11128715B1 (en) | 2019-12-30 | 2021-09-21 | Snap Inc. | Physical friend proximity in chat |
US11140515B1 (en) | 2019-12-30 | 2021-10-05 | Snap Inc. | Interfaces for relative device positioning |
US11893208B2 (en) | 2019-12-31 | 2024-02-06 | Snap Inc. | Combined map icon with action indicator |
US11169658B2 (en) | 2019-12-31 | 2021-11-09 | Snap Inc. | Combined map icon with action indicator |
CN113099150A (zh) * | 2020-01-08 | 2021-07-09 | 华为技术有限公司 | Image processing method, device and system |
US11036781B1 (en) | 2020-01-30 | 2021-06-15 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11831937B2 (en) | 2020-01-30 | 2023-11-28 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUS |
US11651022B2 (en) | 2020-01-30 | 2023-05-16 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11356720B2 (en) | 2020-01-30 | 2022-06-07 | Snap Inc. | Video generation system to render frames on demand |
US11651539B2 (en) | 2020-01-30 | 2023-05-16 | Snap Inc. | System for generating media content items on demand |
US11284144B2 (en) | 2020-01-30 | 2022-03-22 | Snap Inc. | Video generation system to render frames on demand using a fleet of GPUs |
US11263254B2 (en) | 2020-01-30 | 2022-03-01 | Snap Inc. | Video generation system to render frames on demand using a fleet of servers |
US11729441B2 (en) | 2020-01-30 | 2023-08-15 | Snap Inc. | Video generation system to render frames on demand |
US11991419B2 (en) | 2020-01-30 | 2024-05-21 | Snap Inc. | Selecting avatars to be included in the video being generated on demand |
US11619501B2 (en) | 2020-03-11 | 2023-04-04 | Snap Inc. | Avatar based on trip |
US11775165B2 (en) | 2020-03-16 | 2023-10-03 | Snap Inc. | 3D cutout image modification |
US11217020B2 (en) | 2020-03-16 | 2022-01-04 | Snap Inc. | 3D cutout image modification |
US11978140B2 (en) | 2020-03-30 | 2024-05-07 | Snap Inc. | Personalized media overlay recommendation |
US11818286B2 (en) | 2020-03-30 | 2023-11-14 | Snap Inc. | Avatar recommendation and reply |
US11625873B2 (en) | 2020-03-30 | 2023-04-11 | Snap Inc. | Personalized media overlay recommendation |
US11969075B2 (en) | 2020-03-31 | 2024-04-30 | Snap Inc. | Augmented reality beauty product tutorials |
US11956190B2 (en) | 2020-05-08 | 2024-04-09 | Snap Inc. | Messaging system with a carousel of related entities |
US11822766B2 (en) | 2020-06-08 | 2023-11-21 | Snap Inc. | Encoded image based messaging system |
US11922010B2 (en) | 2020-06-08 | 2024-03-05 | Snap Inc. | Providing contextual information with keyboard interface for messaging system |
US11543939B2 (en) | 2020-06-08 | 2023-01-03 | Snap Inc. | Encoded image based messaging system |
US11683280B2 (en) | 2020-06-10 | 2023-06-20 | Snap Inc. | Messaging system including an external-resource dock and drawer |
US11580682B1 (en) | 2020-06-30 | 2023-02-14 | Snap Inc. | Messaging system with augmented reality makeup |
CN111773676A (zh) * | 2020-07-23 | 2020-10-16 | 网易(杭州)网络有限公司 | Method and apparatus for determining virtual character actions |
US11863513B2 (en) | 2020-08-31 | 2024-01-02 | Snap Inc. | Media content playback and comments management |
US11360733B2 (en) | 2020-09-10 | 2022-06-14 | Snap Inc. | Colocated shared augmented reality without shared backend |
US11893301B2 (en) | 2020-09-10 | 2024-02-06 | Snap Inc. | Colocated shared augmented reality without shared backend |
US11833427B2 (en) | 2020-09-21 | 2023-12-05 | Snap Inc. | Graphical marker generation system for synchronizing users |
US11452939B2 (en) | 2020-09-21 | 2022-09-27 | Snap Inc. | Graphical marker generation system for synchronizing users |
US11888795B2 (en) | 2020-09-21 | 2024-01-30 | Snap Inc. | Chats with micro sound clips |
US11910269B2 (en) | 2020-09-25 | 2024-02-20 | Snap Inc. | Augmented reality content items including user avatar to share location |
CN112215929A (zh) * | 2020-10-10 | 2021-01-12 | 珠海格力电器股份有限公司 | Virtual social networking data processing method, apparatus and system |
US11615592B2 (en) | 2020-10-27 | 2023-03-28 | Snap Inc. | Side-by-side character animation from realtime 3D body motion capture |
US11660022B2 (en) | 2020-10-27 | 2023-05-30 | Snap Inc. | Adaptive skeletal joint smoothing |
US11450051B2 (en) | 2020-11-18 | 2022-09-20 | Snap Inc. | Personalized avatar real-time motion capture |
US11734894B2 (en) | 2020-11-18 | 2023-08-22 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
US11748931B2 (en) | 2020-11-18 | 2023-09-05 | Snap Inc. | Body animation sharing and remixing |
CN114630135A (zh) * | 2020-12-11 | 2022-06-14 | 北京字跳网络技术有限公司 | Live streaming interaction method and apparatus |
WO2022143128A1 (fr) * | 2020-12-29 | 2022-07-07 | 华为技术有限公司 | Avatar-based video call method and apparatus, and terminal |
CN113014471B (zh) * | 2021-01-18 | 2022-08-19 | 腾讯科技(深圳)有限公司 | Session processing method, apparatus, terminal, and storage medium |
CN113014471A (zh) * | 2021-01-18 | 2021-06-22 | 腾讯科技(深圳)有限公司 | Session processing method, apparatus, terminal, and storage medium |
US11790531B2 (en) | 2021-02-24 | 2023-10-17 | Snap Inc. | Whole body segmentation |
US11734959B2 (en) | 2021-03-16 | 2023-08-22 | Snap Inc. | Activating hands-free mode on mirroring device |
US11798201B2 (en) | 2021-03-16 | 2023-10-24 | Snap Inc. | Mirroring device with whole-body outfits |
US11908243B2 (en) | 2021-03-16 | 2024-02-20 | Snap Inc. | Menu hierarchy navigation on electronic mirroring devices |
US11809633B2 (en) | 2021-03-16 | 2023-11-07 | Snap Inc. | Mirroring device with pointing based navigation |
US11978283B2 (en) | 2021-03-16 | 2024-05-07 | Snap Inc. | Mirroring device with a hands-free mode |
US11544885B2 (en) | 2021-03-19 | 2023-01-03 | Snap Inc. | Augmented reality experience based on physical items |
US11562548B2 (en) | 2021-03-22 | 2023-01-24 | Snap Inc. | True size eyewear in real time |
US11636654B2 (en) | 2021-05-19 | 2023-04-25 | Snap Inc. | AR-based connected portal shopping |
US11941767B2 (en) | 2021-05-19 | 2024-03-26 | Snap Inc. | AR-based connected portal shopping |
US11941227B2 (en) | 2021-06-30 | 2024-03-26 | Snap Inc. | Hybrid search system for customizable media |
US11854069B2 (en) | 2021-07-16 | 2023-12-26 | Snap Inc. | Personalized try-on ads |
US11908083B2 (en) | 2021-08-31 | 2024-02-20 | Snap Inc. | Deforming custom mesh based on body mesh |
US11983462B2 (en) | 2021-08-31 | 2024-05-14 | Snap Inc. | Conversation guided augmented reality experience |
US11670059B2 (en) | 2021-09-01 | 2023-06-06 | Snap Inc. | Controlling interactive fashion based on body gestures |
US11673054B2 (en) | 2021-09-07 | 2023-06-13 | Snap Inc. | Controlling AR games on fashion items |
US11663792B2 (en) | 2021-09-08 | 2023-05-30 | Snap Inc. | Body fitted accessory with physics simulation |
US11900506B2 (en) | 2021-09-09 | 2024-02-13 | Snap Inc. | Controlling interactive fashion based on facial expressions |
US11734866B2 (en) | 2021-09-13 | 2023-08-22 | Snap Inc. | Controlling interactive fashion based on voice |
US11798238B2 (en) | 2021-09-14 | 2023-10-24 | Snap Inc. | Blending body mesh into external mesh |
US11836866B2 (en) | 2021-09-20 | 2023-12-05 | Snap Inc. | Deforming real-world object using an external mesh |
US11983826B2 (en) | 2021-09-30 | 2024-05-14 | Snap Inc. | 3D upper garment tracking |
US11636662B2 (en) | 2021-09-30 | 2023-04-25 | Snap Inc. | Body normal network light and rendering control |
US11790614B2 (en) | 2021-10-11 | 2023-10-17 | Snap Inc. | Inferring intent from pose and speech input |
US11651572B2 (en) | 2021-10-11 | 2023-05-16 | Snap Inc. | Light and rendering of garments |
US11836862B2 (en) | 2021-10-11 | 2023-12-05 | Snap Inc. | External mesh with vertex attributes |
US11763481B2 (en) | 2021-10-20 | 2023-09-19 | Snap Inc. | Mirror-based augmented reality experience |
US11995757B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Customized animation from video |
US11996113B2 (en) | 2021-10-29 | 2024-05-28 | Snap Inc. | Voice notes with changing effects |
US11960784B2 (en) | 2021-12-07 | 2024-04-16 | Snap Inc. | Shared augmented reality unboxing experience |
US11748958B2 (en) | 2021-12-07 | 2023-09-05 | Snap Inc. | Augmented reality unboxing experience |
US12008811B2 (en) | 2021-12-14 | 2024-06-11 | Snap Inc. | Machine learning-based selection of a representative video frame within a messaging application |
US11880947B2 (en) | 2021-12-21 | 2024-01-23 | Snap Inc. | Real-time upper-body garment exchange |
US11928783B2 (en) | 2021-12-30 | 2024-03-12 | Snap Inc. | AR position and orientation along a plane |
US11887260B2 (en) | 2021-12-30 | 2024-01-30 | Snap Inc. | AR position indicator |
US11823346B2 (en) | 2022-01-17 | 2023-11-21 | Snap Inc. | AR body part tracking system |
US11954762B2 (en) | 2022-01-19 | 2024-04-09 | Snap Inc. | Object replacement system |
US12002146B2 (en) | 2022-03-28 | 2024-06-04 | Snap Inc. | 3D modeling based on neural light field |
US11870745B1 (en) | 2022-06-28 | 2024-01-09 | Snap Inc. | Media gallery sharing and management |
US11893166B1 (en) | 2022-11-08 | 2024-02-06 | Snap Inc. | User avatar movement control using an augmented reality eyewear device |
US12002175B2 (en) | 2023-06-30 | 2024-06-04 | Snap Inc. | Real-time motion transfer for prosthetic limbs |
Also Published As
Publication number | Publication date |
---|---|
KR20130022434A (ko) | 2013-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013027893A1 (fr) | Apparatus and method for emotional content services on telecommunication devices, apparatus and method for emotion recognition therefor, and apparatus and method for generating and matching emotional content using same | |
US11736756B2 (en) | Producing realistic body movement using body images | |
US11783524B2 (en) | Producing realistic talking face with expression using images text and voice | |
WO2020204000A1 (fr) | Communication assistance system, communication assistance method, communication assistance program, and image control program | |
JP6616288B2 (ja) | Method, user terminal, and server for information exchange in communication | |
CN116797694A (zh) | Emoji puppetization | |
US20090016617A1 (en) | Sender dependent messaging viewer | |
US20190222806A1 (en) | Communication system and method | |
CN109691054A (zh) | Animated user identifier | |
CN108874114B (zh) | Method, apparatus, computer device, and storage medium for implementing emotional expression of a virtual object | |
US11151796B2 (en) | Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements | |
CN110418095B (zh) | Virtual scene processing method, apparatus, electronic device, and storage medium | |
CN109150690B (zh) | Interaction data processing method, apparatus, computer device, and storage medium | |
CN112199016B (zh) | Image processing method, apparatus, electronic device, and computer-readable storage medium | |
KR102148151B1 (ko) | Intelligent chat based on a digital communication network | |
US11423627B2 (en) | Systems and methods for providing real-time composite video from multiple source devices featuring augmented reality elements | |
KR20120018479A (ko) | Avatar providing server and method using facial expression and motion recognition | |
US20220398816A1 (en) | Systems And Methods For Providing Real-Time Composite Video From Multiple Source Devices Featuring Augmented Reality Elements | |
US11553009B2 (en) | Information processing device, information processing method, and computer program for switching between communications performed in real space and virtual space | |
US20220291752A1 (en) | Distributed Application Platform Projected on a Secondary Display for Entertainment, Gaming and Learning with Intelligent Gesture Interactions and Complex Input Composition for Control | |
KR20130082693A (ko) | Video chat apparatus and method using an avatar | |
US20230386147A1 (en) | Systems and Methods for Providing Real-Time Composite Video from Multiple Source Devices Featuring Augmented Reality Elements | |
JP2023099309A (ja) | Method, computer device, and computer program for interpreting video audio into sign language through an avatar | |
JP5894505B2 (ja) | Image communication system, image generating device, and program | |
KR100736541B1 (ko) | Online integration system for personal characters | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11871115 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11871115 Country of ref document: EP Kind code of ref document: A1 |