WO2004095308A1 - Method and system for expressing avatar that correspond to message and sentence inputted of using natural language processing technology - Google Patents


Info

Publication number
WO2004095308A1
WO2004095308A1 (PCT/KR2004/000905)
Authority
WO
WIPO (PCT)
Prior art keywords
avatar
natural language
client
language processing
character message
Prior art date
Application number
PCT/KR2004/000905
Other languages
French (fr)
Inventor
Ji Seon Hong
Chang Ho Ham
Original Assignee
Eulen, Inc.
Priority date
Filing date
Publication date
Priority to KR10-2003-0025134 priority Critical
Priority to KR1020030025134A priority patent/KR20040091331A/en
Application filed by Eulen, Inc. filed Critical Eulen, Inc.
Publication of WO2004095308A1 publication Critical patent/WO2004095308A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/253Grammatical analysis; Style critique

Abstract

The present invention relates to a method and system for representing avatars corresponding to character messages and the contents of the messages using natural language processing technology. The method analyzes the prefixes, nouns, suffixes and newly-coined words of a character message based on the natural language processing technology, extracts avatar images, such as eye, nose, mouth and motion, corresponding to the contents of the character message input by a client through chat or the character message of a mobile communication terminal, and allows the intention of the client, which is intended to be transmitted to a counterpart client, to be more effectively perceived by the counterpart client through the shape and motion of the avatar of the client that is an intention transmitting means, such as a cartoon.

Description

METHOD AND SYSTEM FOR EXPRESSING AVATAR THAT CORRESPOND TO MESSAGE AND SENTENCE INPUTTED OF USING NATURAL LANGUAGE
PROCESSING TECHNOLOGY
Technical Field
The present invention relates, in general, to a method and system for representing avatars corresponding to character messages and the contents of the messages using natural language processing technology and, more particularly, to a method and system for representing avatars corresponding to character messages and the contents of the messages using natural language processing technology, which analyzes the prefixes, nouns, suffixes and newly-coined words of a character message based on the natural language processing technology, extracts avatar images, such as eye, nose, mouth and motion, corresponding to the contents of the character message input by a client through chat or the character message of a mobile communication terminal, and allows the intention of the client, which is intended to be transmitted to a counterpart client, to be more effectively perceived by the counterpart client through the shape and motion of the avatar of the client that is an intention transmitting means, such as a cartoon.
Background Art
Natural language processing technology is a field of computer science that deals with concepts and methods of simulating human intelligent communication behaviors. Unlike the operations of a computer on a database, artificial intelligence enables human meta-knowledge (the knowledge of knowledge) and heuristics (humans' empirical knowledge) to be utilized. To the question, "What is the size of Admiral Sun-shin Lee's shoes?," humans would easily answer "I don't know." In contrast, computers answer the question only after searching all knowledge stored in them. This is one of the differences between humans and computers.
A human intellectual process is composed of learning, inference and self-correction, and research has been performed to implement this process in a computer. It is difficult to implement common sense in a computer because the amount of common sense is infinite; however, research related to common sense has been attempted. When humans walk, speak, ride bicycles or drive automobiles, they do not make decisions after deliberate reflection. Just as humans need not consciously recognize skills that have become part of them, they need not recognize the detailed movements of their bodies to perform common actions. Accordingly, attempts to imitate humans have many limitations.
In the field of natural language processing, systems such as automatic translation machines, voice recognition devices and optical character readers have been put to practical use.
In early stages, most games and chat services provided avatars that were formed by combining several characters, or that were supplied as finished designs. In general, users selected, from the avatars provided by service providers, ones that could represent their tastes and personalities, and used them.
Recently, with the development of graphics technology, services have provided a plurality of avatar components (such as eyes, noses, mouths, hair styles and costumes), so that the number of possible combinations has increased almost infinitely and clients can create and use varied and differentiated avatars.
However, the above-described avatars are problematic in several respects: their utilization is very limited because they are merely a means of expressing an intention that clients can input through e-mail, Short Message Service (SMS) or chat; users easily tire of the avatars because the avatars remain constant as long as clients do not change the images or components set in the early stages; and the avatars cannot express emotion and motion unless the clients change their components. A method of representing avatars after previously selecting desired emotion and motion is described in Korean Unexamined Pat. Appl. Publication No. 2000-0072569 (December 5, 2000); this method is problematic in that its presentation of motion and action is limited.
Another method is disclosed in Korean Unexamined Pat. Appl. Publication No. 2003-0026506 (April 3, 2003), which recognizes simple emoticons and several words (e.g., ΛΛ, _?_?. @-@, HaHa). However, in this method, the representation of emotion is limited and motion cannot be expressed.
Disclosure of the Invention
Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a method and system which, when a character message oriented toward the use of an avatar is input on the wired or wireless Internet, creates the emotion and motion of an avatar using a means for interpreting and comprehending a sentence input by the client using natural language processing technology and a means for extracting avatar images corresponding to the interpreted and comprehended sentence, and transmits to a counterpart client an avatar capable of representing the emotion and motion corresponding to the character message and its contents.
In order to accomplish the above object, the present invention provides a method of representing avatars corresponding to input character messages and the contents of the character messages on mobile communication terminals or the wired or wireless Internet using natural language processing technology, comprising: a first step of searching for the natural language words of a character message through respective recognition units and analyzing the contents of the character message in conjunction with a natural language processing database, at a natural language processing server; a second step of extracting avatar images corresponding to the analyzed contents of the character message with reference to a previously registered avatar database, at an avatar server; and a third step of applying the extracted avatar images to a previously registered avatar of a client.
In the meantime, the method may further include the step of creating an avatar of the client if the avatar of the client does not exist.
The present invention provides a system for representing avatars corresponding to input character messages and the contents of the character messages on mobile communication terminals or the wired or wireless Internet using natural language processing technology, including: a natural language processing server including natural language recognition units for searching for the natural language words of a character message input by a client, and a natural language processing unit for analyzing and comprehending the contents of the character message extracted by the natural language recognition units in conjunction with a natural language database; an avatar server including a natural language link unit for receiving the contents of the character message input by the client and comprehended by the natural language processing unit, an avatar image extraction unit for extracting avatar images corresponding to the contents of the character message, which are transmitted from the natural language link unit, from an avatar image database, an avatar image application unit for applying the avatar images, which are extracted by the avatar image extraction unit, to an avatar of the client previously registered in the avatar image database, and an avatar control unit for controlling the avatar of the client to which natural language processing is applied and managing continuous motion to represent the meaning of the character message input by the client; and a central control unit for controlling the natural language server and the avatar server and providing interfaces between the respective servers.
Accordingly, in accordance with the above-described construction, the avatar representing method of the present invention analyzes the contents of a sentence input by a client who sends a character message through wired or wireless data communications with a mobile communication company or website that has adopted the present invention, extracts avatar images representing facial expressions and motion, and applies the avatar images to the avatar of the client, so that the client can make the counterpart client more effectively perceive his or her intention through an avatar in which emotion and motion are implemented by linking the character message with the avatar of the client.
The avatar representing system includes a natural language processing server, an avatar server and one or more clients that input character messages via mobile communication terminals or the Internet, so that the system can comprehend a character message input by a client and apply emotion and motion to the avatar of that client to express the meaning of the message.
Furthermore, the present invention provides a control method using natural language processing technology in a method of representing emotion and motion on a mobile communication terminal or the Internet through the above-described system operation and the below-described steps. That is, to comprehend expressible emotion and motion included in the character message input by the client on the mobile communication terminal or the Internet, and apply the emotion and motion to the avatar of the client, the system of the present invention includes the natural language processing server, the avatar server and at least one client inputting the character message. The method of the present invention includes the step of analyzing a sentence, which is input from the client who sends character messages oriented toward the use of an avatar through wired or wireless communication related to a mobile communication company or website, using the respective recognition units (the prefix recognition unit, noun recognition unit, suffix recognition unit and newly-coined word recognition unit) and the database of the natural language processing server; the step of extracting expressible emotion and motion with reference to an avatar database using the avatar server (the natural language link unit, avatar image extraction unit, avatar image application unit and avatar control unit); and the step of applying the extracted avatar images to the previously registered avatar of the client.
Furthermore, the avatar may include all visual images capable of representing the client.
Brief Description of the Drawings
FIG. 1 is a schematic diagram showing servers and clients having an avatar server for representing avatars and a natural language processing server in accordance with an embodiment of the present invention;
FIG. 2 is a configuration diagram of a server for representing avatars in accordance with an embodiment of the present invention;
FIG. 3 is a flowchart schematically showing a process of representing an avatar in accordance with an embodiment of the present invention;
FIG. 4 is a flowchart showing a process ranging from the input of a character message by a client to the creation of an avatar, in terms of the servers that represent the avatar; and
FIG. 5 is an example view of a resulting screen on which an avatar message from the mobile communication terminal of a client, to which natural language processing in accordance with the embodiment of the present invention is applied, is displayed on the mobile communication terminal of a counterpart client.
*Description of principal elements of the drawings*
100: natural language recognition server 105: avatar server
110: client 115: counterpart client
200: natural language link unit 205: avatar image extraction unit 210: avatar image application unit 215: avatar control unit
220: prefix recognition unit 225: noun recognition unit
230: suffix recognition unit 235: newly-coined word recognition unit
240: avatar image database 245: registered avatar image database
250: natural language processing database 255: central control unit
500: input screen of mobile communication terminal of client 505: resulting screen of mobile communication terminal of counterpart client
Best Mode for Carrying Out the Invention
A preferred embodiment of the present invention is described with reference to the attached drawings.
In the assignment of reference characters to the elements of the attached drawings, the same elements are designated by the same reference characters as much as possible even though depicted in different drawings.
Furthermore, in the following description, many specific items, such as specific elements, are illustrated to describe embodiments of the present invention; these are provided only to help a more comprehensive understanding of the present invention. It is apparent to those skilled in the art that the present invention can be implemented without depending on these specific items. Furthermore, in the description of the present invention, if it is determined that detailed descriptions of related known functions or constructions may unnecessarily obscure the gist of the present invention, the detailed descriptions will be omitted. Variations or modifications of the present invention can be made without departing from its scope.
FIG. 1 is a schematic diagram showing servers and clients having an avatar server for representing avatars and a natural language processing server in accordance with an embodiment of the present invention.
Referring to FIG. 1, the present invention analyzes the contents of a character message sent between clients 110 and 115, and varies the avatar of the client to represent emotion and motion corresponding to the meaning of the character message and, thus, make the counterpart client 115 having received the character message effectively perceive the contents of the character message. The present invention is embodied while having a natural language processing server 100 and an avatar server 105, or working in conjunction with them.
The natural language processing server 100 stores and holds the prefixes, nouns, suffixes and newly-coined words of a language. The prefixes, nouns, suffixes and newly- coined words of the language are stored in a database with the contents thereof being matched to meanings required for natural language processing.
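As a minimal sketch, the lexicon described above can be pictured as a table keyed by word class, with each entry matched to the meaning tags that the later processing stage consumes. The word classes follow the text; every concrete entry and tag name below is an invented example, not data from the patent.

```python
# Hypothetical lexicon: prefixes, nouns, suffixes and newly-coined words,
# each stored with its contents matched to meaning tags required for
# natural language processing. All concrete entries are invented.
LEXICON = {
    "prefix": {"super": {"intensity": "high"}},
    "noun": {"umbrella": {"object": "umbrella"}, "rain": {"weather": "rain"}},
    "suffix": {"!!": {"emotion": "excited"}},
    "coined": {"haha": {"emotion": "laughing"}},
}

def lookup(word_class, token):
    """Return the meaning tags stored for a token, or None if unregistered."""
    return LEXICON.get(word_class, {}).get(token)
```

A design like this keeps each word class in its own sub-table, so each recognition unit can consult only the entries of its own class.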
The avatar server 105 performs functions of providing an avatar in conformity with a sentence that the client 110 inputs, managing avatars, providing an avatar and a message to a counterpart, and performing operations in conjunction with the natural language processing server 100.
FIG. 2 is a configuration diagram of a server for representing avatars in accordance with an embodiment of the present invention.
Referring to FIG. 2, the server is composed of the natural language processing server 100 and the avatar server 105, and implements the present invention by transmitting and receiving useful data therebetween through a central control unit 255.
The natural language processing server 100 includes a prefix recognition unit 220, a noun recognition unit 225, a suffix recognition unit 230, a newly-coined word recognition unit 235, a natural language processing unit 260, and a natural language processing database 250.
The prefix recognition unit 220 functions to search for and recognize prefixes included in the sentences input by the client 110, and refers to a plurality of prefixes that have been previously stored in the natural language processing database 250 and can be input by the client 110.
The noun recognition unit 225 functions to search for and recognize nouns included in sentences input by the client 110, and refers to a plurality of nouns that have been previously stored in the natural language processing database 250 and can be input by the client 110.
The suffix recognition unit 230 functions to search for and recognize suffixes included in sentences input by the client 110, and refers to a plurality of suffixes that have been previously stored in the natural language processing database 250 and can be input by the client 110.
The newly-coined word recognition unit 235 functions to search for and recognize newly-coined words included in the sentences input by the client 110, and refers to a plurality of newly-coined words that have been previously stored in the natural language processing database 250 and can be input by the client 110.
The natural language processing unit 260 functions to comprehend the meaning of a character message that the respective recognition units 220, 225, 230 and 235 have extracted in conjunction with the natural language processing database 250.
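One way to picture the four recognition units feeding the processing unit is the merge below: each unit scans the message for registered entries of its own word class, and the processing unit combines whatever they found into a single meaning. The lexicon contents and tag names are invented for this sketch.

```python
# Sketch of the recognition units (220, 225, 230, 235) feeding the natural
# language processing unit (260). Lexicon entries are invented examples.
LEXICON = {
    "prefix": {"super": {"intensity": "high"}},
    "noun": {"umbrella": {"object": "umbrella"}},
    "suffix": {"!!": {"emotion": "excited"}},
    "coined": {"haha": {"emotion": "laughing"}},
}

def comprehend(message):
    """Merge the meaning tags of every registered token found in the message."""
    tokens = message.lower().split()
    meaning = {}
    for entries in LEXICON.values():      # one pass per recognition unit
        for token, tags in entries.items():
            if token in tokens:
                meaning.update(tags)
    return meaning
```

Real Korean morphological analysis would segment prefixes and suffixes inside words rather than split on whitespace; the whitespace split here only stands in for that step.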
The avatar server 105 includes a natural language link unit 200, an avatar image extraction unit 205, an avatar image application unit 210, an avatar control unit 215, an avatar image database 240, and a registered avatar image database 245.
The natural language link unit 200 receives the meaning of a character message input by the client 110, which has been comprehended by the natural language processing unit 260 of the natural language processing server 100.
The avatar image extraction unit 205 extracts avatar images corresponding to the meaning of the character message from the avatar image database 240.
The avatar image application unit 210 converts the extracted avatar images into commands for transmitting instructions to cause the avatar of the client to represent the meaning and automatically varies the avatar of the client according to a program.
The avatar control unit 215 functions to control the avatar to which natural language processing is applied, and manages continuous motion to represent the meaning of the character message input by the client.
The avatar image database 240 is a database in which a plurality of avatar images required for the representation of the meaning of the character message input by the client 110 are stored.
The registered avatar image database 245 is the storage where the basic avatars of the client 110 are stored. The basic avatars are original avatars to which extracted avatar images (eye, nose, mouth, motion, etc.) are applied to represent the meaning of a character message input by the client 110 through an avatar of the client 110.
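The application step can be sketched as overlaying the extracted components on a copy of the registered base avatar; the slot and component names below are assumptions for illustration.

```python
def apply_components(base_avatar, extracted):
    """Apply extracted components (eye, nose, mouth, motion, ...) to a copy
    of the client's registered base avatar; the original stays unchanged."""
    avatar = dict(base_avatar)
    avatar.update(extracted)
    return avatar

# The registered base avatar keeps its defaults for any slot that the
# character message did not override.
base = {"eye": "default", "mouth": "default", "motion": "idle"}
result = apply_components(base, {"mouth": "smile", "motion": "wave"})
```

Copying before updating matters here: the registered avatar image database must keep the original basic avatar for the next message.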
The central control unit 255 functions as an interface that controls the respective servers (the natural language server and the avatar server) and manages data exchange between the servers. When the client 110 inputs a character message via a mobile communication terminal or the Internet, the natural language processing server 100 comprehends the meaning of the character message, the avatar server 105 extracts avatar images corresponding to the comprehended meaning, and the extracted avatar images are applied to the avatar of the client 110. With this construction, the client 110 can transmit his or her intention to the counterpart client 115 more effectively.
FIG. 3 is a flowchart schematically showing a process of representing an avatar in accordance with an embodiment of the present invention.
Referring to FIG. 3, when a character message is input via a mobile communication terminal or the Internet at step 300, the creation of an avatar that the present invention intends to implement starts. If the client 110 already possesses his or her own avatar at step 305, an analysis of the character message starts at step 310. If the client 110 does not possess the avatar, the avatar of the client is created at step 325. Avatar images corresponding to the meaning of the input character message are extracted at step 315, and are applied to the avatar of the client 110 at step 320.
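The flow of FIG. 3 can be sketched as follows, with invented stubs standing in for the analysis (step 310), extraction (step 315) and default-avatar contents; only the branch structure mirrors the figure.

```python
REGISTERED = {}  # client id -> registered base avatar (invented contents)

def render_message(client, message):
    """Steps 300-325 of FIG. 3 with stubbed-out server calls."""
    if client not in REGISTERED:                    # step 305: avatar exists?
        REGISTERED[client] = {"motion": "idle"}     # step 325: create avatar
    # step 310/315: analyze the message and extract matching components
    # (a one-token stub standing in for the natural language processing)
    components = {"motion": "wave"} if "hi" in message.lower().split() else {}
    avatar = dict(REGISTERED[client])
    avatar.update(components)                       # step 320: apply images
    return avatar
```

Note that step 325 creates the avatar before analysis proceeds, so a first-time client still receives a rendered result.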
FIG. 4 is a flowchart showing a process ranging from the input of a character message by a client to the creation of an avatar in terms of servers that intend to represent an avatar. Referring to FIG. 4, when the client 110 inputs a character message that is intended to be transmitted to the counterpart client 115 at step 400, the natural language processing server 100 searches for and analyzes the prefixes of the character message at step 405, searches for and analyzes the nouns of the character message at step 410, searches for and analyzes the suffixes of the character message at step 415, searches for and analyzes the newly-coined words of the character message at step 420, and performs natural language processing at step 425.
If the contents of the character message can be natural language processed at step 430, an extracted result of the natural language processing is transmitted to the natural language link unit 200 of the avatar server 105 at step 435, and avatar images corresponding to the emotion and motion of the character message are extracted at step 440. The extracted avatar images are inserted into the existing avatar of the client 110 at step 445, and an avatar to which the extracted avatar images are applied is created at step 450.
Thereafter, if a continuous avatar is necessary at step 455, the previous steps are repeated. Thereafter, the continuous avatar is created, and the contents of the character message are inserted into the created avatar at step 460.
If the contents of the character message cannot be natural language processed at step 430, a basic avatar is created and then the contents of the character message are inserted into the basic avatar at step 465.
FIG. 5 is an example view of a resulting screen on which an avatar message from the mobile communication terminal of a client, to which natural language processing in accordance with the embodiment of the present invention is applied, is displayed on the mobile communication terminal of a counterpart client. Referring to FIG. 5, in the use of the present invention, the client 110 inputs a character message via his or her mobile communication terminal or the Internet (input screen 500). The character message is natural language processed into an avatar, and a resulting screen 505 is displayed on the mobile communication terminal of the counterpart client 115.
When the message "take an umbrella" 500 is input, an avatar 505 in which an umbrella image and the expression "take" are inserted into a basic avatar is displayed on the mobile communication terminal or Internet screen of the counterpart client 115.
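An end-to-end trace of the FIG. 5 example might look like the sketch below; the image file names and the whitespace matching are invented placeholders, not the patent's actual data.

```python
# Hypothetical trace of the "take an umbrella" example of FIG. 5.
IMAGE_DB = {"umbrella": "umbrella.png", "take": "hand_held_out.png"}

def build_result_screen(message):
    """Overlay every matching image on the basic avatar and attach the
    contents of the character message, as in resulting screen 505."""
    overlays = [IMAGE_DB[w] for w in message.lower().split() if w in IMAGE_DB]
    return {"avatar": "basic", "overlays": overlays, "text": message}

screen = build_result_screen("take an umbrella")
```

Words with no registered image ("an" here) simply contribute nothing, which matches the fallback behavior of step 465 when nothing in a message can be natural language processed.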
Industrial Applicability
As described above, in accordance with the present invention, a client can easily and realistically express an intention to be transmitted to a counterpart client using the shape and motion of his or her avatar via a mobile communication terminal or the Internet, so that the intention of the client can be transmitted more engagingly and accurately than with existing avatar chat or message transmission.

Claims

1. A method of representing avatars corresponding to input character messages and contents of the character messages on mobile communication terminals or wired or wireless Internet using natural language processing technology, comprising: the first step of searching for natural language words of a character message through respective recognition units and analyzing contents of the character message in conjunction with a natural language processing database, at a natural language processing server; the second step of extracting avatar images to correspond to the analyzed contents of the character message with reference to a previously registered avatar database, at an avatar server; and the third step of applying the extracted avatar images to a previously registered avatar of a client.
2. The method as set forth in claim 1, wherein the first step further comprises the step of creating an avatar of the client if the avatar of the client does not exist.
3. A system for representing avatars corresponding to input character messages and contents of the character messages on mobile communication terminals or wired or wireless Internet using natural language processing technology, comprising: a natural language processing server (100) comprising natural language recognition units (220, 225, 230 and 235) for searching for natural language words of a character message input by a client, and a natural language processing unit (260) for analyzing and comprehending contents of the character message extracted by the natural language recognition units in conjunction with a natural language database (250); an avatar server (105) comprising a natural language link unit (200) for receiving the contents of the character message input by the client and comprehended by the natural language processing unit (260), an avatar image extraction unit (205) for extracting avatar images corresponding to the contents of the character message, which are transmitted from the natural language link unit (200), from an avatar image database (240), an avatar image application unit (210) for applying the avatar images, which are extracted by the avatar image extraction unit (205), to an avatar of the client previously registered in the registered avatar image database (245), and an avatar control unit (215) for controlling the avatar of the client to which natural language processing is applied and managing continuous motion to represent a meaning of the character message input by the client; and a central control unit (255) for controlling the natural language server (100) and the avatar server (105) and providing interfaces between the respective servers.
PCT/KR2004/000905 2003-04-21 2004-04-20 Method and system for expressing avatar that correspond to message and sentence inputted of using natural language processing technology WO2004095308A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR10-2003-0025134 2003-04-21
KR1020030025134A KR20040091331A (en) 2003-04-21 2003-04-21 Method and system for expressing avatar that correspond to message and sentence inputted of using natural language processing technology

Publications (1)

Publication Number Publication Date
WO2004095308A1 (en)

Family

ID=33308286

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2004/000905 WO2004095308A1 (en) 2003-04-21 2004-04-20 Method and system for expressing avatar that correspond to message and sentence inputted of using natural language processing technology

Country Status (2)

Country Link
KR (1) KR20040091331A (en)
WO (1) WO2004095308A1 (en)

US11128586B2 (en) 2019-12-09 2021-09-21 Snap Inc. Context sensitive avatar captions
US11140515B1 (en) 2019-12-30 2021-10-05 Snap Inc. Interfaces for relative device positioning
US11166123B1 (en) 2019-03-28 2021-11-02 Snap Inc. Grouped transmission of location data in a location sharing system
US11169658B2 (en) 2019-12-31 2021-11-09 Snap Inc. Combined map icon with action indicator
US11176737B2 (en) 2018-11-27 2021-11-16 Snap Inc. Textured mesh building
US11189070B2 (en) 2018-09-28 2021-11-30 Snap Inc. System and method of generating targeted user lists using customizable avatar characteristics
US11189098B2 (en) 2019-06-28 2021-11-30 Snap Inc. 3D object camera customization system
US11188190B2 (en) 2019-06-28 2021-11-30 Snap Inc. Generating animation overlays in a communication session
US11199957B1 (en) 2018-11-30 2021-12-14 Snap Inc. Generating customized avatars based on location information
US11217020B2 (en) 2020-03-16 2022-01-04 Snap Inc. 3D cutout image modification
US11218838B2 (en) 2019-10-31 2022-01-04 Snap Inc. Focused map-based context information surfacing
US11227442B1 (en) 2019-12-19 2022-01-18 Snap Inc. 3D captions with semantic graphical elements
US11229849B2 (en) 2012-05-08 2022-01-25 Snap Inc. System and method for generating and displaying avatars
US11245658B2 (en) 2018-09-28 2022-02-08 Snap Inc. System and method of generating private notifications between users in a communication session
US11263817B1 (en) 2019-12-19 2022-03-01 Snap Inc. 3D captions with face tracking
US11284144B2 (en) 2020-01-30 2022-03-22 Snap Inc. Video generation system to render frames on demand using a fleet of GPUs
US11294936B1 (en) 2019-01-30 2022-04-05 Snap Inc. Adaptive spatial density based clustering
US11307747B2 (en) 2019-07-11 2022-04-19 Snap Inc. Edge gesture interface with smart interactions
US11310176B2 (en) 2018-04-13 2022-04-19 Snap Inc. Content suggestion system
US11320969B2 (en) 2019-09-16 2022-05-03 Snap Inc. Messaging system with battery level sharing
US11356720B2 (en) 2020-01-30 2022-06-07 Snap Inc. Video generation system to render frames on demand
US11360733B2 (en) 2020-09-10 2022-06-14 Snap Inc. Colocated shared augmented reality without shared backend
US11392264B1 (en) 2018-04-27 2022-07-19 Snap Inc. Map-based graphical user interface for multi-type social media galleries

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9665563B2 (en) 2009-05-28 2017-05-30 Samsung Electronics Co., Ltd. Animation system and methods for generating animation based on text-based data and user information
EP3812950A1 (en) 2019-10-23 2021-04-28 Tata Consultancy Services Limited Method and system for creating an intelligent cartoon comic strip based on dynamic content

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5880731A (en) * 1995-12-14 1999-03-09 Microsoft Corporation Use of avatars with automatic gesturing and bounded interaction in on-line chat session
KR20010025161A (en) * 2000-06-02 2001-04-06 조양일 Method for providing an avatar maker
KR20010034987A (en) * 2000-06-22 2001-05-07 문경수 Method of using animated characters working together word in electronic mail or chatting on internet basis
KR20020042248A (en) * 2000-11-30 2002-06-05 한가람 Method and system for perceiving emotion from the text and visualizing the perceived emotion
US6453294B1 (en) * 2000-05-31 2002-09-17 International Business Machines Corporation Dynamic destination-determined multimedia avatars for interactive on-line communications

Cited By (92)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9105014B2 (en) 2009-02-03 2015-08-11 International Business Machines Corporation Interactive avatar in messaging environment
US9749270B2 (en) 2009-02-03 2017-08-29 Snap Inc. Interactive avatar in messaging environment
US10158589B2 (en) 2009-02-03 2018-12-18 Snap Inc. Interactive avatar in messaging environment
US11229849B2 (en) 2012-05-08 2022-01-25 Snap Inc. System and method for generating and displaying avatars
WO2015064903A1 (en) * 2013-10-31 2015-05-07 Samsung Electronics Co., Ltd. Displaying messages in an electronic device
US9641471B2 (en) 2013-10-31 2017-05-02 Samsung Electronics Co., Ltd. Electronic device, and method and computer-readable recording medium for displaying message in electronic device
US10991395B1 (en) 2014-02-05 2021-04-27 Snap Inc. Method for real time video processing involving changing a color of an object on a human face in a video
US11048916B2 (en) 2016-03-31 2021-06-29 Snap Inc. Automated avatar generation
US10984569B2 (en) 2016-06-30 2021-04-20 Snap Inc. Avatar based ideogram generation
US10855632B2 (en) 2016-07-19 2020-12-01 Snap Inc. Displaying customized electronic messaging graphics
US10848446B1 (en) 2016-07-19 2020-11-24 Snap Inc. Displaying customized electronic messaging graphics
US11100311B2 (en) 2016-10-19 2021-08-24 Snap Inc. Neural networks for facial modeling
US10880246B2 (en) 2016-10-24 2020-12-29 Snap Inc. Generating and displaying customized avatars in electronic messages
US10938758B2 (en) 2016-10-24 2021-03-02 Snap Inc. Generating and displaying customized avatars in media overlays
US11218433B2 (en) 2016-10-24 2022-01-04 Snap Inc. Generating and displaying customized avatars in electronic messages
US10951562B2 (en) 2017-01-18 2021-03-16 Snap Inc. Customized contextual media content item generation
US11069103B1 (en) 2017-04-20 2021-07-20 Snap Inc. Customized user interface for electronic communications
US11385763B2 (en) 2017-04-27 2022-07-12 Snap Inc. Map-based graphical user interface indicating geospatial activity metrics
US10963529B1 (en) 2017-04-27 2021-03-30 Snap Inc. Location-based search mechanism in a graphical user interface
US10952013B1 (en) 2017-04-27 2021-03-16 Snap Inc. Selective location-based identity communication
US11122094B2 (en) 2017-07-28 2021-09-14 Snap Inc. Software application manager for messaging applications
US11120597B2 (en) 2017-10-26 2021-09-14 Snap Inc. Joint audio-video facial animation system
US11354843B2 (en) 2017-10-30 2022-06-07 Snap Inc. Animated chat presence
US11030789B2 (en) 2017-10-30 2021-06-08 Snap Inc. Animated chat presence
US10936157B2 (en) 2017-11-29 2021-03-02 Snap Inc. Selectable item including a customized graphic for an electronic messaging application
US10949648B1 (en) 2018-01-23 2021-03-16 Snap Inc. Region-based stabilized face tracking
US10979752B1 (en) 2018-02-28 2021-04-13 Snap Inc. Generating media content items based on location information
US11120601B2 (en) 2018-02-28 2021-09-14 Snap Inc. Animated expressive icon
US11310176B2 (en) 2018-04-13 2022-04-19 Snap Inc. Content suggestion system
US11392264B1 (en) 2018-04-27 2022-07-19 Snap Inc. Map-based graphical user interface for multi-type social media galleries
US11074675B2 (en) 2018-07-31 2021-07-27 Snap Inc. Eye texture inpainting
US11030813B2 (en) 2018-08-30 2021-06-08 Snap Inc. Video clip object tracking
CN110880198A (en) * 2018-09-06 2020-03-13 百度在线网络技术(北京)有限公司 Animation generation method and device
US10896534B1 (en) 2018-09-19 2021-01-19 Snap Inc. Avatar style transformation using neural networks
US11348301B2 (en) 2018-09-19 2022-05-31 Snap Inc. Avatar style transformation using neural networks
US11294545B2 (en) 2018-09-25 2022-04-05 Snap Inc. Interface to display shared user groups
US10895964B1 (en) 2018-09-25 2021-01-19 Snap Inc. Interface to display shared user groups
US11189070B2 (en) 2018-09-28 2021-11-30 Snap Inc. System and method of generating targeted user lists using customizable avatar characteristics
US10904181B2 (en) 2018-09-28 2021-01-26 Snap Inc. Generating customized graphics having reactions to electronic message content
US11171902B2 (en) 2018-09-28 2021-11-09 Snap Inc. Generating customized graphics having reactions to electronic message content
US11245658B2 (en) 2018-09-28 2022-02-08 Snap Inc. System and method of generating private notifications between users in a communication session
US11103795B1 (en) 2018-10-31 2021-08-31 Snap Inc. Game drawer
US11321896B2 (en) 2018-10-31 2022-05-03 Snap Inc. 3D avatar rendering
US10872451B2 (en) 2018-10-31 2020-12-22 Snap Inc. 3D avatar rendering
US11176737B2 (en) 2018-11-27 2021-11-16 Snap Inc. Textured mesh building
US10902661B1 (en) 2018-11-28 2021-01-26 Snap Inc. Dynamic composite user identifier
US11315259B2 (en) 2018-11-30 2022-04-26 Snap Inc. Efficient human pose tracking in videos
US10861170B1 (en) 2018-11-30 2020-12-08 Snap Inc. Efficient human pose tracking in videos
US11199957B1 (en) 2018-11-30 2021-12-14 Snap Inc. Generating customized avatars based on location information
US11055514B1 (en) 2018-12-14 2021-07-06 Snap Inc. Image face manipulation
US11032670B1 (en) 2019-01-14 2021-06-08 Snap Inc. Destination sharing in location sharing system
US10945098B2 (en) 2019-01-16 2021-03-09 Snap Inc. Location-based context information sharing in a messaging system
US10939246B1 (en) 2019-01-16 2021-03-02 Snap Inc. Location-based context information sharing in a messaging system
US11294936B1 (en) 2019-01-30 2022-04-05 Snap Inc. Adaptive spatial density based clustering
US11010022B2 (en) 2019-02-06 2021-05-18 Snap Inc. Global event-based avatar
US10984575B2 (en) 2019-02-06 2021-04-20 Snap Inc. Body pose estimation
US11275439B2 (en) 2019-02-13 2022-03-15 Snap Inc. Sleep detection in a location sharing system
US10936066B1 (en) 2019-02-13 2021-03-02 Snap Inc. Sleep detection in a location sharing system
US10964082B2 (en) 2019-02-26 2021-03-30 Snap Inc. Avatar based on weather
US10852918B1 (en) 2019-03-08 2020-12-01 Snap Inc. Contextual information in chat
US11301117B2 (en) 2019-03-08 2022-04-12 Snap Inc. Contextual information in chat
US11166123B1 (en) 2019-03-28 2021-11-02 Snap Inc. Grouped transmission of location data in a location sharing system
US11039270B2 (en) 2019-03-28 2021-06-15 Snap Inc. Points of interest in a location sharing system
US10992619B2 (en) 2019-04-30 2021-04-27 Snap Inc. Messaging system with avatar generation
USD916811S1 (en) 2019-05-28 2021-04-20 Snap Inc. Display screen or portion thereof with a transitional graphical user interface
USD916809S1 (en) 2019-05-28 2021-04-20 Snap Inc. Display screen or portion thereof with a transitional graphical user interface
USD916872S1 (en) 2019-05-28 2021-04-20 Snap Inc. Display screen or portion thereof with a graphical user interface
USD916871S1 (en) 2019-05-28 2021-04-20 Snap Inc. Display screen or portion thereof with a transitional graphical user interface
USD916810S1 (en) 2019-05-28 2021-04-20 Snap Inc. Display screen or portion thereof with a graphical user interface
US10893385B1 (en) 2019-06-07 2021-01-12 Snap Inc. Detection of a physical collision between two client devices in a location sharing system
US11189098B2 (en) 2019-06-28 2021-11-30 Snap Inc. 3D object camera customization system
US11188190B2 (en) 2019-06-28 2021-11-30 Snap Inc. Generating animation overlays in a communication session
US11307747B2 (en) 2019-07-11 2022-04-19 Snap Inc. Edge gesture interface with smart interactions
US10911387B1 (en) 2019-08-12 2021-02-02 Snap Inc. Message reminder interface
US11320969B2 (en) 2019-09-16 2022-05-03 Snap Inc. Messaging system with battery level sharing
US11080917B2 (en) 2019-09-30 2021-08-03 Snap Inc. Dynamic parameterized user avatar stories
US11270491B2 (en) 2019-09-30 2022-03-08 Snap Inc. Dynamic parameterized user avatar stories
US11218838B2 (en) 2019-10-31 2022-01-04 Snap Inc. Focused map-based context information surfacing
US11063891B2 (en) 2019-12-03 2021-07-13 Snap Inc. Personalized avatar notification
US11128586B2 (en) 2019-12-09 2021-09-21 Snap Inc. Context sensitive avatar captions
US11036989B1 (en) 2019-12-11 2021-06-15 Snap Inc. Skeletal tracking using previous frames
US11227442B1 (en) 2019-12-19 2022-01-18 Snap Inc. 3D captions with semantic graphical elements
US11263817B1 (en) 2019-12-19 2022-03-01 Snap Inc. 3D captions with face tracking
US11140515B1 (en) 2019-12-30 2021-10-05 Snap Inc. Interfaces for relative device positioning
US11128715B1 (en) 2019-12-30 2021-09-21 Snap Inc. Physical friend proximity in chat
US11169658B2 (en) 2019-12-31 2021-11-09 Snap Inc. Combined map icon with action indicator
US11284144B2 (en) 2020-01-30 2022-03-22 Snap Inc. Video generation system to render frames on demand using a fleet of GPUs
US11263254B2 (en) 2020-01-30 2022-03-01 Snap Inc. Video generation system to render frames on demand using a fleet of servers
US11356720B2 (en) 2020-01-30 2022-06-07 Snap Inc. Video generation system to render frames on demand
US11036781B1 (en) 2020-01-30 2021-06-15 Snap Inc. Video generation system to render frames on demand using a fleet of servers
US11217020B2 (en) 2020-03-16 2022-01-04 Snap Inc. 3D cutout image modification
US11360733B2 (en) 2020-09-10 2022-06-14 Snap Inc. Colocated shared augmented reality without shared backend

Also Published As

Publication number Publication date
KR20040091331A (en) 2004-10-28

Similar Documents

Publication Publication Date Title
WO2004095308A1 (en) Method and system for expressing avatar that correspond to message and sentence inputted of using natural language processing technology
KR102050334B1 (en) Automatic suggestion responses to images received in messages, using the language model
KR101334066B1 (en) Self-evolving Artificial Intelligent cyber robot system and offer method
KR101925440B1 (en) Method for providing vr based live video chat service using conversational ai
US6526395B1 (en) Application of personality models and interaction with synthetic characters in a computing system
KR100841590B1 (en) Chat system, communication device, control method thereof and computer-readable information storage medium
KR20060125333A (en) A method for converting sms message to multimedia message and sending the multimedia message and text-image converting server
JP3135098U (en) E-mail image providing system
CN112152901A (en) Virtual image control method and device and electronic equipment
KR20190134080A (en) Apparatus for providing chatting service
KR101817342B1 (en) Method for making and selling a photo imoticon
KR20030026506A (en) System and method for interlocking process between emoticon and avatar
Kannan et al. Understanding emoticons: Perception and usage of emoticons in WhatsApp
KR101652486B1 (en) Sentiment communication system based on multiple multimodal agents
Itou et al. A Comic-Style Chat System with Japanese Expression Techniques for More Expressive Communication
CN112001929B (en) Picture asset processing method and device, storage medium and electronic device
CN112001930B (en) Picture asset processing method and device, storage medium and electronic device
KR100559287B1 (en) Chat system and method using animation of graphic images
KR20060104981A (en) System and method for interlocking process between emoticon and avatar
KR102268005B1 (en) Emotional Artificial Intelligence Curation System and method
KR101986153B1 (en) System and method for communication service using webtoon identification technology
CN111062207A (en) Expression image processing method and device, computer storage medium and electronic equipment
CN111897990A (en) Method, device and system for acquiring expression information
KR20190134055A (en) Method for providing chatting service in user treminal, and the program stored in medium for executing the method
KR20190134049A (en) User treminal for providing chatting service

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC, EPO FORM 1205A DATED 220606.

122 Ep: pct application non-entry in european phase