WO2002080107A1 - Text to visual speech system and method incorporating facial emotions - Google Patents
- Publication number
- WO2002080107A1 (PCT/IB2002/000860)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face image
- strings
- emoticon
- text
- facial
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
Definitions
- the present invention relates to text to visual speech systems, and more particularly relates to a system and method for utilizing emoticons to generate emotions in a face image.
- On-line chat is particularly useful in many situations since it allows users to communicate over a network in real-time by typing text messages back and forth to each other in a common message window.
- emoticons are often typed in to convey emotions and/or facial expressions in the messages. Examples of commonly used emoticons include :-) for a smiley face, :-( for displeasure, ;-) for a wink, :-o for shock, and :-< for sadness.
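The emoticon-to-emotion associations listed above can be sketched as a simple lookup table. This is an illustrative sketch only; the emotion labels are assumptions, not terms taken from the patent.

```python
# Hypothetical mapping from emoticon strings to emotion labels;
# the labels are illustrative and not taken from the patent text.
EMOTICONS = {
    ":-)": "happy",
    ":-(": "displeased",
    ";-)": "wink",
    ":-o": "shocked",
    ":-<": "sad",
}

def emotion_for(token):
    """Return the emotion label associated with an emoticon, or None."""
    return EMOTICONS.get(token)
```

An unrecognized token simply yields no emotion, which matches the idea that ordinary word strings carry no expression information by themselves.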
- a major advantage of audio-visual speech synthesis systems is that a view of an animated face image can improve intelligibility of both natural and synthetic speech significantly, especially under degraded acoustic conditions.
- because the face image is computer generated, it is possible to manipulate facial expressions to signal emotion, which can, among other things, add emphasis to the speech and support the interaction in a dialogue situation.
- "Text to visual speech" systems utilize a keyboard or the like to enter text, then convert the text into a spoken message, and broadcast the spoken message along with an animated face image.
- One of the limitations of text to visual speech systems is that because the author of the message is simply typing in text, the output (i.e., the animated face and spoken message) lacks emotion and facial expressions. Accordingly, text to visual speech systems tend to provide a somewhat sterile form of person to person communication.
- the present invention addresses the above-mentioned problems by providing a visual speech system in which expressed emotions on an animated face can be created by inputting emoticon strings.
- the invention provides a visual speech system, wherein the visual speech system comprises: a data import system for receiving text data that includes word strings and emoticon strings; and a text-to-animation system for generating a displayable animated face image that can reproduce facial movements corresponding to the received word strings and the received emoticon strings.
- the invention provides a program product stored on a recordable medium, which when executed provides a visual speech system, comprising: a data import system for receiving text data that includes word strings and emoticon strings; and a text-to-animation system for generating a displayable animated face image that can reproduce facial movements corresponding to the received word strings and the received emoticon strings.
- the invention provides an online chat system having visual speech capabilities, comprising: (1) a first networked client having: (a) a first data import system for receiving text data that includes word strings and emoticon strings, and (b) a data export system for sending the text data to a network; and (2) a second networked client having: (a) a second data import system for receiving the text data from the network, and (b) a text-to-animation system for generating a displayable animated face image that reproduces facial movements corresponding to the received word strings and the received emoticon strings contained in the text data.
- the invention provides a method of performing visual speech on a system having a displayable animated face image, comprising the steps of: entering text data into a keyboard, wherein the text data includes word strings and emoticon strings; converting the word strings to audio speech; converting the word strings to mouth movements on the displayable animated face image, such that the mouth movements correspond with the audio speech; converting the emoticon strings to facial movements on the displayable animated face image, such that the facial movements correspond with expressed emotions associated with the entered emoticon strings; and displaying the animated face image along with a broadcast of the audio speech.
- the invention provides a visual speech system, comprising a data import system for receiving text data that includes at least one emoticon string, wherein the at least one emoticon string is associated with a predetermined facial expression; and a text-to-animation system for generating a displayable animated face image that can simulate facial movements corresponding to the predetermined facial expression.
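A data import system that receives text data containing both word strings and emoticon strings, as described above, could separate the two token types as follows. The split logic and the emoticon inventory are assumptions for illustration; the patent does not specify an implementation.

```python
# Illustrative emoticon inventory; the patent does not fix a list.
EMOTICON_SET = {":-)", ":-(", ";-)", ":-o", ":-<", ">:-<"}

def split_text_data(text):
    """Separate whitespace-delimited text data into word strings
    (to be spoken) and emoticon strings (to drive expressions)."""
    words, emoticons = [], []
    for token in text.split():
        (emoticons if token in EMOTICON_SET else words).append(token)
    return words, emoticons
```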
- Fig. 1 depicts a block diagram of a visual speech system in accordance with a preferred embodiment of the present invention.
- Figs. 2 and 3 depict exemplary animated face images of the present invention.
- visual speech system 10 comprises a first client system 12 and a second client system 42 in communication with each other via network 40.
- a multiple client system as shown in Fig. 1 is particularly useful in online chat applications where a user at a first client system 12 is in communication with a user at a second client system 42.
- Each client system (e.g., client system 12) may be comprised of a stand-alone personal computer capable of executing a computer program, a browser program having access to applications available via a server, a dumb terminal in communication with a server, etc.
- Stored on each client system (or accessible to each client system) are executable processes that include an I/O system 20 and a text to speech video system 30.
- I/O system 20 and text to speech video system 30 may be implemented as software programs, executable on a processing unit.
- Each client system also includes: (1) an input system 14, such as a keyboard, mouse, hand held device, cell phone, voice recognition system, etc., for entering text data; and (2) an audio-visual output system comprised of, for example, a CRT display 16 and audio speaker 18.
- An exemplary operation of visual speech system 10 is described as follows.
- a first user at client system 12 can input text data via input system 14, and a corresponding animated face image and accompanying audio speech will be generated and appear on display 46 and speaker 48 of client system 42.
- a second user at client system 42 can respond by inputting text data via input system 44, and a second corresponding animated face image and accompanying audio speech will be generated and appear on display 16 and speaker 18 of client system 12.
- the inputted text data is converted into a remote audio-visual broadcast comprised of a moving animated face image that simulates speech. Therefore, rather than just receiving a text message, a user will receive a video speech broadcast containing the message.
- the user sending the message can not only input words, but also input emoticon strings that will cause the animated image being displayed to incorporate facial expressions and emotions.
- the terms "facial expression" and "emotion" are used interchangeably herein, and may include any type of non-verbal facial movement.
- if the user at client system 12 wanted to indicate pleasure or happiness along with the inputted word strings, the user could also type in an appropriate emoticon string, i.e., a smiley face :-).
- the resulting animated image on display 46 would then smile while speaking the words inputted at the first client system.
- Other emotions may include a wink, sad face, laugh, surprise, etc.
- numerous emoticons are regularly used in chat rooms, email, and other forms of online communication to indicate an emotion or the like.
- Each of these emoticons, as well as others not listed therein, may have an associated facial response that could be incorporated into a displayable animated face image.
- the facial expression and/or emotional response could appear after or before any spoken words, or preferably, be morphed into and along with the spoken words to provide a smooth transition for each message.
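The morphing described above — blending an expression into and along with the spoken words rather than switching abruptly — can be sketched as a linear ramp on action-unit weights. The function names and the linear-interpolation choice are assumptions; the patent does not specify a morphing algorithm.

```python
def blend(start, target, alpha):
    """Linearly interpolate one action-unit weight; alpha in [0, 1]."""
    return start + alpha * (target - start)

def morph_schedule(n_frames, target_weights):
    """Ramp each action-unit weight from neutral (0.0) up to its
    target across n_frames, giving a smooth transition into the
    expression rather than an abrupt switch."""
    frames = []
    for i in range(n_frames):
        alpha = i / (n_frames - 1) if n_frames > 1 else 1.0
        frames.append({au: blend(0.0, w, alpha)
                       for au, w in target_weights.items()})
    return frames
```

Running the ramp in reverse at the end of an utterance would return the face to its neutral expression.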
- Figs. 2 and 3 depict two examples of a displayable animated face image having different emotional or facial expressions.
- in Fig. 2, the subject is depicted with a neutral facial expression (no inputted emoticon), while Fig. 3 depicts the subject with an angry facial expression (resulting from an angry emoticon string >:-<).
- the animated face image may morph the talking movements together with the display of emotion.
- the animated face images of Figures 2 and 3 may comprise face geometries that are modeled as triangular-mesh-based 3D objects. Image or photometry data may or may not be superimposed on the geometry to obtain a face image.
- the face image may be handled as an object that is divided into a plurality of action units, such as eyebrows, eyes, mouth, etc.
- one or more of the action units can be simulated according to a predetermined combination and degree.
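The "predetermined combination and degree" of action units described above can be sketched as a table mapping each expression to a set of action-unit activations. The unit names and degree values below are illustrative assumptions, not values from the patent.

```python
# Hypothetical action-unit activations (0.0 = neutral, 1.0 = full);
# the unit names and degrees are illustrative, not from the patent.
EXPRESSIONS = {
    "neutral": {},
    "happy": {"mouth_corners_up": 0.8, "eyes_narrow": 0.3},
    "angry": {"brows_down": 0.9, "mouth_corners_down": 0.6},
}

def action_units(expression):
    """Return the combination and degree of action units that
    simulate the named expression (empty dict = neutral face)."""
    return EXPRESSIONS.get(expression, {})
```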
- text data is entered into a first client system 12 via input system 14.
- the text data may comprise both word strings and emoticon strings.
- the data is received by data import system 26 of I/O system 20.
- the text data may be processed for display at display 16 of client system 12 (i.e. locally), and/or passed along to client system 42 for remote display.
- client system 12 may send the text data using data export system 28, which would export the data to network 40.
- client system 42 could then import the data using data import system 27.
- the imported text data could then be passed along to text-to-speech video system 31 for processing.
- Text-to-speech video system 31 has two primary functions: first, to convert the text data into audio speech; and second, to convert the text data into action units that correspond to displayable facial movements. Conversion of the text data to speech is handled by text-to-audio system 33. Systems for converting text to speech are well known in the art. The process of converting text data to facial movements is handled by text-to-animation system 35. Text-to-animation system 35 has two components, word string processor 37 and emoticon string processor 39. Word string processor 37 is primarily responsible for mouth movements associated with word strings that will be broadcast as spoken words. Accordingly, word string processor 37 primarily controls the facial action unit comprised of the mouth in the displayable facial image.
- Emoticon string processor 39 is responsible for processing the received emoticon strings and converting them to corresponding facial expressions. Accordingly, emoticon string processor 39 is responsible for controlling all of the facial action units in order to achieve the appropriate facial response. It should be understood that any type, combination and degree of facial movement may be utilized to create a desired expression.
- Text-to-animation system 35 thus creates a complete animated facial image comprised of both mouth movements for speech and assorted facial movements for expressions.
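The division of labor described above — a word string processor driving mouth movements and an emoticon string processor driving the remaining facial action units — can be sketched as follows. All class names, method names, and action-unit values are assumptions for illustration; a real system would derive per-phoneme visemes rather than one placeholder mouth shape per word.

```python
class WordStringProcessor:
    """Maps word strings to per-frame mouth movements (stub visemes)."""
    def process(self, words):
        # One placeholder mouth shape per word; a real system would
        # compute viseme sequences from the phonemes of each word.
        return [{"mouth_open": 0.5} for _ in words]

class EmoticonStringProcessor:
    """Maps emoticon strings to facial action-unit settings."""
    EMOTIONS = {":-)": {"mouth_corners_up": 0.8},
                ">:-<": {"brows_down": 0.9}}
    def process(self, emoticons):
        settings = {}
        for e in emoticons:
            settings.update(self.EMOTIONS.get(e, {}))
        return settings

class TextToAnimationSystem:
    """Merges the expression settings into every mouth-movement frame,
    so the face holds the expression while it speaks."""
    def __init__(self):
        self.words = WordStringProcessor()
        self.emoticons = EmoticonStringProcessor()
    def animate(self, word_strings, emoticon_strings):
        expression = self.emoticons.process(emoticon_strings)
        frames = self.words.process(word_strings)
        return [{**expression, **frame} for frame in frames]
```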
- Accompanying the animated facial image is the speech associated with the word strings.
- a display driver 23 and audio driver 25 can be utilized to generate the audio and visual information on display 46 and speaker 48.
- each client system may include essentially the same software for communicating and generating visual speech. Accordingly, when client system 42 communicates a responsive message back to client system 12, the same processing steps as those described above are implemented on client system 12 by I/O system 20 and text to speech video system 30.
- systems, functions, mechanisms, and modules described herein can be implemented in hardware, software, or a combination of hardware and software. They may be implemented by any type of computer system or other apparatus adapted for carrying out the methods described herein.
- a typical combination of hardware and software could be a general-purpose computer system with a computer program that, when loaded and executed, controls the computer system such that it carries out the methods described herein.
- a specific use computer containing specialized hardware for carrying out one or more of the functional tasks of the invention could be utilized.
- the present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods and functions described herein, and which - when loaded in a computer system - is able to carry out these methods and functions.
- Computer program, software program, program, program product, or software in the present context mean any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP02705014A EP1374179A1 (en) | 2001-03-29 | 2002-03-19 | Text to visual speech system and method incorporating facial emotions |
JP2002578253A JP2004519787A (ja) | 2001-03-29 | 2002-03-19 | 顔の感情を取り入れたテキスト視覚音声化システム及び方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/821,138 US20020194006A1 (en) | 2001-03-29 | 2001-03-29 | Text to visual speech system and method incorporating facial emotions |
US09/821,138 | 2001-03-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2002080107A1 (en) | 2002-10-10 |
Family
ID=25232620
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2002/000860 WO2002080107A1 (en) | 2001-03-29 | 2002-03-19 | Text to visual speech system and method incorporating facial emotions |
Country Status (6)
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040039771A (ko) * | 2002-11-04 | 2004-05-12 | 김남조 | 이모티콘 사운드 재생 장치 및 방법 |
EP1523160A1 (en) * | 2003-10-10 | 2005-04-13 | Nec Corporation | Apparatus and method for sending messages which indicate an emotional state |
EP1528483A1 (en) * | 2003-10-30 | 2005-05-04 | Nec Corporation | Device and method for displaying a text message together with information on emotional content of the message |
EP1575025A1 (en) * | 2002-12-20 | 2005-09-14 | Sony Electronics Inc. | Text display terminal device and server |
EP1942601A1 (en) * | 2006-12-29 | 2008-07-09 | Union Creations Limited | Device and method of expressing information in a communication message sent through a network |
WO2008096099A1 (en) | 2007-02-05 | 2008-08-14 | Amegoworld Ltd | A communication network and devices for text to speech and text to facial animation conversion |
WO2010034362A1 (en) * | 2008-09-23 | 2010-04-01 | Sony Ericsson Mobile Communications Ab | Methods and devices for controlling a presentation of an object |
US7805307B2 (en) | 2003-09-30 | 2010-09-28 | Sharp Laboratories Of America, Inc. | Text to speech conversion system |
EP2256642A3 (en) * | 2009-05-28 | 2011-05-04 | Samsung Electronics Co., Ltd. | Animation system for generating animation based on text-based data and user information |
CN104053131A (zh) * | 2013-03-12 | 2014-09-17 | 华为技术有限公司 | 一种文本通讯信息处理方法及相关设备 |
US9288303B1 (en) | 2014-09-18 | 2016-03-15 | Twin Harbor Labs, LLC | FaceBack—automated response capture using text messaging |
KR20180105005A (ko) * | 2017-03-14 | 2018-09-27 | 이명철 | 감성 콘텐츠 적용이 가능한 텍스트 에디터 지원 시스템 |
KR102053076B1 (ko) * | 2018-07-09 | 2019-12-06 | 주식회사 한글과컴퓨터 | 감성 분석 기반의 스타일 적용이 가능한 문서 편집 장치 및 그 동작 방법 |
Families Citing this family (78)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002132663A (ja) * | 2000-10-20 | 2002-05-10 | Nec Corp | 情報通信システムとその通信方法、及び通信プログラムを記録した記録媒体 |
US6963839B1 (en) * | 2000-11-03 | 2005-11-08 | At&T Corp. | System and method of controlling sound in a multi-media communication application |
US20080040227A1 (en) | 2000-11-03 | 2008-02-14 | At&T Corp. | System and method of marketing using a multi-media communication system |
US6990452B1 (en) | 2000-11-03 | 2006-01-24 | At&T Corp. | Method for sending multi-media messages using emoticons |
US6976082B1 (en) | 2000-11-03 | 2005-12-13 | At&T Corp. | System and method for receiving multi-media messages |
US7203648B1 (en) | 2000-11-03 | 2007-04-10 | At&T Corp. | Method for sending multi-media messages with customized audio |
US7035803B1 (en) | 2000-11-03 | 2006-04-25 | At&T Corp. | Method for sending multi-media messages using customizable background images |
US7091976B1 (en) | 2000-11-03 | 2006-08-15 | At&T Corp. | System and method of customizing animated entities for use in a multi-media communication application |
EP1343389B1 (en) * | 2000-11-17 | 2008-05-07 | Tate & Lyle Technology Limited | Meltable form of sucralose |
JP2002268665A (ja) * | 2001-03-13 | 2002-09-20 | Oki Electric Ind Co Ltd | テキスト音声合成装置 |
US6980333B2 (en) * | 2001-04-11 | 2005-12-27 | Eastman Kodak Company | Personalized motion imaging system |
US7080139B1 (en) | 2001-04-24 | 2006-07-18 | Fatbubble, Inc | Method and apparatus for selectively sharing and passively tracking communication device experiences |
US7085259B2 (en) * | 2001-07-31 | 2006-08-01 | Comverse, Inc. | Animated audio messaging |
WO2003028386A2 (en) * | 2001-09-25 | 2003-04-03 | Wildseed, Ltd. | Wireless mobile image messaging |
US7671861B1 (en) | 2001-11-02 | 2010-03-02 | At&T Intellectual Property Ii, L.P. | Apparatus and method of customizing animated entities for use in a multi-media communication application |
US7224851B2 (en) * | 2001-12-04 | 2007-05-29 | Fujifilm Corporation | Method and apparatus for registering modification pattern of transmission image and method and apparatus for reproducing the same |
US7401020B2 (en) * | 2002-11-29 | 2008-07-15 | International Business Machines Corporation | Application of emotion-based intonation and prosody to speech in text-to-speech systems |
US7168953B1 (en) * | 2003-01-27 | 2007-01-30 | Massachusetts Institute Of Technology | Trainable videorealistic speech animation |
US7539727B2 (en) | 2003-07-01 | 2009-05-26 | Microsoft Corporation | Instant messaging object store |
US7363378B2 (en) | 2003-07-01 | 2008-04-22 | Microsoft Corporation | Transport system for instant messaging |
US7607097B2 (en) * | 2003-09-25 | 2009-10-20 | International Business Machines Corporation | Translating emotion to braille, emoticons and other special symbols |
US8523572B2 (en) * | 2003-11-19 | 2013-09-03 | Raanan Liebermann | Touch language |
US20050131697A1 (en) * | 2003-12-10 | 2005-06-16 | International Business Machines Corporation | Speech improving apparatus, system and method |
US20050131744A1 (en) * | 2003-12-10 | 2005-06-16 | International Business Machines Corporation | Apparatus, system and method of automatically identifying participants at a videoconference who exhibit a particular expression |
US8171084B2 (en) * | 2004-01-20 | 2012-05-01 | Microsoft Corporation | Custom emoticons |
JP3930489B2 (ja) * | 2004-03-31 | 2007-06-13 | 株式会社コナミデジタルエンタテインメント | チャットシステム、通信装置、その制御方法及びプログラム |
CN100371889C (zh) * | 2004-07-08 | 2008-02-27 | 腾讯科技(深圳)有限公司 | 一种在即时通讯工具软件中使用表情符号的方法 |
US20060089147A1 (en) * | 2004-10-21 | 2006-04-27 | Beaty Robert M | Mobile network infrastructure for applications, personalized user interfaces, and services |
US7433700B2 (en) | 2004-11-12 | 2008-10-07 | Microsoft Corporation | Strategies for peer-to-peer instant messaging |
GB2422454A (en) * | 2005-01-22 | 2006-07-26 | Siemens Plc | A system for communicating user emotion |
JP2006263122A (ja) * | 2005-03-24 | 2006-10-05 | Sega Corp | ゲーム装置、ゲームシステム、ゲームデータの処理方法及びこのゲームデータの処理方法ためのプログラム並びに記憶媒体 |
EP1866810A1 (en) * | 2005-04-04 | 2007-12-19 | MOR(F) Dynamics Pty Ltd | Method for transforming language into a visual form |
US7529255B2 (en) * | 2005-04-21 | 2009-05-05 | Microsoft Corporation | Peer-to-peer multicasting using multiple transport protocols |
US20070061814A1 (en) * | 2005-09-13 | 2007-03-15 | Choi Andrew C | Method and apparatus for transparently interfacing a computer peripheral with a messaging system |
EP1771002B1 (en) * | 2005-09-30 | 2017-12-27 | LG Electronics Inc. | Mobile video communication terminal |
US20070143410A1 (en) * | 2005-12-16 | 2007-06-21 | International Business Machines Corporation | System and method for defining and translating chat abbreviations |
KR20070091962A (ko) * | 2006-03-08 | 2007-09-12 | 한국방송공사 | 애니메이션을 이용한 디엠비 데이터 방송의 나레이션 제공방법 및 이를 구현하기 위한 프로그램이 저장된 컴퓨터로판독 가능한 기록매체 |
US7571101B2 (en) * | 2006-05-25 | 2009-08-04 | Charles Humble | Quantifying psychological stress levels using voice patterns |
US8340956B2 (en) * | 2006-05-26 | 2012-12-25 | Nec Corporation | Information provision system, information provision method, information provision program, and information provision program recording medium |
US7640304B1 (en) * | 2006-06-14 | 2009-12-29 | Yes International Ag | System and method for detecting and measuring emotional indicia |
US7966567B2 (en) * | 2007-07-12 | 2011-06-21 | Center'd Corp. | Character expression in a geo-spatial environment |
TWI454955B (zh) * | 2006-12-29 | 2014-10-01 | Nuance Communications Inc | 使用模型檔產生動畫的方法及電腦可讀取的訊號承載媒體 |
JP4930584B2 (ja) * | 2007-03-20 | 2012-05-16 | 富士通株式会社 | 音声合成装置、音声合成システム、言語処理装置、音声合成方法及びコンピュータプログラム |
CN101072207B (zh) * | 2007-06-22 | 2010-09-08 | 腾讯科技(深圳)有限公司 | 即时通讯工具中的交流方法及即时通讯工具 |
US20090048840A1 (en) * | 2007-08-13 | 2009-02-19 | Teng-Feng Lin | Device for converting instant message into audio or visual response |
US20090082045A1 (en) * | 2007-09-26 | 2009-03-26 | Blastmsgs Inc. | Blast video messages systems and methods |
CN101287093B (zh) * | 2008-05-30 | 2010-06-09 | 北京中星微电子有限公司 | 在视频通信中添加特效的方法及视频客户端 |
US8542237B2 (en) * | 2008-06-23 | 2013-09-24 | Microsoft Corporation | Parametric font animation |
US20100228776A1 (en) * | 2009-03-09 | 2010-09-09 | Melkote Ramaswamy N | System, mechanisms, methods and services for the creation, interaction and consumption of searchable, context relevant, multimedia collages composited from heterogeneous sources |
CN102289339B (zh) * | 2010-06-21 | 2013-10-30 | 腾讯科技(深圳)有限公司 | 一种显示表情信息的方法及装置 |
EP2626794A4 (en) * | 2010-10-08 | 2018-01-10 | NEC Corporation | Character conversion system and character conversion method and computer program |
US8751228B2 (en) * | 2010-11-04 | 2014-06-10 | Microsoft Corporation | Minimum converted trajectory error (MCTE) audio-to-video engine |
US20120130717A1 (en) * | 2010-11-19 | 2012-05-24 | Microsoft Corporation | Real-time Animation for an Expressive Avatar |
US20120136660A1 (en) * | 2010-11-30 | 2012-05-31 | Alcatel-Lucent Usa Inc. | Voice-estimation based on real-time probing of the vocal tract |
US20140025385A1 (en) * | 2010-12-30 | 2014-01-23 | Nokia Corporation | Method, Apparatus and Computer Program Product for Emotion Detection |
US8559813B2 (en) | 2011-03-31 | 2013-10-15 | Alcatel Lucent | Passband reflectometer |
CN102271096A (zh) * | 2011-07-27 | 2011-12-07 | 苏州巴米特信息科技有限公司 | 一种特色聊天系统 |
TWI482108B (zh) | 2011-12-29 | 2015-04-21 | Univ Nat Taiwan | To bring virtual social networks into real-life social systems and methods |
US9331970B2 (en) * | 2012-12-05 | 2016-05-03 | Facebook, Inc. | Replacing typed emoticon with user photo |
CN103475991A (zh) * | 2013-08-09 | 2013-12-25 | 刘波涌 | 实现角色扮演的方法和系统 |
GB201401046D0 (en) * | 2014-01-22 | 2014-03-05 | Iedutainments Ltd | Searching and content delivery system |
CN105282621A (zh) * | 2014-07-22 | 2016-01-27 | 中兴通讯股份有限公司 | 一种语音消息可视化服务的实现方法及装置 |
WO2016045015A1 (en) * | 2014-09-24 | 2016-03-31 | Intel Corporation | Avatar audio communication systems and techniques |
EP3216008B1 (en) * | 2014-11-05 | 2020-02-26 | Intel Corporation | Avatar video apparatus and method |
CN104639425B (zh) * | 2015-01-06 | 2018-02-09 | 广州华多网络科技有限公司 | 一种网络表情播放方法、系统和服务设备 |
US10133918B1 (en) * | 2015-04-20 | 2018-11-20 | Snap Inc. | Generating a mood log based on user images |
CN104899814A (zh) * | 2015-05-08 | 2015-09-09 | 努比亚技术有限公司 | 一种智能提醒健康饮食的方法及终端 |
WO2017137947A1 (en) * | 2016-02-10 | 2017-08-17 | Vats Nitin | Producing realistic talking face with expression using images text and voice |
CN105763424B (zh) * | 2016-03-22 | 2019-05-07 | 网易有道信息技术(北京)有限公司 | 一种文字信息处理方法和装置 |
CN105931631A (zh) * | 2016-04-15 | 2016-09-07 | 北京地平线机器人技术研发有限公司 | 语音合成系统和方法 |
US10168859B2 (en) | 2016-04-26 | 2019-01-01 | International Business Machines Corporation | Contextual determination of emotion icons |
US9973456B2 (en) | 2016-07-22 | 2018-05-15 | Strip Messenger | Messaging as a graphical comic strip |
US9684430B1 (en) * | 2016-07-27 | 2017-06-20 | Strip Messenger | Linguistic and icon based message conversion for virtual environments and objects |
US10225621B1 (en) | 2017-12-20 | 2019-03-05 | Dish Network L.L.C. | Eyes free entertainment |
US20200279553A1 (en) * | 2019-02-28 | 2020-09-03 | Microsoft Technology Licensing, Llc | Linguistic style matching agent |
CN110991427B (zh) * | 2019-12-25 | 2023-07-14 | 北京百度网讯科技有限公司 | 用于视频的情绪识别方法、装置和计算机设备 |
CN112184858B (zh) | 2020-09-01 | 2021-12-07 | 魔珐(上海)信息科技有限公司 | 基于文本的虚拟对象动画生成方法及装置、存储介质、终端 |
CN112188304B (zh) * | 2020-09-28 | 2022-11-15 | 广州酷狗计算机科技有限公司 | 视频生成方法、装置、终端及存储介质 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0883090A2 (en) * | 1997-06-06 | 1998-12-09 | AT&T Corp. | Method for generating photo-realistic animated characters |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5689618A (en) * | 1991-02-19 | 1997-11-18 | Bright Star Technology, Inc. | Advanced tools for speech synchronized animation |
US5878396A (en) * | 1993-01-21 | 1999-03-02 | Apple Computer, Inc. | Method and apparatus for synthetic speech in facial animation |
US5880731A (en) * | 1995-12-14 | 1999-03-09 | Microsoft Corporation | Use of avatars with automatic gesturing and bounded interaction in on-line chat session |
US6069622A (en) * | 1996-03-08 | 2000-05-30 | Microsoft Corporation | Method and system for generating comic panels |
US6064383A (en) * | 1996-10-04 | 2000-05-16 | Microsoft Corporation | Method and system for selecting an emotional appearance and prosody for a graphical character |
US5963217A (en) * | 1996-11-18 | 1999-10-05 | 7Thstreet.Com, Inc. | Network conference system using limited bandwidth to generate locally animated displays |
SE520065C2 (sv) * | 1997-03-25 | 2003-05-20 | Telia Ab | Anordning och metod för prosodigenerering vid visuell talsyntes |
US5983190A (en) * | 1997-05-19 | 1999-11-09 | Microsoft Corporation | Client server animation system for managing interactive user interface characters |
US6112177A (en) * | 1997-11-07 | 2000-08-29 | At&T Corp. | Coarticulation method for audio-visual text-to-speech synthesis |
US6522333B1 (en) * | 1999-10-08 | 2003-02-18 | Electronic Arts Inc. | Remote communication through visual representations |
US6539354B1 (en) * | 2000-03-24 | 2003-03-25 | Fluent Speech Technologies, Inc. | Methods and devices for producing and using synthetic visual speech based on natural coarticulation |
AU2001255787A1 (en) * | 2000-05-01 | 2001-11-12 | Lifef/X Networks, Inc. | Virtual representatives for use as communications tools |
US6453294B1 (en) * | 2000-05-31 | 2002-09-17 | International Business Machines Corporation | Dynamic destination-determined multimedia avatars for interactive on-line communications |
US6963839B1 (en) * | 2000-11-03 | 2005-11-08 | At&T Corp. | System and method of controlling sound in a multi-media communication application |
-
2001
- 2001-03-29 US US09/821,138 patent/US20020194006A1/en not_active Abandoned
-
2002
- 2002-03-19 EP EP02705014A patent/EP1374179A1/en not_active Withdrawn
- 2002-03-19 JP JP2002578253A patent/JP2004519787A/ja not_active Withdrawn
- 2002-03-19 CN CN02800938A patent/CN1460232A/zh active Pending
- 2002-03-19 KR KR1020027016111A patent/KR20030007726A/ko not_active Application Discontinuation
- 2002-03-19 WO PCT/IB2002/000860 patent/WO2002080107A1/en not_active Application Discontinuation
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0883090A2 (en) * | 1997-06-06 | 1998-12-09 | AT&T Corp. | Method for generating photo-realistic animated characters |
Non-Patent Citations (2)
Title |
---|
PARK E.A.: "Advanced model-based image coding scheme", FIFTH INTERNATIONAL SYMPOSIUM ON SIGNAL PROCESSING AND ITS APPLICATIONS, 22 August 1999 (1999-08-22) - 25 August 1999 (1999-08-25), Brisbane, Australia, pages 817 - 820, XP000937955 * |
SHIGEO MORISHIMA ET AL: "A FACIAL MOTION SYNTHESIS FOR INTELLIGENT MAN-MACHINE INTERFACE", SYSTEMS & COMPUTERS IN JAPAN, SCRIPTA TECHNICA JOURNALS. NEW YORK, US, vol. 22, no. 5, 1991, pages 50 - 59, XP000240754, ISSN: 0882-1666 * |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040039771A (ko) * | 2002-11-04 | 2004-05-12 | 김남조 | 이모티콘 사운드 재생 장치 및 방법 |
EP1575025A4 (en) * | 2002-12-20 | 2010-01-13 | Sony Electronics Inc | TEXT DISPLAY TERMINAL AND SERVER |
EP1575025A1 (en) * | 2002-12-20 | 2005-09-14 | Sony Electronics Inc. | Text display terminal device and server |
US7805307B2 (en) | 2003-09-30 | 2010-09-28 | Sharp Laboratories Of America, Inc. | Text to speech conversion system |
EP1523160A1 (en) * | 2003-10-10 | 2005-04-13 | Nec Corporation | Apparatus and method for sending messages which indicate an emotional state |
EP1528483A1 (en) * | 2003-10-30 | 2005-05-04 | Nec Corporation | Device and method for displaying a text message together with information on emotional content of the message |
US7570814B2 (en) | 2003-10-30 | 2009-08-04 | Nec Corporation | Data processing device, data processing method, and electronic device |
EP1942601A1 (en) * | 2006-12-29 | 2008-07-09 | Union Creations Limited | Device and method of expressing information in a communication message sent through a network |
WO2008096099A1 (en) | 2007-02-05 | 2008-08-14 | Amegoworld Ltd | A communication network and devices for text to speech and text to facial animation conversion |
GB2459073A (en) * | 2007-02-05 | 2009-10-14 | Amegoworld Ltd | A communication network and devices for text to speech and text to facial animation conversion |
GB2459073B (en) * | 2007-02-05 | 2011-10-12 | Amegoworld Ltd | A communication network and devices |
AU2007346312B2 (en) * | 2007-02-05 | 2012-04-26 | Amegoworld Ltd | A communication network and devices for text to speech and text to facial animation conversion |
WO2010034362A1 (en) * | 2008-09-23 | 2010-04-01 | Sony Ericsson Mobile Communications Ab | Methods and devices for controlling a presentation of an object |
EP2256642A3 (en) * | 2009-05-28 | 2011-05-04 | Samsung Electronics Co., Ltd. | Animation system for generating animation based on text-based data and user information |
US9665563B2 (en) | 2009-05-28 | 2017-05-30 | Samsung Electronics Co., Ltd. | Animation system and methods for generating animation based on text-based data and user information |
CN104053131A (zh) * | 2013-03-12 | 2014-09-17 | 华为技术有限公司 | Text communication information processing method and related device |
US9288303B1 (en) | 2014-09-18 | 2016-03-15 | Twin Harbor Labs, LLC | FaceBack—automated response capture using text messaging |
KR20180105005A (ko) * | 2017-03-14 | 이명철 | Text editor support system capable of applying emotional content |
KR101994803B1 (ko) | 2017-03-14 | 2019-07-01 | 이명철 | Text editor support system capable of applying emotional content |
KR102053076B1 (ko) * | 2018-07-09 | 2019-12-06 | 주식회사 한글과컴퓨터 | Document editing device capable of applying styles based on sentiment analysis, and operating method thereof |
Also Published As
Publication number | Publication date |
---|---|
US20020194006A1 (en) | 2002-12-19 |
EP1374179A1 (en) | 2004-01-02 |
JP2004519787A (ja) | 2004-07-02 |
CN1460232A (zh) | 2003-12-03 |
KR20030007726A (ko) | 2003-01-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20020194006A1 (en) | Text to visual speech system and method incorporating facial emotions | |
KR102503413B1 (ko) | Animation interaction method, apparatus, device, and storage medium | |
US9667574B2 (en) | Animated delivery of electronic messages | |
US20020007276A1 (en) | Virtual representatives for use as communications tools | |
TWI454955B (zh) | Method for generating animation using a model file, and computer-readable signal-bearing medium | |
US11657557B2 (en) | Method and system for generating data to provide an animated visual representation | |
US20030163315A1 (en) | Method and system for generating caricaturized talking heads | |
US20030149569A1 (en) | Character animation | |
US20040220812A1 (en) | Speech-controlled animation system | |
US11005796B2 (en) | Animated delivery of electronic messages | |
WO2007098560A1 (en) | An emotion recognition system and method | |
Morishima | Real-time talking head driven by voice and its application to communication and entertainment | |
KR20160010810A (ko) | Method and system for generating a photorealistic character capable of expressing real voice | |
KR100300966B1 (ko) | Animation chat system and method | |
Morishima et al. | Face-to-face communicative avatar driven by voice | |
Luerssen et al. | Head X: Customizable audiovisual synthesis for a multi-purpose virtual head | |
Prasetyahadi et al. | Eye lip and crying expression for virtual human | |
Barakonyi et al. | Communicating Multimodal information on the WWW using a lifelike, animated 3D agent | |
CN115766971A (zh) | Presentation video generation method, apparatus, electronic device, and readable storage medium | |
Emura et al. | Personal Media Producer: A System for Creating 3D CG Animation from Mobile Phone E-mail. | |
Morishima | Real-time voice driven facial animation system | |
Pandžic | Multimodal HCI Output: Facial Motion, Gestures and Synthesised Speech Synchronisation | |
Filntisis et al. | Video-realistic expressive audio-visual speech synthesis for the Greek | |
Hasegawa et al. | Processing of facial information by computer | |
Karunaratne et al. | Techniques for modelling and training multimedia expressive talking heads |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1; Designated state(s): CN JP KR |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
| WWE | Wipo information: entry into national phase | Ref document number: 2002705014; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 1020027016111; Country of ref document: KR |
| WWE | Wipo information: entry into national phase | Ref document number: 02800938X; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWP | Wipo information: published in national office | Ref document number: 1020027016111; Country of ref document: KR |
| WWE | Wipo information: entry into national phase | Ref document number: 2002578253; Country of ref document: JP |
| WWP | Wipo information: published in national office | Ref document number: 2002705014; Country of ref document: EP |
| WWW | Wipo information: withdrawn in national office | Ref document number: 2002705014; Country of ref document: EP |