WO2010078972A2 - Method and arrangement for handling non-textual information - Google Patents
- Publication number
- WO2010078972A2 WO2010078972A2 PCT/EP2009/058771 EP2009058771W WO2010078972A2 WO 2010078972 A2 WO2010078972 A2 WO 2010078972A2 EP 2009058771 W EP2009058771 W EP 2009058771W WO 2010078972 A2 WO2010078972 A2 WO 2010078972A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data set
- information
- textual information
- instructions
- user
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72436—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. SMS or e-mail
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/52—Details of telephonic subscriber devices including functional features of a camera
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/70—Details of telephonic subscriber devices methods for entering alphabetical characters, e.g. multi-tap or dictionary disambiguation
Abstract
The present invention relates to a method and device for inserting non-textual information in a set of information. The method comprises: using a facial image of a user, generating a first data set corresponding to said facial image, comparing said first data set with a stored data set corresponding to said non-textual information, selecting a second data set based on said comparison, and providing said second data set as said non-textual information into said set of information.
Description
METHOD AND ARRANGEMENT FOR HANDLING NON-TEXTUAL INFORMATION
TECHNICAL FIELD
The present invention relates to a method, device and a computer program for controlling input of non-textual symbols in a device, and especially in a communication device.
BACKGROUND
Mobile telephones have evolved during the last few years from being simple voice communication devices to present day intelligent communication devices having processing and communication capabilities. The use of a mobile telephone may involve such activities as interactive messaging, sending e-mail messages, browsing the World Wide Web and many other activities, both business related and more leisure oriented. Moreover, the operation of current communication devices is often controlled via user interface means that include, in addition to or instead of traditional keypads, touch sensitive displays on which a virtual keypad is displayed. In the latter case, a user usually inputs text and other symbols by touching the virtual keypad with a stylus.
Instant messaging and chat are very popular, and one important part of them is expressing emotions using smileys: keyboard character combinations mapped to emoticons.
Originally the smileys were formed as plain text characters, like :-) and ;(. However, in current messaging and chat applications, smileys are also provided as unique non-textual symbols, which are small graphical bitmaps.
A drawback with current devices, such as mobile phones, PDAs, etc., is that they typically have to display a plurality of possible non-textual symbols, including the smileys, for selection by the user. A user must usually select from a list of smileys, or use symbols to form a smiley that, depending on the application, may be converted to a graphical smiley. When chatting, for example, this may be a problem, as the user must search the list for the correct smiley. This is time consuming and may require a number of additional steps.
SUMMARY
Modern mobile telephones, computers, PDAs and other communication devices usually comprise one or several image recording devices in the form of cameras. In particular, mobile telephones enabled for video calls have a camera directed towards the user. The present invention takes advantage of the camera on a messaging device such as a mobile telephone to generate symbols, preferably non-textual symbols such as smileys. Thus, the proposed solution uses face detection, preferably combined with facial part analysis, to automatically generate the emoticons.
Thus, the invention according to a first aspect relates to a method for inserting non-textual information in a set of information. The method comprises the steps of: using a facial image of a user, generating a first data set corresponding to the facial image, comparing the first data set with a stored data set corresponding to the non-textual information, selecting a second data set based on the comparison, and providing the second data set as the non-textual information into the set of information. Preferably, the set of information is text-based information. The set of non-textual information is an emoticon. Preferably, the emoticon corresponds to the facial appearance of the user.
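The five claimed steps can be sketched as follows. This is a minimal illustration only, assuming a simple feature-vector representation and a nearest-match comparison; the feature extraction itself is stubbed, and all function names, variable names and vectors are illustrative, not taken from the patent:

```python
# Illustrative sketch of the first-aspect method: generate a first data
# set from a facial image, compare it with stored data sets, select a
# second data set (the emoticon), and provide it into the text.

# Hypothetical stored data sets: feature vectors mapped to emoticons.
STORED = {
    (1.0, 0.0, 0.0): ":-)",   # "smiling" features
    (0.0, 1.0, 0.0): ";-)",   # "winking" features
    (0.0, 0.0, 1.0): ":-(",   # "frowning" features
}

def generate_first_data_set(facial_image):
    """Placeholder feature extraction: reduce the image to a vector."""
    # In a real device this would be the facial analysis step.
    return facial_image

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def insert_non_textual_information(facial_image, text):
    first = generate_first_data_set(facial_image)
    # Compare the first data set with each stored data set and select
    # the second data set based on the comparison (closest match).
    best = min(STORED, key=lambda stored: distance(first, stored))
    emoticon = STORED[best]
    # Provide the second data set as non-textual information into the
    # set of information (here, a text string).
    return text + " " + emoticon

# A near-"smiling" vector selects the ":-)" entry.
print(insert_non_textual_information((0.9, 0.1, 0.0), "Great news!"))
```

The dictionary lookup stands in for the stored data set in memory; any real implementation would use an actual facial-feature extractor in place of the pass-through stub.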
The invention also relates to a device according to a second aspect comprising a processing unit, a memory unit and an image recording arrangement. The image recording arrangement is configured to capture at least a portion of a user's face, and the processing unit is configured to process the captured image corresponding to at least the portion of the user's face and compare it to a data set stored in the memory, the processing unit being further configured to select a data set based on the comparison. The selected data is output to a text processing unit. The device may further comprise a display, input and output units, a transceiver portion and an antenna.
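The second-aspect composition (image recording arrangement, processing unit, memory unit, output to a text processing unit) can be sketched as a small class structure. All names are hypothetical and the camera is stubbed; this only illustrates how the units would be wired together:

```python
# Illustrative sketch of the second-aspect device: the image recording
# arrangement captures part of the user's face, the processing unit
# compares it against data sets in the memory unit, and the selection
# is output to a text processing unit.

class MemoryUnit:
    """Holds the stored data sets mapped to emoticons."""
    def __init__(self, data_sets):
        self.data_sets = data_sets

class Device:
    def __init__(self, memory):
        self.memory = memory

    def capture(self):
        """Stub for the image recording arrangement (camera)."""
        return "wink"  # pretend the camera saw a wink

    def process(self, captured):
        # Compare the captured image's features to the stored data
        # sets and select a data set based on the comparison.
        return self.memory.data_sets.get(captured)

    def output_to_text_processor(self):
        """Selected data is handed to the text processing unit."""
        return self.process(self.capture())

device = Device(MemoryUnit({"smile": ":-)", "wink": ";-)"}))
print(device.output_to_text_processor())
```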
The invention also relates to a computer program stored on a computer readable medium for inserting non-textual information in a set of information. The computer program comprises: a set of instructions for selecting a facial image of a user, a set of instructions for generating a first data set corresponding to the facial image, a set of instructions for comparing the first data set with a stored data set corresponding to the non-textual information, a set of instructions for selecting a second data set based on the comparison, and a set of instructions for providing the second data set as the non-textual information into the set of information.
BRIEF DESCRIPTION OF THE DRAWINGS

In the following, the invention is described with reference to drawings illustrating some exemplary embodiments, in which:
Fig. 1 shows a schematically drawn block diagram of an embodiment of a mobile communication device according to the invention, Figs. 2a-2c show schematic block diagrams of a facial recognition embodiment according to the invention, and Fig. 3 is a flow diagram illustrating exemplary method steps according to the present invention.
Fig. 4 is a schematically drawn block diagram of an embodiment and screen shots of a communication device during execution of a computer program that implements the method of the present invention.
DETAILED DESCRIPTION
Fig. 1 illustrates schematically a communication device in the form of a mobile telephone device 100. The device 100 comprises a processor 101, memory 102, one or several cameras 103, a display 104, input and output units 105, a transceiver portion 106 and an antenna 107. The display may be a touch sensitive display on which a user writes using, e.g., a stylus or similar device. Other input/output units 105 in the form of a speaker, a microphone, a keyboard may also be present, the functions of which are well known to a skilled person and not described herein in detail. The display 104, input/output units 105 and the camera 103 may communicate with the processor 101 through an I/O interface (not shown). The details regarding how these units communicate are known to the skilled person and are therefore not discussed further. The communication device 100 may, in addition to the illustrated mobile telephone device, be a Personal Digital Assistant (PDA) equipped with radio communication means, or a computer, stationary or laptop, equipped with a camera.
The telephone 100 is capable of communication via a transceiver unit 106 and an antenna 107 through an air interface with a mobile (radio) communication system (not shown) such as the well known systems GSM/GPRS, UMTS, CDMA 2000 etc.
The present invention uses one of the mobile phone's sensors, preferably the video telephony camera 103, to automatically generate emoticons (smileys), in contrast to the classic input methods of using the keyboard or touch screen display to enter the right character combinations.
Figs. 2a-2c and 3, in conjunction with Fig. 1, illustrate the principles of the invention according to one embodiment. When an application with the ability to use smileys, such as a chat or text processing application, is started (1), a user's face 250a-250c (happy, blinking and unhappy, respectively) is captured (2) by the camera 103 of the exemplary communication device. A facial recognition portion 110, implemented as hardware or as an instruction set executed by the processor 101, processes (3) the recorded image from the camera and searches for certain characteristics, such as lips (motion), eyes, cheeks etc., and the processor 101 looks up (4) similarities, e.g. in a look-up table in the memory 102. When a smiley or emoticon similar to the recognized facial data is found and selected (5), it is output (6) as a smiley 255a-255c (smiling/happy, wink and frowning/sad, respectively) into the application 260 which calls the functionality of the present invention. The procedure is executed until (7) the application is terminated or the user decides to use other input means, for example.
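The numbered flow (1)-(7) can be sketched as a simple loop over camera frames. This is only an illustration under stated assumptions: frame capture and the facial recognition portion are stubbed, and the expression labels, function names and smiley mappings are hypothetical:

```python
# Illustrative sketch of steps (1)-(7): while the application runs,
# capture a frame (2), extract facial characteristics (3), look them
# up in a table (4), select a matching smiley (5), and output it to
# the calling application (6); the loop ends with the application (7).

LOOKUP_TABLE = {   # stands in for the look-up table in memory 102
    "smile": ":-)",
    "wink": ";-)",
    "frown": ":-(",
}

def recognize(frame):
    """Stub for the facial recognition portion 110."""
    return frame.get("expression")

def run(frames, application_output):
    for frame in frames:                      # step (2): camera capture
        feature = recognize(frame)            # step (3): analysis
        smiley = LOOKUP_TABLE.get(feature)    # steps (4)-(5): look-up
        if smiley is not None:
            application_output.append(smiley) # step (6): output
    # step (7): the loop terminates with the application

out = []
run([{"expression": "wink"}, {"expression": None},
     {"expression": "smile"}], out)
print(out)
```

Frames with no recognized expression simply produce no output, mirroring the description: a smiley is only emitted when a sufficiently similar entry is found in the table.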
The smileys or emoticons may be in the form of so-called western style, eastern style, East Asian style, ideographic style, a mixture of styles or any other usable style.
One benefit of the proposed solution is that the user can interact using his/her face via the camera to express emotions in text form.
Fig. 4 illustrates an exemplary application embodiment during an instant messaging chat session:
1) The user 250 writes a text message 520, and during writing the video telephony camera of the mobile phone 100 analyzes the facial parts to find out when the user wants to express an emotion in the text 521.
2) If the user winks an eye, a wink smiley 522 is automatically generated at the current text cursor position.
3) If the user smiles to express happiness, a happy smiley 523 is automatically generated at the current text cursor position.
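The insertion at the current text cursor position described in steps 2) and 3) can be sketched as follows, with a hypothetical expression-to-smiley mapping and illustrative names:

```python
# Illustrative sketch of the Fig. 4 behaviour: when an expression is
# detected while the user types, the matching smiley is inserted at
# the current text cursor position.

EXPRESSION_TO_SMILEY = {"wink": ";-)", "smile": ":-)"}

def insert_at_cursor(text, cursor, expression):
    """Insert the smiley for `expression` at `cursor`; return the new
    text and the cursor position advanced past the inserted smiley."""
    smiley = EXPRESSION_TO_SMILEY.get(expression)
    if smiley is None:
        return text, cursor        # no recognized expression: no change
    new_text = text[:cursor] + smiley + text[cursor:]
    return new_text, cursor + len(smiley)

text, cursor = insert_at_cursor("See you soon ", 13, "wink")
print(text)   # smiley inserted where the cursor was
```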
The method according to the preferred embodiments will in general reside in the form of software instructions of a computer program with an associated memory area 110, together with other software components necessary for the operation of the device 100, in the memory 102 of the device 100. The computer program 110 may be resident, or it may be loaded into the memory 102 from a software provider, e.g. via the air interface and the network, by way of methods known to the skilled person. The program will be executed by the processor 101, which will receive and process input data from the camera and input units, keyboard or touch sensitive display (virtual keyboard), in the device 100.
It should be noted that the word "comprising" does not exclude the presence of other elements or steps than those listed and the words "a" or "an" preceding an element do not exclude the presence of a plurality of such elements. It should further be noted that any reference signs do not limit the scope of the claims, that the invention may be implemented at least in part by means of both hardware and software, and that several "means", "units" or "devices" may be represented by the same item of hardware.
The above mentioned and described embodiments are only given as examples and should not limit the present invention. Other solutions, uses, objectives, and functions within the scope of the invention, as claimed in the patent claims below, should be apparent to the person skilled in the art.
Claims
1. A method for inserting non-textual information in a set of information, the method comprising:
- using a facial image of a user,
- generating a first data set corresponding to said facial image,
- comparing said first data set with a stored data set corresponding to said non-textual information,
- selecting a second data set based on said comparison, and
- providing said second data set as said non-textual information into said set of information.
2. The method of claim 1, wherein said set of information is text-based information.
3. The method of claim 1, wherein said set of non-textual information is an emoticon.
4. The method of claim 3, wherein said emoticon corresponds to the facial appearance of the user.
5. The method of claim 3, wherein said emoticon is in the form of a western style, eastern style, East Asian style, ideographic style, a mixture of said styles or any other usable style.
6. A device comprising a processing unit, a memory unit and an image recording arrangement, said image recording arrangement being configured to capture at least a portion of a user's face, said processing unit being configured to process said captured image corresponding to at least said portion of said user's face and compare it to a data set stored in said memory, said processing unit being further configured to select a data set based on said comparison.
7. The device of claim 6, wherein said selected data is output to a text processing unit.
8. The device of claim 6, further comprising a display, input and output units, a transceiver portion and an antenna.
9. The device of claim 6, being one of a mobile communication device, a Personal Digital Assistant, or a computer.
10. A computer program stored on a computer readable medium for inserting non-textual information in a set of information, the computer program comprising:
- a set of instructions for selecting a facial image of a user,
- a set of instructions for generating a first data set corresponding to said facial image,
- a set of instructions for comparing said first data set with a stored data set corresponding to said non-textual information,
- a set of instructions for selecting a second data set based on said comparison, and
- a set of instructions for providing said second data set as said non-textual information into said set of information.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/351,477 US20100177116A1 (en) | 2009-01-09 | 2009-01-09 | Method and arrangement for handling non-textual information |
US12/351,477 | 2009-01-09 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2010078972A2 true WO2010078972A2 (en) | 2010-07-15 |
WO2010078972A3 WO2010078972A3 (en) | 2011-01-13 |
Family
ID=42316894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2009/058771 WO2010078972A2 (en) | 2009-01-09 | 2009-07-09 | Method and arrangement for handling non-textual information |
Country Status (2)
Country | Link |
---|---|
US (1) | US20100177116A1 (en) |
WO (1) | WO2010078972A2 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2611124A1 (en) * | 2011-12-30 | 2013-07-03 | Research In Motion Limited | Method, system and apparatus for automated alerts |
US9294718B2 (en) | 2011-12-30 | 2016-03-22 | Blackberry Limited | Method, system and apparatus for automated alerts |
CN105519047A (en) * | 2014-07-02 | 2016-04-20 | 华为技术有限公司 | Information transmission method and transmission device |
EP2972910A4 (en) * | 2013-03-15 | 2016-11-09 | Intel Corp | System for adaptive selection and presentation of context-based media in communications |
EP3011730A4 (en) * | 2013-06-20 | 2017-01-25 | Elwha LLC | Systems and methods for enhancement of facial expressions |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101647305B1 (en) * | 2009-11-23 | 2016-08-10 | 삼성전자주식회사 | Apparatus and method for video call in mobile communication terminal |
US20110143728A1 (en) * | 2009-12-16 | 2011-06-16 | Nokia Corporation | Method and apparatus for recognizing acquired media for matching against a target expression |
KR101722687B1 (en) | 2010-08-10 | 2017-04-04 | 삼성전자주식회사 | Method for providing information between objects or object and user, user device, and storage medium thereof |
US20120182309A1 (en) * | 2011-01-14 | 2012-07-19 | Research In Motion Limited | Device and method of conveying emotion in a messaging application |
US20120233633A1 (en) * | 2011-03-09 | 2012-09-13 | Sony Corporation | Using image of video viewer to establish emotion rank of viewed video |
TWI464695B (en) * | 2011-09-22 | 2014-12-11 | Hon Hai Prec Ind Co Ltd | Electronic device with function of playing document based on facial expression and method thereof |
WO2013085409A1 (en) * | 2011-12-08 | 2013-06-13 | Общество С Ограниченной Ответственностью Базелевс-Инновации | Method for animating sms messages |
US20130147933A1 (en) * | 2011-12-09 | 2013-06-13 | Charles J. Kulas | User image insertion into a text message |
JP5845200B2 (en) * | 2012-06-25 | 2016-01-20 | 株式会社コナミデジタルエンタテインメント | Message browsing system, server, terminal device, control method, and program |
JP5937992B2 (en) | 2012-06-25 | 2016-06-22 | 株式会社コナミデジタルエンタテインメント | Message browsing system, server, terminal device, control method, and program |
WO2014078948A1 (en) * | 2012-11-22 | 2014-05-30 | Perch Communications Inc. | System and method for automatically triggered synchronous and asynchronous video and audio communications between users at different endpoints |
US20150127753A1 (en) | 2013-11-04 | 2015-05-07 | Meemo, Llc | Word Recognition and Ideograph or In-App Advertising System |
US10013601B2 (en) * | 2014-02-05 | 2018-07-03 | Facebook, Inc. | Ideograms for captured expressions |
NL2012827B1 (en) * | 2014-05-16 | 2016-03-02 | Real Smile B V | Method of providing an insert image for in-line use in a text message. |
US9576175B2 (en) * | 2014-05-16 | 2017-02-21 | Verizon Patent And Licensing Inc. | Generating emoticons based on an image of a face |
US9721024B2 (en) * | 2014-12-19 | 2017-08-01 | Facebook, Inc. | Searching for ideograms in an online social network |
KR20160105321A (en) * | 2015-02-27 | 2016-09-06 | 임머숀 코퍼레이션 | Generating actions based on a user's mood |
CN106649712B (en) * | 2016-12-20 | 2020-03-03 | 北京小米移动软件有限公司 | Method and device for inputting expression information |
WO2018128996A1 (en) * | 2017-01-03 | 2018-07-12 | Clipo, Inc. | System and method for facilitating dynamic avatar based on real-time facial expression detection |
CN107153496B (en) * | 2017-07-04 | 2020-04-28 | 北京百度网讯科技有限公司 | Method and device for inputting emoticons |
US10870056B2 (en) * | 2017-11-01 | 2020-12-22 | Sony Interactive Entertainment Inc. | Emoji-based communications derived from facial features during game play |
CN108200463B (en) * | 2018-01-19 | 2020-11-03 | 上海哔哩哔哩科技有限公司 | Bullet screen expression package generation method, server and bullet screen expression package generation system |
US10699104B2 (en) * | 2018-05-03 | 2020-06-30 | International Business Machines Corporation | Image obtaining based on emotional status |
US11340707B2 (en) * | 2020-05-29 | 2022-05-24 | Microsoft Technology Licensing, Llc | Hand gesture-based emojis |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002318649A (en) * | 2001-04-19 | 2002-10-31 | Masanobu Kujirada | System, method and program for inputting face mark |
EP1509042A1 (en) * | 2003-08-19 | 2005-02-23 | Sony Ericsson Mobile Communications AB | System and method for a mobile phone for classifying a facial expression |
US20050163379A1 (en) * | 2004-01-28 | 2005-07-28 | Logitech Europe S.A. | Use of multimedia data for emoticons in instant messaging |
JP2007199908A (en) * | 2006-01-25 | 2007-08-09 | Fujifilm Corp | Emoticon input apparatus |
WO2008109619A2 (en) * | 2007-03-05 | 2008-09-12 | Emotiv Systems Pty Ltd | Interface to convert mental states and facial expressions to application input |
WO2009056921A2 (en) * | 2007-10-30 | 2009-05-07 | Sony Ericsson Mobile Communications Ab | System and method for facial expression control of a user interface |
-
2009
- 2009-01-09 US US12/351,477 patent/US20100177116A1/en not_active Abandoned
- 2009-07-09 WO PCT/EP2009/058771 patent/WO2010078972A2/en active Application Filing
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2611124A1 (en) * | 2011-12-30 | 2013-07-03 | Research In Motion Limited | Method, system and apparatus for automated alerts |
US9294718B2 (en) | 2011-12-30 | 2016-03-22 | Blackberry Limited | Method, system and apparatus for automated alerts |
EP2972910A4 (en) * | 2013-03-15 | 2016-11-09 | Intel Corp | System for adaptive selection and presentation of context-based media in communications |
EP3011730A4 (en) * | 2013-06-20 | 2017-01-25 | Elwha LLC | Systems and methods for enhancement of facial expressions |
US9792490B2 (en) | 2013-06-20 | 2017-10-17 | Elwha Llc | Systems and methods for enhancement of facial expressions |
CN105519047A (en) * | 2014-07-02 | 2016-04-20 | 华为技术有限公司 | Information transmission method and transmission device |
EP3110078A4 (en) * | 2014-07-02 | 2017-03-08 | Huawei Technologies Co., Ltd. | Information transmission method and transmission device |
US10387717B2 (en) | 2014-07-02 | 2019-08-20 | Huawei Technologies Co., Ltd. | Information transmission method and transmission apparatus |
Also Published As
Publication number | Publication date |
---|---|
US20100177116A1 (en) | 2010-07-15 |
WO2010078972A3 (en) | 2011-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010078972A2 (en) | Method and arrangement for handling non-textual information | |
US8373799B2 (en) | Visual effects for video calls | |
US8620850B2 (en) | Dynamically manipulating an emoticon or avatar | |
US10152207B2 (en) | Method and device for changing emoticons in a chat interface | |
KR101789626B1 (en) | Mobile terminal and method for controlling the same | |
US20090110246A1 (en) | System and method for facial expression control of a user interface | |
EP1973314A1 (en) | Method and apparatus for motion-based communication | |
US8466950B2 (en) | Method and apparatus for video call in a mobile terminal | |
CN111857500B (en) | Message display method and device, electronic equipment and storage medium | |
EP2426902A1 (en) | Dynamically manipulating an emoticon or avatar | |
US20130147933A1 (en) | User image insertion into a text message | |
KR101651131B1 (en) | Mobile Terminal and Method for Controlling Communication Service thereof | |
CN107767864B (en) | Method and device for sharing information based on voice and mobile terminal | |
CN109412929A (en) | The method, device and mobile terminal that expression adaptively adjusts in instant messaging application | |
KR20110012491A (en) | System, management server, terminal and method for transmitting of message using image data and avatar | |
US7817858B2 (en) | Communication terminal | |
CN106447747B (en) | Image processing method and device | |
CN112817676A (en) | Information processing method and electronic device | |
CN112000766A (en) | Data processing method, device and medium | |
CN110324230B (en) | Interface display method, client and computer storage medium | |
CN109144286B (en) | Input method and device | |
KR100788300B1 (en) | Method for displaying idle screen in mobile terminal | |
CN109976549B (en) | Data processing method, device and machine readable medium | |
CN113141296A (en) | Message display method and device and electronic equipment | |
US11474691B2 (en) | Method for displaying a virtual keyboard on a mobile terminal screen |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09780393 Country of ref document: EP Kind code of ref document: A2 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 09780393 Country of ref document: EP Kind code of ref document: A2 |