CN101789990A - Method and mobile terminal for judging the emotion of the other party during a conversation - Google Patents
Method and mobile terminal for judging the emotion of the other party during a conversation
- Publication number
- CN101789990A (application number CN200910214105A)
- Authority
- CN
- China
- Prior art keywords
- user
- emotion feature information
- mobile terminal
- emotion
- emotion features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Telephonic Communication Services (AREA)
- Telephone Function (AREA)
Abstract
The invention provides a method for judging the emotion of the other party during a conversation, applicable to a mobile terminal. The method includes: acquiring audio or video data of the other-party user; extracting emotion feature information from the acquired audio or video data and analyzing it to judge the other-party user's current emotional state; and displaying that emotional state on the local user's mobile terminal display device. The invention also provides a mobile terminal that informs the local user of changes in the other-party user's emotion, so that the local user can promptly adjust or choose suitable conversation content or a suitable manner of speaking and avoid an unfriendly conversation atmosphere.
Description
Technical field
The present invention relates to the field of communications, and in particular to a method and a mobile terminal for judging the emotion of the other party during a conversation.
Background technology
With an existing mobile terminal, a user can generally judge the other party's emotion only subjectively, through his or her own senses. The local user may therefore misjudge the other party's emotion, or fail to pay attention to its changes. When the other-party user's emotion shifts during a conversation because of the conversation content, the manner of speaking, or other factors, the local user may not notice the change in time and adjust the conversation content or manner of speaking accordingly. This can spoil the atmosphere of the conversation and may even create or deepen conflict between the two parties.
Summary of the invention
In view of the above problems in the prior art, embodiments of the invention provide a method and a mobile terminal for judging the other party's emotion during a conversation. During the conversation, emotion-related feature information is extracted from the other-party user's voice or video, the other-party user's emotional changes are detected in real time, and the local user is prompted accordingly, so that the local user can promptly adjust or choose suitable conversation content or a suitable manner of speaking and avoid an unfriendly conversation atmosphere.
To solve the problems of the prior art, an embodiment of the invention provides a method for judging the other party's emotion during a conversation, for use in a mobile terminal, comprising:
collecting the other-party user's voice data or video data;
extracting emotion feature information from the collected voice or video data, and analyzing it to judge the other-party user's current emotional state;
displaying the other-party user's current emotional state on the local mobile terminal's display device.
Correspondingly, an embodiment of the invention also provides a mobile terminal, comprising:
a voice or video information collection module, configured to collect the other-party user's voice data or video data during a conversation;
an emotion feature information extraction module, connected to the voice or video information collection module and configured to extract emotion feature information from the collected voice or video data;
an analysis and judgement module, configured to analyze the emotion feature information extracted by the emotion feature information extraction module, judge the other-party user's current emotional state, and produce a judgement result;
a display module, connected to the analysis and judgement module and configured to display the judgement result produced by the analysis and judgement module on the local mobile terminal's display device.
By implementing an embodiment of the invention, the other-party user's voice or video data is collected; emotion feature information is extracted from it and analyzed to judge the other-party user's current emotional state; and that state is displayed on the local mobile terminal's display device. The local user is thus informed of the other-party user's emotional changes in real time, and can promptly adjust or choose suitable conversation content or a suitable manner of speaking to avoid an unfriendly conversation atmosphere.
Description of drawings
To describe the technical solutions of the embodiments of the invention, or of the prior art, more clearly, the accompanying drawings used in the description of the embodiments or of the prior art are briefly introduced below. Clearly, the drawings described below show only some embodiments of the invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a method for judging the other party's emotion during a conversation according to an embodiment of the invention;
Fig. 2 is a schematic structural diagram of a mobile terminal according to an embodiment of the invention;
Fig. 3 is a schematic structural diagram of the analysis and judgement module of Fig. 2.
Embodiment
An embodiment of the invention provides a method and a mobile terminal for judging the other party's emotion during a conversation. During the conversation, emotion-related feature information is extracted from the other-party user's voice or video, the other-party user's emotional changes are detected in real time, and the local user is prompted accordingly, so that the local user can promptly adjust or choose suitable conversation content or a suitable manner of speaking and avoid an unfriendly conversation atmosphere.
When the other-party user's emotion is judged from collected voice data, the judgement is usually based on information about the other-party caller such as sound intensity, speech rate, pitch, and intonation. Because an emotion extends over time, these features can be extracted sentence by sentence. Sound intensity is the detected energy of the sound; the mean intensity of a caller's voice usually increases in states such as anger or fear. Speech rate can indicate whether the caller's emotional state leans towards happiness or sadness. Pauses during the conversation can also be analyzed: a long pause between sentences may indicate that the speaker is sad or indignant, while pauses between words or phrases may indicate hesitation. Pitch is a frequency-related parameter, and detecting its lower bound, frequency content, and mean fundamental frequency also helps distinguish some emotions. The extracted parameters are compared against preset discrimination standards; for sound intensity, for example, a given intensity range can be set to represent a leaning towards a given emotion. After every feature has been evaluated, the results are combined: if all or most of the individual results lean towards a certain emotion, the user is judged to be in that emotion, and the other-party caller's current emotional state is thereby determined.

When the other-party user's emotion is judged from collected video data, facial features in the video image can be detected and analyzed, such as the bending direction and curvature of the lips and eyebrows, and the degree of stretching of the mouth, eyes, and nose. The smile-detection function found on many current cameras is one example of such expression recognition.

After the mobile terminal has judged the other-party user's emotional state, it displays the state on the mobile terminal's display device as text or as an image expression, and prompts the local user with a prompt tone or vibration, so that the local user can promptly adjust or choose suitable conversation content or a suitable manner of speaking and avoid an unfriendly conversation atmosphere.
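The per-sentence voice features described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the RMS-energy proxy for sound intensity, and the segments-per-second proxy for speech rate are all assumptions made for the example.

```python
import math

def extract_speech_features(samples, sample_rate, voiced_segments):
    """Per-sentence feature sketch: sound intensity as RMS energy of the
    samples, and speech rate as voiced segments per second. Both are
    illustrative stand-ins for the patent's intensity and speech-rate
    features."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    duration = len(samples) / sample_rate
    speech_rate = len(voiced_segments) / duration
    return {"intensity": rms, "speech_rate": speech_rate}

# A synthetic one-second "sentence": a 220 Hz tone at amplitude 0.1,
# with two hypothetical voiced segments.
samples = [0.1 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(8000)]
features = extract_speech_features(samples, 8000,
                                   voiced_segments=[(0, 4000), (4200, 8000)])
```

A real terminal would compute these over decoded call audio and would add pitch and intonation features; the structure of the result dictionary is what matters for the comparison step that follows.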
Embodiments of the invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, the flow chart of a method for judging the other party's emotion during a conversation according to an embodiment of the invention comprises the following steps.
Step 101: collect the other-party user's voice data or video data. Collection is performed in real time, so that the other-party user's emotional state can be detected in real time.
Step 102: extract emotion feature information from the collected voice or video data, and analyze it to judge the other-party user's current emotional state. The emotion features may be gathered from several kinds of voice or video data as required, and several features may be matched in combination when judging the emotion.
For example, this step can be divided into the following sub-steps, performed in order:
A: extract emotion feature information from the collected voice or video data.
In a specific embodiment, when emotion feature information is extracted from voice data, it includes one or more of sound intensity, speech rate, pitch, and intonation; several kinds of information generally need to be combined to judge the other party's emotion more accurately. When emotion feature information is extracted from video data, it includes facial features in the video image, such as the degree of stretching of the lips, eyebrows, and eyes: when a person laughs, for example, the opening angle of the eyes narrows and the curvature of the lips changes, whereas in anger the eyes open wider. Which features to collect must be determined in the mobile terminal in advance; for example, when extracting emotion feature information from voice data, the terminal may be configured to collect only the four features of sound intensity, speech rate, pitch, and intonation.
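The facial-feature path can be sketched in the same spirit. The thresholds and the two measurements below (eye opening angle, lip curvature) are illustrative placeholders, not values from the patent; a real terminal would obtain them from a face-detection pipeline.

```python
def judge_face(eye_opening_deg, lip_curvature):
    """Lean towards an emotion from two coarse facial measurements:
    narrowed eyes with upward-curved lips suggest laughing (happy),
    wide-open eyes suggest anger. Thresholds are hypothetical."""
    if eye_opening_deg < 20 and lip_curvature > 0:
        return "happy"
    if eye_opening_deg > 40:
        return "angry"
    return "neutral"

verdict = judge_face(eye_opening_deg=15, lip_curvature=0.3)
```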
B: compare each extracted emotion feature with the corresponding preset discrimination standard in the mobile terminal, producing one judgement result per feature.
In a specific embodiment, a discrimination standard corresponding to each collected feature is preset in the mobile terminal's memory. The discrimination standards are formed by collecting voice samples of many different people in different emotional states and analyzing those samples. For example, when extracting emotion feature information from voice data with the four features of sound intensity, speech rate, pitch, and intonation, the memory must preset, for each feature, which value ranges represent a leaning towards which emotions. After large-scale sample analysis, sound intensity might be set to represent happiness in the range a~b, anger in the range b~c, and fear in the range c~d; ranges for speech rate, pitch, and intonation are set in the same way, and once the emotions corresponding to all ranges are configured, they are stored in memory as the discrimination standards. After emotion feature information has been extracted from the collected voice or video data, each extracted feature is compared with its corresponding preset discrimination standard to produce one judgement result per feature. For example, the currently extracted sound intensity of the other-party user can be compared against the preset ranges a~b, b~c, and c~d to find which range it falls into; if it falls in the range c~d, the first judgement result is fear, according to the discrimination standard stored in memory. Every other extracted emotion feature is compared with its corresponding discrimination standard in the same way, producing judgement results one by one.
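Sub-step B amounts to a range lookup per feature. The sketch below mirrors the patent's "a~b happiness, b~c anger, c~d fear" example; the concrete numeric ranges and the extra speech-rate table are invented for illustration only.

```python
# Hypothetical discrimination standard: for each feature, a list of
# (half-open value range, emotion) pairs, as would be preset in memory.
DISCRIMINATION_STANDARD = {
    "intensity":   [((0.0, 0.3), "happy"), ((0.3, 0.6), "angry"), ((0.6, 1.0), "fearful")],
    "speech_rate": [((0.0, 2.5), "sad"),   ((2.5, 4.0), "happy"), ((4.0, 8.0), "angry")],
}

def judge_feature(name, value):
    """Compare one extracted feature value against its preset ranges;
    return the matched emotion label, or None if no range matches."""
    for (lo, hi), emotion in DISCRIMINATION_STANDARD[name]:
        if lo <= value < hi:
            return emotion
    return None

verdict = judge_feature("intensity", 0.7)  # falls in the 0.6~1.0 range
```

Running the lookup once per extracted feature yields the per-feature judgement results that sub-step C then fuses.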
C: perform statistical analysis of the individual judgement results to determine the other-party user's current emotional state.
After step B has compared each emotion feature with its corresponding discrimination standard and produced individual judgement results, those results must be analyzed together: if all or most of the per-feature results lean towards a certain emotion, the user can be judged to be in that emotion. This fusion rule also needs to be derived from large-scale sample analysis. For example, suppose five emotion features are extracted, so that step B produces five judgement results; if the number of results leaning towards a certain emotion is greater than or equal to a preset threshold of three, the user is judged to be in that emotion. This is only the simplest example; practical applications present many varied situations, and the statistical analysis standard can therefore be factory-set in the mobile terminal according to actual requirements.
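The majority-vote fusion of sub-step C can be sketched directly from the "at least 3 of 5" example. The function name and the treatment of unmatched features are assumptions; only the threshold rule comes from the text.

```python
from collections import Counter

def fuse_judgements(judgements, min_votes=3):
    """Step C sketch: majority vote over per-feature verdicts. Features
    that produced no verdict (None) are ignored; min_votes mirrors the
    patent's factory-settable ">= 3 of 5" threshold."""
    counts = Counter(j for j in judgements if j is not None)
    if not counts:
        return None
    emotion, votes = counts.most_common(1)[0]
    return emotion if votes >= min_votes else None

# Three of five per-feature results lean towards anger.
state = fuse_judgements(["angry", "angry", "fearful", "angry", "sad"])
```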
Step 103: display the other-party user's current emotional state on the local mobile terminal's display device.
After the other-party user's current emotional state has been judged, it is displayed on the mobile terminal's display device as text, an image expression, or in some other way. A correspondence between the various emotions and their display results must be preset in the mobile terminal. Suppose happiness is set to correspond to the symbol ^_^ and sadness to the symbol :-(; then when the other-party user's current emotional state is judged to be happy, the mobile terminal displays the symbol ^_^. The display can take many forms, such as text, picture expressions, or symbols; the above is only a simple example. When step 102 detects that the other-party user's emotional state has changed, the terminal can, while displaying the current emotional state, also emit a prompt tone or vibration to alert the local user.
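The display and change-alert logic can be sketched as a small mapping plus a state comparison. The ^_^ and :-( symbols come from the patent's own example; the other two symbols and the function interface are illustrative assumptions.

```python
# Emotion-to-display mapping, extending the patent's example of ^_^ for
# happy and :-( for sad with two hypothetical extra entries.
EMOTION_SYMBOLS = {
    "happy":   "^_^",
    "sad":     ":-(",
    "angry":   ">:(",
    "fearful": ":-O",
}

def render_emotion(state, last_state):
    """Return the symbol to show on the local display, and whether to
    alert the local user (prompt tone or vibration) because the judged
    state differs from the previous one."""
    symbol = EMOTION_SYMBOLS.get(state, "?")
    alert = last_state is not None and state != last_state
    return symbol, alert

symbol, alert = render_emotion("happy", last_state="sad")
```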
Referring to Fig. 2, the schematic structural diagram of a mobile terminal according to an embodiment of the invention comprises: a voice or video information collection module 1, an emotion feature information extraction module 2, an analysis and judgement module 3, and a display module 4.
The voice or video information collection module 1 is configured to collect the other-party user's voice data or video data during a conversation.
The emotion feature information extraction module 2 is connected to the voice or video information collection module 1 and is configured to extract emotion feature information from the collected voice or video data. When emotion feature information is extracted from voice data, it includes one or more of sound intensity, speech rate, pitch, and intonation; several kinds of information generally need to be combined to judge the other party's emotion more accurately. When emotion feature information is extracted from video data, it includes facial features in the video image, such as the degree of stretching of the lips, eyebrows, and eyes: when a person laughs, the opening angle of the eyes narrows and the curvature of the lips changes, whereas in anger the eyes open wider. Which features to collect must be determined in the mobile terminal in advance; for example, when extracting emotion feature information from voice data, the terminal may be configured to collect only the four features of sound intensity, speech rate, pitch, and intonation.
The analysis and judgement module 3 is configured to analyze the emotion feature information extracted by the emotion feature information extraction module 2, judge the other-party user's current emotional state, and produce a judgement result.
The display module 4 is connected to the analysis and judgement module 3 and is configured to display the judgement result produced by the analysis and judgement module 3 on the local mobile terminal's display device; concretely, the result can be displayed as text or as an image expression. A correspondence between the various emotions and their display results must be preset in the mobile terminal. Suppose happiness is set to correspond to the symbol ^_^ and sadness to the symbol :-(; then when the other-party user's current emotional state is judged to be happy, the mobile terminal displays the symbol ^_^. The display can take many forms, such as text, picture expressions, or symbols; the above is only a simple example.
The mobile terminal further comprises a storage module 5, configured to store the preset discrimination standards corresponding to the emotion feature information, to be called by the analysis and judgement module 3, so that the analysis and judgement module 3 can judge the other party's current emotion from the emotion feature information extracted by the emotion feature information extraction module 2 and the discrimination standards stored in the storage module 5. The discrimination standards are formed by collecting voice samples of many different people in different emotional states and analyzing those samples. For example, when extracting emotion feature information from voice data with the four features of sound intensity, speech rate, pitch, and intonation, the memory must preset, for each feature, which value ranges represent a leaning towards which emotions. After large-scale sample analysis, sound intensity might be set to represent happiness in the range a~b, anger in the range b~c, and fear in the range c~d; ranges for speech rate, pitch, and intonation are set in the same way, and once the emotions corresponding to all ranges are configured, they can be stored in memory as the discrimination standards in the form of a mapping table or otherwise.
The mobile terminal further comprises an emotional change prompting module 6, configured to emit a prompt tone or vibration to alert the local user when the analysis and judgement module 3 judges that the other-party user's emotional state has changed.
In the present embodiment, the concrete structure of the analysis and judgement module 3 is shown in Fig. 3.
Referring to Fig. 3, the schematic structural diagram of the analysis and judgement module 3 of Fig. 2: the analysis and judgement module 3 comprises a feature comparison unit 31 and a statistical analysis unit 32.

The feature comparison unit 31 is connected to the emotion feature information extraction module 2 and to the storage module 5, and is configured to compare each emotion feature extracted by the emotion feature information extraction module 2 with the corresponding preset discrimination standard in the storage module 5, producing judgement results one by one. For example, the currently extracted sound intensity of the other-party user can be compared against the discrimination-standard ranges a~b, b~c, and c~d prestored in memory to find which range it falls into; if it falls in the range c~d, the first judgement result is fear, according to the discrimination standard stored in memory. Every other extracted emotion feature is compared with its corresponding discrimination standard in the same way, producing judgement results one by one.

The statistical analysis unit 32 is connected to the feature comparison unit 31 and performs statistical analysis of the judgement results produced by the feature comparison unit to determine the other-party user's current emotional state. If all or most of the per-feature judgement results lean towards a certain emotion, the user can be judged to be in that emotion. This fusion rule also needs to be derived from large-scale sample analysis. For example, suppose five emotion features are extracted, so that five judgement results are produced; if the number of results leaning towards a certain emotion is greater than or equal to three, the user is judged to be in that emotion. This is only the simplest example; practical applications present many varied situations, and the statistical analysis standard can be factory-set in the mobile terminal according to actual requirements. When the analysis and judgement module 3 has the structure shown in Fig. 3, the display module 4 and the emotional change prompting module 6 are both connected to the statistical analysis unit 32 of the analysis and judgement module 3.
The mobile terminal provided by the embodiment of the invention can analyze the other-party user's emotional state during a conversation and prompt the local user. Since not every user can be expected to like this function, the user's right to choose should be considered when designing it: an on/off entry can be provided in the mobile terminal so that the user can decide whether to enable the function.
By implementing an embodiment of the invention, the other-party user's voice or video data is collected; emotion feature information is extracted from it and analyzed to judge the other-party user's current emotional state; and that state is displayed on the local mobile terminal's display device. The local user is thus informed of the other-party user's emotional changes in real time, and can promptly adjust or choose suitable conversation content or a suitable manner of speaking to avoid an unfriendly conversation atmosphere.
The above discloses only a preferred embodiment of the invention, which certainly cannot limit the scope of the invention's claims; equivalent variations made according to the claims of the invention therefore still fall within the scope of the invention.
From the above description of the embodiments, a person skilled in the art will clearly understand that the invention can be implemented by software plus the necessary hardware platform, or entirely in hardware. Based on this understanding, all or part of the contribution of the technical solution of the invention over the background art can be embodied in the form of a software product. The computer software product can be stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the method described in each embodiment of the invention, or in certain parts of an embodiment.
Claims (10)
1. A method for judging the other party's emotion during a conversation, for use in a mobile terminal, characterized by comprising:
collecting the other-party user's voice data or video data;
extracting emotion feature information from the collected voice or video data, and analyzing it to judge the other-party user's current emotional state;
displaying the other-party user's current emotional state on the local mobile terminal's display device.
2. The method of claim 1, characterized in that extracting emotion feature information from the collected voice or video data and analyzing it to judge the other-party user's current emotional state comprises:
extracting emotion feature information from the collected voice or video data;
comparing each extracted emotion feature with the corresponding preset discrimination standard in the mobile terminal, producing judgement results one by one;
performing statistical analysis of the judgement results to determine the other-party user's current emotional state.
3. The method of claim 1 or 2, characterized in that when the other-party user's emotional state is judged to have changed, a prompt tone or vibration is emitted to alert the local user while the other-party user's current emotional state is displayed on the local mobile terminal's display device.
4. The method of claim 3, characterized in that displaying the other-party user's current emotional state on the local mobile terminal's display device can be performed by means of text or an image expression.
5. The method of claim 4, characterized in that in the step of extracting emotion feature information from the collected voice or video data:
when emotion feature information is extracted from voice data, the emotion feature information comprises one or more of sound intensity, speech rate, pitch, and intonation;
when emotion feature information is extracted from video data, the emotion feature information comprises facial features, the facial features comprising the bending direction and curvature of at least one of the lips and the eyebrows, and/or the degree of stretching of one or more of the eyes, the mouth, and the nose.
6. A mobile terminal, characterized by comprising:
a voice or video information collection module, configured to collect the other-party user's voice data or video data during a conversation;
an emotion feature information extraction module, connected to the voice or video information collection module and configured to extract emotion feature information from the collected voice or video data;
an analysis and judgement module, configured to analyze the emotion feature information extracted by the emotion feature information extraction module, judge the other-party user's current emotional state, and produce a judgement result;
a display module, connected to the analysis and judgement module and configured to display the judgement result produced by the analysis and judgement module on the local mobile terminal's display device.
7. The mobile terminal of claim 6, characterized by further comprising:
a storage module, connected to the analysis and judgement module and configured to store the preset discrimination standards corresponding to the emotion feature information.
8. The mobile terminal of claim 7, characterized in that the analysis and judgement module comprises:
a feature comparison unit, connected to the emotion feature information extraction module and to the storage module, and configured to compare each emotion feature extracted by the emotion feature information extraction module with the corresponding preset discrimination standard in the storage module, producing judgement results one by one;
a statistical analysis unit, connected to the feature comparison unit and configured to perform statistical analysis of the judgement results produced by the feature comparison unit to determine the other-party user's current emotional state.
9. The mobile terminal of any one of claims 6 to 8, characterized in that the mobile terminal further comprises:
an emotional change prompting module, connected to the analysis and judgement module and configured to emit a prompt tone or vibration to alert the local user when the analysis and judgement module judges that the other-party user's emotional state has changed.
10. The mobile terminal of claim 9, characterized in that the display module is further configured to display the judgement result produced by the analysis and judgement module on the local mobile terminal's display device by means of text or an image expression.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200910214105A CN101789990A (en) | 2009-12-23 | 2009-12-23 | Method and mobile terminal for judging emotion of opposite party in conversation process |
Publications (1)
Publication Number | Publication Date |
---|---|
CN101789990A true CN101789990A (en) | 2010-07-28 |
Family
ID=42533039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN200910214105A Pending CN101789990A (en) | 2009-12-23 | 2009-12-23 | Method and mobile terminal for judging emotion of opposite party in conversation process |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101789990A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1532775A (en) * | 2003-03-19 | 2004-09-29 | Matsushita Electric Industrial Co., Ltd. | Visual telephone terminal |
US7043008B1 (en) * | 2001-12-20 | 2006-05-09 | Cisco Technology, Inc. | Selective conversation recording using speech heuristics |
CN1906939A (en) * | 2004-04-07 | 2007-01-31 | Matsushita Electric Industrial Co., Ltd. | Communication terminal and communication method |
CN2924957Y (en) * | 2006-07-20 | 2007-07-18 | KYE Systems Corp. | Image tracking mood video machine |
CN101420665A (en) * | 2008-12-11 | 2009-04-29 | Beijing University of Posts and Telecommunications | System and method for implementing emotion detection and service guidance based on emotion detection technique |
CN101485188A (en) * | 2006-07-06 | 2009-07-15 | KTF Co., Ltd. | Method and system for providing voice analysis service, and apparatus therefor |
- 2009-12-23: application CN200910214105A filed in China; published as CN101789990A (status: Pending)
Cited By (78)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101917585A (en) * | 2010-08-13 | 2010-12-15 | 宇龙计算机通信科技(深圳)有限公司 | Method, device and terminal for regulating video information sent from visual telephone to opposite terminal |
US8929949B2 (en) * | 2010-09-02 | 2015-01-06 | Lenovo (Beijing) Limited | Mobile terminal and transmission processing method thereof |
WO2012028107A1 (en) * | 2010-09-02 | 2012-03-08 | 联想(北京)有限公司 | Mobile terminal and transmission processing method thereof |
CN102387241A (en) * | 2010-09-02 | 2012-03-21 | 联想(北京)有限公司 | Mobile terminal and transmission processing method thereof |
CN103312897A (en) * | 2010-09-02 | 2013-09-18 | 联想(北京)有限公司 | Mobile terminal and sending processing method thereof |
CN102387241B (en) * | 2010-09-02 | 2015-09-23 | 联想(北京)有限公司 | A kind of mobile terminal and transmission processing method thereof |
US20130157719A1 (en) * | 2010-09-02 | 2013-06-20 | Yong Liu | Mobile terminal and transmission processing method thereof |
WO2012089906A1 (en) * | 2010-12-30 | 2012-07-05 | Nokia Corporation | Method, apparatus and computer program product for emotion detection |
CN103283267B (en) * | 2011-01-07 | 2016-06-22 | Apparatus and method for quantifying frustration via a user interface |
US9547408B2 (en) | 2011-01-07 | 2017-01-17 | Empire Technology Development Llc | Quantifying frustration via a user interface |
CN103283267A (en) * | 2011-01-07 | 2013-09-04 | 英派尔科技开发有限公司 | Quantifying frustration via a user interface |
WO2012153320A3 (en) * | 2011-05-09 | 2013-01-24 | MALAVIYA, Rakesh | System and method for personalized media rating and related emotional profile analytics |
WO2012153320A2 (en) * | 2011-05-09 | 2012-11-15 | MALAVIYA, Rakesh | System and method for personalized media rating and related emotional profile analytics |
CN102509550A (en) * | 2011-11-18 | 2012-06-20 | 中国联合网络通信集团有限公司 | Sound information processing method and user equipment |
CN103297742A (en) * | 2012-02-27 | 2013-09-11 | 联想(北京)有限公司 | Data processing method, microprocessor, communication terminal and server |
CN103366760A (en) * | 2012-03-26 | 2013-10-23 | 联想(北京)有限公司 | Method, device and system for data processing |
CN103390409A (en) * | 2012-05-11 | 2013-11-13 | 鸿富锦精密工业(深圳)有限公司 | Electronic device and method for sensing pornographic voice bands |
CN103514455A (en) * | 2012-06-19 | 2014-01-15 | 国际商业机器公司 | Recognition and feedback of facial and vocal emotions |
CN103514455B (en) * | 2012-06-19 | 2017-11-14 | 国际商业机器公司 | For characterizing the method and system of Emotive advisory |
CN102752704A (en) * | 2012-06-29 | 2012-10-24 | 华为终端有限公司 | Sound information processing method and terminal |
CN103546503B (en) * | 2012-07-10 | 2017-03-15 | 百度在线网络技术(北京)有限公司 | Voice-based cloud social intercourse system, method and cloud analysis server |
CN103546503A (en) * | 2012-07-10 | 2014-01-29 | 百度在线网络技术(北京)有限公司 | Voice-based cloud social system, voice-based cloud social method and cloud analysis server |
CN102780651A (en) * | 2012-07-21 | 2012-11-14 | 上海量明科技发展有限公司 | Method for inserting emotion data in instant messaging messages, client and system |
CN103596144A (en) * | 2012-08-14 | 2014-02-19 | 宏达国际电子股份有限公司 | Systems for providing emotional tone-based notifications for communications and related methods |
CN103873658B (en) * | 2012-12-13 | 2016-08-17 | 联想(北京)有限公司 | A kind of method controlling electronic equipment and electronic equipment |
CN103873658A (en) * | 2012-12-13 | 2014-06-18 | 联想(北京)有限公司 | Electronic device control method and electronic device |
CN103024521A (en) * | 2012-12-27 | 2013-04-03 | 深圳Tcl新技术有限公司 | Program screening method, program screening system and television with program screening system |
WO2013182118A1 (en) * | 2012-12-27 | 2013-12-12 | 中兴通讯股份有限公司 | Transmission method and device for voice data |
CN103024521B (en) * | 2012-12-27 | 2017-02-08 | 深圳Tcl新技术有限公司 | Program screening method, program screening system and television with program screening system |
CN103916536A (en) * | 2013-01-07 | 2014-07-09 | 三星电子株式会社 | Mobile device user interface method and system |
CN103916536B (en) * | 2013-01-07 | 2018-12-18 | 三星电子株式会社 | For the user interface method and system in mobile terminal |
CN104079703B (en) * | 2013-03-26 | 2019-03-29 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN104079703A (en) * | 2013-03-26 | 2014-10-01 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104113634A (en) * | 2013-04-22 | 2014-10-22 | 三星电子(中国)研发中心 | Voice processing method |
CN103269405A (en) * | 2013-05-23 | 2013-08-28 | 深圳市中兴移动通信有限公司 | Method and device for hinting friendlily |
CN103425247A (en) * | 2013-06-04 | 2013-12-04 | 深圳市中兴移动通信有限公司 | User reaction based control terminal and information processing method thereof |
CN104301498A (en) * | 2013-07-15 | 2015-01-21 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN104301498B (en) * | 2013-07-15 | 2018-07-03 | 联想(北京)有限公司 | The method and electronic equipment of information processing |
CN103491251A (en) * | 2013-09-24 | 2014-01-01 | 深圳市金立通信设备有限公司 | Method and terminal for monitoring user calls |
CN103634472A (en) * | 2013-12-06 | 2014-03-12 | 惠州Tcl移动通信有限公司 | Method, system and mobile phone for judging mood and character of user according to call voice |
CN104836979A (en) * | 2014-03-17 | 2015-08-12 | 腾讯科技(北京)有限公司 | Information interaction method and device |
CN103905644A (en) * | 2014-03-27 | 2014-07-02 | 郑明� | Generating method and equipment of mobile terminal call interface |
CN104683606A (en) * | 2015-02-06 | 2015-06-03 | 深圳市中兴移动通信有限公司 | Call data processing method and device |
CN104683606B (en) * | 2015-02-06 | 2018-08-14 | 努比亚技术有限公司 | Communicating data processing method and processing device |
CN105261362A (en) * | 2015-09-07 | 2016-01-20 | 科大讯飞股份有限公司 | Conversation voice monitoring method and system |
CN105227765A (en) * | 2015-09-10 | 2016-01-06 | 三星电子(中国)研发中心 | Interactive approach in communication process and system |
CN105260154A (en) * | 2015-10-15 | 2016-01-20 | 桂林电子科技大学 | Multimedia data display method and display apparatus |
CN105280187A (en) * | 2015-11-13 | 2016-01-27 | 上海斐讯数据通信技术有限公司 | Family emotion management device and method |
CN105554245A (en) * | 2015-12-04 | 2016-05-04 | 广东小天才科技有限公司 | Communication method and device |
CN106874265B (en) * | 2015-12-10 | 2021-11-26 | 深圳新创客电子科技有限公司 | Content output method matched with user emotion, electronic equipment and server |
CN106874265A (en) * | 2015-12-10 | 2017-06-20 | 深圳新创客电子科技有限公司 | A kind of content outputting method matched with user emotion, electronic equipment and server |
WO2017219450A1 (en) * | 2016-06-21 | 2017-12-28 | 中兴通讯股份有限公司 | Information processing method and device, and mobile terminal |
CN107993674A (en) * | 2016-10-27 | 2018-05-04 | 中兴通讯股份有限公司 | A kind of emotion control method and device |
CN106775665A (en) * | 2016-11-29 | 2017-05-31 | 竹间智能科技(上海)有限公司 | The acquisition methods and device of the emotional state change information based on sentiment indicator |
CN106649703A (en) * | 2016-12-20 | 2017-05-10 | 中国科学院深圳先进技术研究院 | Method and device for visualizing audio data |
JP2018170714A (en) * | 2017-03-30 | 2018-11-01 | 日本電気株式会社 | Information processing device, information processing method, information processing program, and information processing system |
US11227148B2 (en) | 2017-03-30 | 2022-01-18 | Nec Corporation | Information processing apparatus, information processing method, information processing program, and information processing system |
US10832803B2 (en) | 2017-07-19 | 2020-11-10 | International Business Machines Corporation | Automated system and method for improving healthcare communication |
US10825558B2 (en) | 2017-07-19 | 2020-11-03 | International Business Machines Corporation | Method for improving healthcare |
WO2019016647A1 (en) * | 2017-07-19 | 2019-01-24 | International Business Machines Corporation | Automated system and method for improving healthcare communication |
CN108173835A (en) * | 2017-12-25 | 2018-06-15 | 北京奇艺世纪科技有限公司 | A kind of method, apparatus of verification, server and terminal |
CN108173835B (en) * | 2017-12-25 | 2021-04-02 | 北京奇艺世纪科技有限公司 | Verification method, device, server and terminal |
CN108833721B (en) * | 2018-05-08 | 2021-03-12 | 广东小天才科技有限公司 | Emotion analysis method based on call, user terminal and system |
CN108833721A (en) * | 2018-05-08 | 2018-11-16 | 广东小天才科技有限公司 | Emotion analysis method based on call, user terminal and system |
CN109165571A (en) * | 2018-08-03 | 2019-01-08 | 北京字节跳动网络技术有限公司 | Method and apparatus for being inserted into image |
US11205290B2 (en) | 2018-08-03 | 2021-12-21 | Beijing Bytedance Network Technology Co., Ltd. | Method and device for inserting an image into a determined region of a target eye image |
CN109215683A (en) * | 2018-08-10 | 2019-01-15 | 维沃移动通信有限公司 | A kind of reminding method and terminal |
CN109215683B (en) * | 2018-08-10 | 2021-09-14 | 维沃移动通信有限公司 | Prompting method and terminal |
CN109087670A (en) * | 2018-08-30 | 2018-12-25 | 西安闻泰电子科技有限公司 | Mood analysis method, system, server and storage medium |
CN109040471A (en) * | 2018-10-15 | 2018-12-18 | Oppo广东移动通信有限公司 | Emotive advisory method, apparatus, mobile terminal and storage medium |
CN109256151A (en) * | 2018-11-21 | 2019-01-22 | 努比亚技术有限公司 | Call voice regulates and controls method, apparatus, mobile terminal and readable storage medium storing program for executing |
CN109256151B (en) * | 2018-11-21 | 2023-06-27 | 努比亚技术有限公司 | Call voice regulation and control method and device, mobile terminal and readable storage medium |
WO2020221089A1 (en) * | 2019-04-28 | 2020-11-05 | 上海掌门科技有限公司 | Call interface display method, electronic device and computer readable medium |
CN110086937A (en) * | 2019-04-28 | 2019-08-02 | 上海掌门科技有限公司 | Display methods, electronic equipment and the computer-readable medium of call interface |
CN111179943A (en) * | 2019-10-30 | 2020-05-19 | 王东 | Conversation auxiliary equipment and method for acquiring information |
WO2021083125A1 (en) * | 2019-10-31 | 2021-05-06 | Oppo广东移动通信有限公司 | Call control method and related product |
CN110809090A (en) * | 2019-10-31 | 2020-02-18 | Oppo广东移动通信有限公司 | Call control method and related product |
WO2022201196A1 (en) * | 2021-03-23 | 2022-09-29 | Rn Chidakashi Technologies Pvt Ltd | System and method for classifying activity of users based on micro-expression and emotion using ai |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101789990A (en) | Method and mobile terminal for judging emotion of opposite party in conversation process | |
CN111128223B (en) | Text information-based auxiliary speaker separation method and related device | |
CN110300001A (en) | Conference audio control method, system, equipment and computer readable storage medium | |
CN109658923A (en) | Voice quality detecting method, equipment, storage medium and device based on artificial intelligence | |
CN104103272B (en) | Audio recognition method, device and bluetooth earphone | |
CN104538043A (en) | Real-time emotion reminder for call | |
CN202150884U (en) | Handset mood-induction device | |
CN104144108B (en) | A kind of message responding method, apparatus and system | |
CN104766608A (en) | Voice control method and voice control device | |
CN108010513B (en) | Voice processing method and device | |
CN107886951A (en) | A kind of speech detection method, device and equipment | |
CN110265011A (en) | The exchange method and its electronic equipment of a kind of electronic equipment | |
CN101867742A (en) | Television system based on sound control | |
CN110931019B (en) | Public security voice data acquisition method, device, equipment and computer storage medium | |
CN112420049A (en) | Data processing method, device and storage medium | |
JP6268916B2 (en) | Abnormal conversation detection apparatus, abnormal conversation detection method, and abnormal conversation detection computer program | |
KR20190119521A (en) | Electronic apparatus and operation method thereof | |
CN113709291A (en) | Audio processing method and device, electronic equipment and readable storage medium | |
KR100463706B1 (en) | A system and a method for analyzing human emotion based on voice recognition through wire or wireless network | |
CN110197663B (en) | Control method and device and electronic equipment | |
CN110556114B (en) | Speaker identification method and device based on attention mechanism | |
CN105551504A (en) | Method and device for triggering function application of intelligent mobile terminal based on crying sound | |
CN111149153A (en) | Information processing apparatus and utterance analysis method | |
CN106790963B (en) | Audio signal control method and device | |
CN108040185A (en) | A kind of method and apparatus for identifying harassing call |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2010-07-28 |