CN103366760A - Method, device and system for data processing - Google Patents

Method, device and system for data processing

Info

Publication number
CN103366760A
Authority
CN
China
Prior art keywords
emotional state
voice
data processing
sound
sound emotional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100826597A
Other languages
Chinese (zh)
Inventor
刘斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN2012100826597A priority Critical patent/CN103366760A/en
Publication of CN103366760A publication Critical patent/CN103366760A/en
Pending legal-status Critical Current


Abstract

The invention provides a method, a device and a system for data processing. Audio data is collected and processed according to a preset classification strategy so as to determine and output the sound emotional state corresponding to the audio data. According to the invention, a user can see a frequency analysis chart of his or her own voice and a corresponding affinity chart, and thereby monitor his or her voice state, adjust his or her intonation in time, and effectively improve call efficiency.

Description

Data processing method, apparatus and system
Technical field
The present invention relates to the field of digital processing, and in particular to a data processing method, apparatus and system.
Background art
In interpersonal communication, the state of a person's voice determines the effectiveness of the communication. Pitch, timbre and loudness are the three elements of sound, and together they determine the affinity of a voice. In certain scenarios, such as an important meeting, telemarketing or an interview, the people involved need some understanding of their current voice state in order to carry out the task at hand more effectively.
For example, whether a telemarketer's voice has affinity often determines whether a deal can be closed. If telemarketers could detect the state of their own voice during a call and adjust it in time, they could achieve better results.
As another example, a reporter who is about to interview an important figure needs to adjust to his or her best voice state before the interview in order to make the interview more effective.
However, the prior art provides no method for detecting one's own voice state.
Summary of the invention
In view of this, the present invention provides a data processing method, apparatus and system to overcome the inability of the prior art to detect one's own voice state.
To achieve the above object, the present invention provides the following technical solutions:
A data processing method, applied to a first electronic device, comprising:
collecting audio data;
processing the audio data according to a preset classification strategy to determine the sound emotional state corresponding to the audio data; and
outputting the sound emotional state.
Preferably, collecting the audio data comprises:
recording the voice of the party whose voice state is to be detected and, when the voice is detected to be an analog voice signal, converting the analog voice signal into a digital voice signal;
correspondingly, processing the audio data according to the preset classification strategy means processing the digital voice signal according to the preset classification strategy.
Preferably, processing the audio data according to the preset classification strategy specifically comprises:
performing frequency decomposition on the audio data and comparing the result with the characteristic frequencies of the sound emotional states.
Preferably, the sound emotional state comprises positive emotions and negative emotions.
Preferably, after the sound emotional state is output, the method further comprises:
assigning the audio data an identifier of the corresponding sound emotional state, and storing the identifier.
Preferably, outputting the sound emotional state specifically comprises outputting the sound emotional state by voice and/or on a display.
Preferably, outputting the sound emotional state comprises:
outputting the sound emotional state in real time, or outputting it when a predetermined event occurs.
Preferably, after the sound emotional state is output, the method further comprises:
judging whether the sound emotional state is a negative emotion and, if so, performing a predetermined operation.
Preferably, performing the predetermined operation comprises:
outputting a preset voice prompt and/or displaying a preset picture.
Preferably, performing the predetermined operation specifically comprises:
establishing a connection channel between the first electronic device and a second electronic device, and sending preset information to the second electronic device through the connection channel.
A data processing apparatus, comprising:
an audio collection unit, configured to collect audio data;
a processing unit, configured to process the audio data according to a preset classification strategy and determine the sound emotional state corresponding to the audio data; and
a first display unit, configured to output the sound emotional state.
Preferably, the audio collection unit comprises:
a recording unit, configured to record the voice of the party whose voice state is to be detected; and
a detection unit, configured to convert the analog voice signal into a digital voice signal when the voice is detected to be an analog voice signal.
Preferably, the processing unit comprises:
a decomposition unit, configured to perform frequency decomposition on the audio data; and
a comparison unit, configured to compare the result with the characteristic frequencies of the sound emotional states.
Preferably, the apparatus further comprises:
a storage unit, configured to assign the audio data an identifier of the corresponding sound emotional state and store the identifier.
Preferably, the first display unit comprises:
a first voice output unit, configured to output the determined sound emotional state by voice;
and/or
a first display output unit, configured to output the determined sound emotional state on a display.
Preferably, the first display unit further comprises:
a setting unit, configured to set an event-occurrence condition so that the sound emotional state is output when the predetermined event occurs.
Preferably, the apparatus further comprises:
a judging unit, configured to detect whether the sound emotional state is a negative emotion and, if so, perform a predetermined operation.
Preferably, the apparatus further comprises:
a second display unit, configured to output a preset voice prompt and/or display a preset picture.
Preferably, the apparatus further comprises:
a sending unit, configured to establish a connection channel between the first electronic device and a second electronic device and send preset information to the second electronic device through the connection channel.
A data processing system comprising all of the apparatus described above.
As can be seen from the above technical solutions, compared with the prior art, the present invention provides a data processing method, apparatus and system that collect audio data, process the audio data according to a preset classification strategy, determine the sound emotional state corresponding to the audio data, and output that state. With the present invention, users can see a frequency analysis chart of their own voice and a corresponding affinity chart, monitor their voice state, and adjust their intonation in time, effectively improving communication efficiency.
Description of drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are introduced briefly below. Obviously, the drawings described below are merely embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a data processing method provided by an embodiment of the invention;
Fig. 2 is another flowchart of a data processing method provided by an embodiment of the invention;
Fig. 3 is another flowchart of a data processing method provided by an embodiment of the invention;
Fig. 4 is another flowchart of a data processing method provided by an embodiment of the invention;
Fig. 5 is a structural diagram of a data processing apparatus provided by an embodiment of the invention;
Fig. 6 is another structural diagram of a data processing apparatus provided by an embodiment of the invention;
Fig. 7 is another structural diagram of a data processing apparatus provided by an embodiment of the invention;
Fig. 8 is another structural diagram of a data processing apparatus provided by an embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
The invention provides a data processing method, apparatus and system that collect audio data, process the audio data according to a preset classification strategy, determine the sound emotional state corresponding to the audio data, and output that state. With the invention, users can see a frequency analysis chart of their own voice and a corresponding affinity chart, monitor their voice state, and adjust their intonation in time, effectively improving communication efficiency.
Embodiment one
The data processing method provided by the invention can be applied in scenarios such as telemarketing and interviews; telemarketing is used as the example scenario below.
Referring to Fig. 1, a flowchart of a data processing method provided by the invention, the method comprises the steps:
S101: collect audio data.
S102: process the audio data according to a preset classification strategy and determine the sound emotional state corresponding to the audio data.
S103: output the sound emotional state.
When the caller and the callee are in a voice call, the caller's call voice is collected dynamically and converted from an analog signal into a digital voice signal. The converted digital voice signal is then decomposed; there are several ways to do this, for example decomposing according to the frequency content of the digital voice signal. The frequencies of the decomposed digital voice signal are then matched against preset characteristic frequencies. These preset characteristic frequencies are configured by users according to their own needs and are stored in the telephone terminal and in a network database. The sound emotional state comprises positive emotions and negative emotions. Finally, the spectrum of the digital voice signal is classified according to the matching results and shown on the electronic device, for example in the form of a speech analysis curve or a chart.
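The pipeline above (collect, decompose by frequency, match against preset characteristic frequencies, output a state) can be sketched as follows. This is a minimal illustration only: the frequency bands, the state labels, and the use of a single spectral peak as the matching feature are assumptions, since the patent leaves the classification strategy user-configurable.

```python
import numpy as np

# Illustrative mapping from a dominant-frequency feature to an emotional
# state; in the patent the classification strategy is user-configured.
STATE_BANDS = [(0.0, 150.0, "negative"), (150.0, 300.0, "positive")]

def classify(samples, sample_rate):
    """S101-S103 sketch: spectral peak of the audio -> sound emotional state."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
    for lo, hi, state in STATE_BANDS:
        if lo <= peak < hi:
            return peak, state
    return peak, "unknown"

rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 220.0 * t)  # one second of a 220 Hz synthetic "voice"
peak, state = classify(tone, rate)
```

In a real terminal the `samples` would come from the A/D-converted call audio rather than a synthetic tone, and the bands would be replaced by the user-stored characteristic frequencies.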
With the invention, users can learn in time how their voice sounds to the other party. In addition, users can rehearse their speech in advance when not on a call and decide whether they are in a fit state for a particular visit that day. The invention also has a wide range of applications: users only need to install it on their own mobile phones, and it is easy to operate and practical. During a call, users can see a frequency analysis chart of their own voice and a corresponding affinity chart in time, adjust their intonation promptly, and effectively improve communication efficiency.
Embodiment two
Fig. 2 is a flowchart of a preferred embodiment of the data processing method provided by the invention. It differs from embodiment one in that collecting the audio data comprises: recording the voice of the party whose voice state is to be detected and, when the voice is detected to be an analog voice signal, converting the analog voice signal into a digital voice signal; correspondingly, processing the audio data according to the preset classification strategy means processing the digital voice signal according to that strategy, specifically by performing frequency decomposition on the audio data and comparing the result with the characteristic frequencies of the sound emotional states. The specific steps are:
S201: record the voice of the party whose voice state is to be detected and, when the voice is detected to be an analog voice signal, convert the analog voice signal into a digital voice signal.
S202: perform frequency decomposition on the digital voice signal, compare the result with the characteristic frequencies of the sound emotional states, and determine the sound emotional state corresponding to the audio data.
Usually, the sounds we hear and the images we see are analog signals. In this embodiment, the analog voice signal during a call is recorded, stored, and converted into a digital voice signal through A/D conversion. Spectrum analysis is then performed on the digital voice signal and the result is matched against the characteristic frequencies of the predefined sound states. It should be noted that the sound emotional state comprises positive emotions and negative emotions: positive emotions include, for example, excitement, relaxation, gratitude and confidence, while negative emotions include, for example, sadness, boredom, pain and melancholy. Different sound states have different characteristic frequencies. Suppose the characteristic values of the states excited, confident, grateful, relaxed, sad, melancholy, bored and pained are 4, 3, 2, 1, -1, -2, -3 and -4 respectively; then, when the value analyzed for the current digital voice signal is 1, the sound emotional state corresponding to the audio data is determined to be the "relaxed" state.
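The matching step of S202 can be illustrated directly with the example values given above. The characteristic values 4 through -4 follow the text; the English labels and the `match_state` helper name are translation and illustration choices, not defined by the patent.

```python
# Characteristic values and state labels from the worked example in the text.
CHARACTERISTIC = {
    4: "excited", 3: "confident", 2: "grateful", 1: "relaxed",
    -1: "sad", -2: "melancholy", -3: "bored", -4: "pained",
}

def match_state(value):
    """Map an analyzed characteristic value to a state label and its polarity."""
    state = CHARACTERISTIC.get(value, "unknown")
    polarity = "positive" if value > 0 else "negative"
    return state, polarity

# The text's example: an analyzed value of 1 yields the "relaxed" state.
state, polarity = match_state(1)
```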
S103: output the sound emotional state.
The sound emotional state determined for the audio data is output.
With the invention, users can learn their own voice state during a call. In addition, users can rehearse their speech in advance when not on a call and decide whether they are in a fit state for a particular visit that day. The invention also has a wide range of applications: users only need to install it on their own mobile phones, and it is easy to operate and practical. During a call, users can see a frequency analysis chart of their own voice and a corresponding affinity chart in time, adjust their intonation promptly, and effectively improve communication efficiency.
Embodiment three
Embodiment three differs from embodiment two above in that the following steps are added. As shown in Fig. 3, the data processing method further comprises step S104: assign the audio data an identifier of the corresponding sound emotional state, and store it.
Step S104 is added to give the user more intelligent operation: an identifier of the corresponding sound emotional state is assigned to the audio data. For example, if the sound state corresponding to the currently analyzed analog voice signal is "sad", the "sad" state is assigned an identifier corresponding to this voice, and the identifier is stored so that it can serve as a reference the next time the user performs speech analysis.
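A minimal sketch of step S104's tagging and storage, assuming a simple in-memory record list; the `EmotionLog` name, clip identifiers, and record layout are illustrative, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class EmotionLog:
    """S104 sketch: tag each analyzed clip with its emotional state."""
    records: list = field(default_factory=list)

    def tag(self, clip_id, state):
        # Assign the clip an identifier/state pair and store it.
        record = {"clip": clip_id, "state": state}
        self.records.append(record)
        return record

    def history(self, state):
        # Stored identifiers serve as a reference for the next analysis.
        return [r["clip"] for r in self.records if r["state"] == state]

log = EmotionLog()
log.tag("call-001", "sad")
log.tag("call-002", "relaxed")
```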
In addition, outputting the sound emotional state is specifically step S303: output the sound emotional state by voice and/or on a display. For voice output, a preset sound for the emotional state (such as "excited") can be announced; for display output, an expression or a specific picture corresponding to the sound emotional state can be shown. No specific restriction is imposed here.
Referring to Fig. 4, a flowchart of a preferred embodiment provided by the invention, outputting the sound emotional state comprises step S403: output the sound emotional state in real time, or output it when a predetermined event occurs.
It should be noted that, during a call, the sound emotional state can be output by collecting the voice signal in the call, analyzing and matching it, and outputting the resulting state. When the user needs to detect the voice state outside a call, voice collection and analysis can also be performed on demand. The predetermined event may be, for example, the end of the call, or the call reaching a specific duration.
After the sound emotional state is output, the method further comprises the step:
S105: judge whether the sound emotional state is a negative emotion and, if so, perform a predetermined operation.
The predetermined operation may comprise: outputting a preset voice prompt and/or displaying a preset picture.
The predetermined operation may also be the step:
S106: establish a connection channel between the first electronic device and a second electronic device, and send preset information to the second electronic device through the connection channel.
When the output emotional state is detected to be a negative emotion, a corresponding encouragement can be given, such as an encouraging voice message like "This voice state is not great, but don't be discouraged; next time will be better" or "Believe in yourself; as long as you work hard at it, next time is sure to be 'excited'". A related encouraging picture can also be output.
A specific implementation of step S106 may be as follows: the first terminal sends a connection request identifier to the second terminal; once the second terminal has received the identifier and the channel between them has been established, the first terminal can send the second terminal, for example, an apologetic text message. The connection between the first terminal and the second terminal can be made by text message, Bluetooth, or a wired transmission device such as a data cable.
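Steps S105-S106 can be sketched as follows, with a toy `Channel` object standing in for the SMS/Bluetooth/wired channel; the class names, the set of negative states, and the preset message are all illustrative assumptions.

```python
NEGATIVE_STATES = {"sad", "melancholy", "bored", "pained"}

class Channel:
    """Toy stand-in for the SMS/Bluetooth/wired channel of step S106."""
    def __init__(self):
        self.delivered = []

    def send(self, message):
        self.delivered.append(message)
        return True

def handle_state(state, channel, preset="Sorry about my tone just now."):
    # S105: only a negative emotional state triggers the predetermined
    # operation; S106: the preset message goes out over the channel.
    if state in NEGATIVE_STATES:
        return channel.send(preset)
    return False

ch = Channel()
sent = handle_state("sad", ch)
```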
With the invention, users can learn in time how their voice sounds to the other party. In addition, users can rehearse their speech in advance when not on a call and decide whether they are in a fit state for a particular visit that day. The invention also has a wide range of applications: users only need to install it on their own mobile phones, and it is easy to operate and practical. During a call, users can see a frequency analysis chart of their own voice and a corresponding affinity chart in time, adjust their intonation promptly, and effectively improve communication efficiency.
The embodiments provided above describe the method of the invention in detail. The method can be implemented by apparatus in various forms, so the invention also provides an apparatus; specific embodiments are given and described in detail below.
Referring to Fig. 5, a structural diagram of a data processing apparatus provided by the invention, the apparatus comprises:
an audio collection unit 101, configured to collect audio data;
a processing unit 102, configured to process the audio data according to a preset classification strategy and determine the sound emotional state corresponding to the audio data; and
a first display unit 103, configured to output the sound emotional state.
Preferably, as shown in Fig. 6, the audio collection unit 101 comprises:
a recording unit 1011, configured to record the voice of the party whose voice state is to be detected; and
a detection unit 1012, configured to convert the analog voice signal into a digital voice signal when the voice is detected to be an analog voice signal.
Preferably, the processing unit 102 comprises:
a decomposition unit 1021, configured to perform frequency decomposition on the audio data; and
a comparison unit 1022, configured to compare the result with the characteristic frequencies of the sound emotional states.
Referring to Fig. 7, the apparatus preferably further comprises:
a storage unit 104, configured to assign the audio data an identifier of the corresponding sound emotional state and store the identifier.
The first display unit comprises:
a first voice output unit 1031, configured to output the determined sound emotional state by voice;
and/or
a first display output unit 1032, configured to output the determined sound emotional state on a display.
Preferably, the first display unit 103 further comprises:
a setting unit 1033, configured to set an event-occurrence condition so that the sound emotional state is output when the predetermined event occurs.
Referring to Fig. 8, the data processing apparatus provided by the invention further comprises:
a judging unit 105, configured to detect whether the sound emotional state is a negative emotion and, if so, perform a predetermined operation;
a second display unit 106, configured to output a preset voice prompt and/or display a preset picture; and
a sending unit 107, configured to establish a connection channel between the first electronic device and a second electronic device and send preset information to the second electronic device through the connection channel.
In summary, with this solution users can see a frequency analysis chart of their own voice and a corresponding affinity chart in time during a call, adjust their intonation promptly, and effectively improve communication efficiency.
The invention also provides a data processing system comprising all of the apparatus described above.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the parts that are identical or similar the embodiments can be consulted against one another. Since the apparatus provided by the embodiments corresponds to the method provided by the embodiments, its description is relatively brief; for the relevant parts, refer to the description of the method.
The above description of the provided embodiments enables those skilled in the art to implement or use the invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

1. A data processing method, applied to a first electronic device, characterized by comprising:
collecting audio data;
processing the audio data according to a preset classification strategy to determine the sound emotional state corresponding to the audio data; and
outputting the sound emotional state.
2. The data processing method according to claim 1, characterized in that collecting the audio data comprises:
recording the voice of the party whose voice state is to be detected and, when the voice is detected to be an analog voice signal, converting the analog voice signal into a digital voice signal;
correspondingly, processing the audio data according to the preset classification strategy means processing the digital voice signal according to the preset classification strategy.
3. The data processing method according to claim 1, characterized in that processing the audio data according to the preset classification strategy specifically comprises:
performing frequency decomposition on the audio data and comparing the result with the characteristic frequencies of the sound emotional states.
4. The data processing method according to claim 1, characterized in that the sound emotional state comprises positive emotions and negative emotions.
5. The data processing method according to claim 1, characterized in that, after the sound emotional state is output, the method further comprises:
assigning the audio data an identifier of the corresponding sound emotional state, and storing the identifier.
6. The data processing method according to claim 1, characterized in that outputting the sound emotional state specifically comprises outputting the sound emotional state by voice and/or on a display.
7. The data processing method according to claim 1, characterized in that outputting the sound emotional state comprises:
outputting the sound emotional state in real time, or outputting it when a predetermined event occurs.
8. The data processing method according to claim 1, characterized in that, after the sound emotional state is output, the method further comprises:
judging whether the sound emotional state is a negative emotion and, if so, performing a predetermined operation.
9. The data processing method according to claim 8, characterized in that performing the predetermined operation comprises:
outputting a preset voice prompt and/or displaying a preset picture.
10. The data processing method according to claim 8, characterized in that performing the predetermined operation specifically comprises:
establishing a connection channel between the first electronic device and a second electronic device, and sending preset information to the second electronic device through the connection channel.
11. A data processing apparatus, characterized by comprising:
an audio collection unit, configured to collect audio data;
a processing unit, configured to process the audio data according to a preset classification strategy and determine the sound emotional state corresponding to the audio data; and
a first display unit, configured to output the sound emotional state.
12. The data processing apparatus according to claim 11, characterized in that the audio collection unit comprises:
a recording unit, configured to record the voice of the party whose voice state is to be detected; and
a detection unit, configured to convert the analog voice signal into a digital voice signal when the voice is detected to be an analog voice signal.
13. The data processing apparatus according to claim 11, characterized in that the processing unit comprises:
a decomposition unit, configured to perform frequency decomposition on the audio data; and
a comparison unit, configured to compare the result with the characteristic frequencies of the sound emotional states.
14. The data processing apparatus according to claim 11, characterized by further comprising:
a storage unit, configured to assign the audio data an identifier of the corresponding sound emotional state and store the identifier.
15. The data processing apparatus according to claim 11, characterized in that the first display unit comprises:
a first voice output unit, configured to output the determined sound emotional state by voice;
and/or
a first display output unit, configured to output the determined sound emotional state on a display.
16. The data processing apparatus according to claim 11, characterized in that the first display unit further comprises:
a setting unit, configured to set an event-occurrence condition so that the sound emotional state is output when the predetermined event occurs.
17. The data processing apparatus according to claim 11, characterized by further comprising:
a judging unit, configured to detect whether the sound emotional state is a negative emotion and, if so, perform a predetermined operation.
18. The data processing apparatus according to claim 11, characterized by further comprising:
a second display unit, configured to output a preset voice prompt and/or display a preset picture.
19. The data processing apparatus according to claim 11, characterized by further comprising:
a sending unit, configured to establish a connection channel between the first electronic device and a second electronic device and send preset information to the second electronic device through the connection channel.
20. A data processing system, characterized by comprising the data processing apparatus of any one of claims 11 to 19.
CN2012100826597A 2012-03-26 2012-03-26 Method, device and system for data processing Pending CN103366760A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100826597A CN103366760A (en) 2012-03-26 2012-03-26 Method, device and system for data processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012100826597A CN103366760A (en) 2012-03-26 2012-03-26 Method, device and system for data processing

Publications (1)

Publication Number Publication Date
CN103366760A true CN103366760A (en) 2013-10-23

Family

ID=49367957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100826597A Pending CN103366760A (en) 2012-03-26 2012-03-26 Method, device and system for data processing

Country Status (1)

Country Link
CN (1) CN103366760A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104538043A * 2015-01-16 2015-04-22 Beijing University of Posts and Telecommunications Real-time emotion reminder for calls
CN104754150A * 2015-03-05 2015-07-01 Shanghai Phicomm Data Communication Technology Co., Ltd. Emotion acquisition method and system
CN104851422A * 2015-06-09 2015-08-19 Zhang Weixiu Voice signal processing method and system
CN104915174A * 2014-03-11 2015-09-16 Alibaba Group Holding Ltd. Method and apparatus for feeding back a user's sound signal
CN106910512A * 2015-12-18 2017-06-30 Ricoh Co., Ltd. Voice file analysis method, apparatus and system
CN106910513A * 2015-12-22 2017-06-30 Microsoft Technology Licensing, LLC Emotional intelligence chat engine
CN107004428A * 2014-12-01 2017-08-01 Yamaha Corporation Conversation evaluation device and method
CN109599127A * 2017-09-29 2019-04-09 Panasonic Intellectual Property Management Co., Ltd. Information processing method, information processing device, and information processing program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1586078A * 2001-11-13 2005-02-23 Koninklijke Philips Electronics N.V. Affective television monitoring and control
CN101645961A * 2008-08-06 2010-02-10 Shenzhen Futaihong Precision Industry Co., Ltd. Mobile phone and method for achieving caller emotion identification
CN101789990A * 2009-12-23 2010-07-28 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Method and mobile terminal for judging the emotion of the opposite party during a conversation
CN101894550A * 2010-07-19 2010-11-24 Southeast University Speech emotion classifying method based on emotion feature optimization
EP2515242A2 * 2011-04-21 2012-10-24 Palo Alto Research Center Incorporated Incorporating lexicon knowledge to improve sentiment classification

Similar Documents

Publication Publication Date Title
CN103366760A (en) Method, device and system for data processing
CN101502089B (en) Method for carrying out an audio conference, audio conference device, and method for switching between encoders
EP2814244A1 (en) A method and a system for improving communication quality of a video conference
CN106024015A (en) Call center agent monitoring method and system
CN103139351A (en) Volume control method and device, and communication terminal
KR20140071831A Mobile terminal and method for receiving call
US20100220844A1 (en) Method and arrangement for capturing of voice during a telephone conference
CN111508531B (en) Audio processing method and device
CN105120063A Method for prompting input-voice volume, and electronic device
CN105282339B Method, device and mobile terminal for monitoring the working status of a microphone
CN105376515A (en) Method, apparatus and system for presenting communication information in video communication
CN106302997A Output control method, electronic device and system
CN104092809A (en) Communication sound recording method and recorded communication sound playing method and device
CN105874517B Server for providing a quieter open-space working environment
JP2019153099A (en) Conference assisting system, and conference assisting program
CN103297896A (en) Audio output method and electronic equipment
CN101699837B (en) Telephone voice output gain adjustment method, device and communication terminal
CN106911832A Voice recording method and device
CN105657156A (en) Incoming call ring tone customizing method and terminal
CN110336919A Audio communication system of an intelligent monitoring device and call scheme thereof
CN102695151A (en) Method and terminal for sending prompt information for slow listening of mobile phone
CN112788489B (en) Control method and device and electronic equipment
US11783837B2 (en) Transcription generation technique selection
CN101834957A (en) Incoming call managing method and system based on home gateway
CN110517678B (en) AI voice response system based on visual sense

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20131023