CN106548788A - Intelligent emotion determination method and system - Google Patents

Intelligent emotion determination method and system

Info

Publication number
CN106548788A
CN106548788A (application number CN201510613689.XA)
Authority
CN
China
Prior art keywords
audio
audio information
audio segment
frequency
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510613689.XA
Other languages
Chinese (zh)
Other versions
CN106548788B (en)
Inventor
刘振虎 (Liu Zhenhu)
许玲玲 (Xu Lingling)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Group Shandong Co Ltd
Original Assignee
China Mobile Group Shandong Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Group Shandong Co Ltd filed Critical China Mobile Group Shandong Co Ltd
Priority to CN201510613689.XA priority Critical patent/CN106548788B/en
Publication of CN106548788A publication Critical patent/CN106548788A/en
Application granted granted Critical
Publication of CN106548788B publication Critical patent/CN106548788B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L25/48 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
    • G10L25/51 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 — Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Child & Adolescent Psychology (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the present invention provide an intelligent emotion determination method and system. Audio of a call between a person under detection and a user is obtained. From the audio segments that make up the call audio, abnormal-emotion audio segments are determined, where an abnormal-emotion audio segment is an audio segment whose preset audio information characterizing abnormal emotion of the person under detection meets a corresponding preset condition. When an abnormal-emotion audio segment can be determined, it is determined that the person under detection corresponding to that audio segment has abnormal emotion. The analysis is more objective, and the conclusion on whether the person under detection has abnormal emotion is more accurate. The present invention relates to the field of computer technology.

Description

Intelligent emotion determination method and system
Technical field
The present invention relates to the field of computer technology, and in particular to an intelligent emotion determination method and system.
Background technology
As society develops and science and technology advance, material life has become richer, but social competition has also grown fiercer. Under intense competition people inevitably develop negative emotions, which affect not only their work but also their physical and mental health. The customer service agent who answers calls is taken as an example below.
Customer service agents solve problems for users over the phone. Over a long period of service they encounter users of every kind, their minds and bodies bear enormous pressure that is difficult to relieve, and abnormal emotions arise easily, which may lower work efficiency and lead to communication breakdowns with users. In the prior art, the methods and channels for relieving agents' pressure remain to be expanded: emotion management for agents relies mainly on traditional face-to-face interviews and paper questionnaires to assess mental state, which often fail to reflect an employee's real state of mind.
To solve this problem, the prior art proposes an emotion recording, analysis, and guidance method, which mainly includes the following steps: reading, via an emotion management platform, the mood data recorded and submitted by the agent; collecting physiological data such as heart rate and body temperature with a smart watch; and analyzing these data to determine whether the agent currently has abnormal emotion.
This prior-art method does realize the analysis and guidance of abnormal emotion, but its abnormal-emotion data depend on the agent's own data entry and on physiological data collected by dedicated equipment. Self-reported mood data are inevitably too subjective, and physiological data vary with each agent's physical condition; the abnormal emotion obtained by such analysis is therefore not accurate enough.
Summary of the invention
Embodiments of the present invention provide an intelligent emotion determination method and system to solve the prior-art problem of inaccurate determination of abnormal emotion.
In view of the above problems, an intelligent emotion determination method provided by an embodiment of the present invention includes:
obtaining audio of a call between a person under detection and a user;
from the audio segments that make up the audio, determining abnormal-emotion audio segments,
where an abnormal-emotion audio segment is an audio segment whose preset audio information characterizing abnormal emotion of the person under detection meets a corresponding preset condition;
when an abnormal-emotion audio segment can be determined, determining that the person under detection corresponding to that audio segment has abnormal emotion.
An intelligent emotion determination system provided by an embodiment of the present invention includes:
an audio collection module, configured to obtain audio of a call between a person under detection and a user;
a speech waveform analysis module, configured to determine abnormal-emotion audio segments from the audio segments that make up the audio, where an abnormal-emotion audio segment is an audio segment whose preset audio information characterizing abnormal emotion of the person under detection meets a corresponding preset condition, and, when an abnormal-emotion audio segment can be determined, to determine that the person under detection corresponding to that audio segment has abnormal emotion.
The beneficial effects of the embodiments of the present invention include:
The intelligent emotion determination method and system provided by the embodiments obtain audio of a call between a person under detection and a user, determine abnormal-emotion audio segments from the segments that make up the audio, and, when such a segment can be determined, determine that the person under detection corresponding to that segment has abnormal emotion. Because a person's speech tends to reflect their current mood, the call audio objectively reflects the current emotional state of the person under detection. Compared with the prior art, in which abnormal-emotion data depend on the agent's own data entry and on physiological data collected by dedicated equipment, this analysis is more objective, and its conclusion on whether the person under detection has abnormal emotion is more accurate. Moreover, in the prior art the collection of abnormal-emotion data requires the assistance of dedicated external equipment, which increases cost.
Description of the drawings
Fig. 1 is a flow chart of an intelligent emotion determination method provided by an embodiment of the present invention;
Fig. 2 is a flow chart of an intelligent emotion determination method provided by Embodiment 1 of the present invention;
Fig. 3 is a flow chart of an intelligent emotion guidance method provided by Embodiment 2 of the present invention;
Fig. 4 is a flow chart of the cooperation among the intelligent interaction module, the interaction injection module, and the sensitive-language collection module provided by an embodiment of the present invention;
Fig. 5 is a first schematic structural diagram of an intelligent emotion determination system provided by an embodiment of the present invention;
Fig. 6 is a second schematic structural diagram of an intelligent emotion determination system provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention provide an intelligent emotion determination method and system. Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are only used to describe and explain the present invention and are not intended to limit it. Where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with one another.
An embodiment of the present invention provides an intelligent emotion determination method, as shown in Fig. 1, including:
S101: obtaining audio of a call between a person under detection and a user.
S102: from the audio segments that make up the audio obtained in S101, determining abnormal-emotion audio segments,
where an abnormal-emotion audio segment is an audio segment whose preset audio information characterizing abnormal emotion of the person under detection meets a corresponding preset condition.
S103: when an abnormal-emotion audio segment can be determined, determining that the person under detection corresponding to that audio segment has abnormal emotion.
The method and related devices provided by the present invention are described in detail below with specific embodiments and with reference to the accompanying drawings.
Embodiment 1:
Embodiment 1 of the present invention provides an intelligent emotion determination method, as shown in Fig. 2, which specifically includes the following steps:
S201: obtaining audio of a call between a person under detection and a user.
Taking an agent who answers calls as an example and treating the agent as the person under detection, the microphone input of the agent's seat terminal can be captured in real time as the call audio between the person under detection and the user, and a voice file containing that audio is obtained.
Further, an embodiment of the present invention implements an intelligent emotion determination system that may be composed of multiple functional modules; this step may be executed by the audio collection module of the system.
S202: converting the audio obtained in S201 into structured audio text.
In this step, a text index can be built by existing speech transcription means, converting the unstructured voice file into structured text information, i.e., audio text, which lays the foundation for the subsequent analysis of the audio.
Further, in the intelligent emotion determination system provided by the embodiments, this step may be executed by the audio collection module. The subsequent audio analysis steps may be performed by the speech waveform analysis module, so the audio text generated in this step can be sent by the audio collection module to the speech waveform analysis module, together with information about the person under detection such as employee ID, login time, accumulated working time, and historical emotion level, for use by the speech waveform analysis module when analyzing the audio.
Further, since what is analyzed is the audio of the person under detection, and different regions have different dialects, the embodiments also consider the problem of accent. When the intelligent emotion determination method is actually used, in order to convert the audio into structured audio text more accurately, the acoustic model should be adapted to the accents of each region so that accents are broadly covered, and the speech model should be optimized with professional knowledge and the hotline's service scope to improve transcription accuracy.
S203: parsing the audio text obtained in S202 and determining the preset audio information contained in each audio text segment corresponding to each audio segment,
where the preset audio information contains at least one of the following: keyword information, emotion detection information, and silence duration information.
Further, in the intelligent emotion determination system provided by the embodiments, this step may be executed by the speech waveform analysis module. The speech waveform analysis module can detect variations in the fundamental frequency, pitch, and other features of the audio, provide a prediction of possible abnormal emotion, locate the position of the abnormal-emotion audio within the whole recording, and detect and analyze changes in speech rate, silence duration, and so on.
In a specific implementation, this information can be written into an index file in XML format, which may include one or more of the following:
the speech of both parties to the call contained in the audio text, i.e., that of the person under detection and of the user;
speech rate information, i.e., the audio text segments in which a party's speech rate momentarily exceeds the average rate;
speech endpoint information for the call and average speech rate information, i.e., the start and end times of each utterance by each party in any audio text segment, the speech rate of each utterance (in words per second), and the average speech rate of each party within any audio text segment (in words per second);
peak amplitude information and/or the frequency at which amplitude peaks occur within an audio text segment;
pitch information within an audio text segment, which may be the highest frequency and the maximum volume;
fundamental frequency information within an audio text segment.
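As a rough illustration of the XML index file described above, the following sketch serializes a few of the listed fields with Python's standard library. All element and attribute names are hypothetical, since the patent does not fix a schema:

```python
import xml.etree.ElementTree as ET

def build_index(segments):
    """Build a minimal XML index for analyzed call segments.

    `segments` is a list of dicts with illustrative keys: speaker,
    start/end times, average speech rate, peak amplitude, and
    fundamental frequency.
    """
    root = ET.Element("call_index")
    for seg in segments:
        node = ET.SubElement(root, "segment",
                             speaker=seg["speaker"],
                             start=str(seg["start"]),
                             end=str(seg["end"]))
        ET.SubElement(node, "avg_speech_rate").text = str(seg["rate"])       # words/second
        ET.SubElement(node, "peak_amplitude").text = str(seg["amplitude"])
        ET.SubElement(node, "fundamental_freq").text = str(seg["f0"])        # Hz
    return ET.tostring(root, encoding="unicode")

xml_text = build_index([{"speaker": "agent", "start": 0.0, "end": 12.5,
                         "rate": 4.2, "amplitude": 0.8, "f0": 180.0}])
```

In practice one segment element per audio text segment would carry the full set of fields listed above; this sketch only shows the serialization pattern.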
Further, the division of the audio into audio segments, i.e., audio text segments, can be carried out as actually needed: an audio section of a set duration may serve as one audio segment, or one call may serve as one audio segment.
After the index file is generated, the preset audio information contained in each audio text segment can be determined from it. That is, the speech waveform analysis module can search the index file for the preset audio information to be analyzed (one or more of keyword information, emotion detection information, and long-silence information) and return the audio of interest that may contain abnormal emotion.
A specific implementation may perform the following steps:
Step 1: when the preset audio information contains keyword information, for each of the audio segments, comparing the audio text segment corresponding to that audio segment with the preset keywords, and determining the preset keywords contained in the audio text segment together with the start time and end time at which each preset keyword appears;
In Step 1, a preset keyword can be a character, word, or phrase, and words that characterize abnormal emotion can be set as preset keywords according to actual needs. When multiple keywords are set, a keyword list can be generated and each audio text segment compared against the keywords in the list, yielding a list of the audio text segments that contain any one or more keywords in the list, together with the start and end times at which each keyword appears in each audio text segment.
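As an illustration of Step 1, the keyword scan over one transcribed segment might look like the following minimal sketch, assuming the transcription engine emits per-word timestamps (the tuple layout and the example keywords are hypothetical):

```python
def find_keywords(segment_words, keyword_list):
    """Scan one transcribed segment for preset emotion keywords.

    `segment_words` is a list of (word, start_time, end_time) tuples;
    `keyword_list` holds the preset words that characterize abnormal
    emotion. Returns each hit with its start and end times.
    """
    hits = []
    for word, start, end in segment_words:
        if word in keyword_list:
            hits.append({"keyword": word, "start": start, "end": end})
    return hits

words = [("this", 0.0, 0.2), ("is", 0.2, 0.3), ("unbearable", 0.3, 0.9)]
hits = find_keywords(words, {"unbearable", "fed-up"})
# hits → [{'keyword': 'unbearable', 'start': 0.3, 'end': 0.9}]
```

A production system would match phrases as well as single words, but the returned (keyword, start, end) records mirror the index entries Step 1 describes.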
Step 2: when the preset audio information contains emotion detection information, for each of the audio segments, determining the values of one or more of the following indices characterizing emotion detection in the corresponding audio text segment: the speech rate, amplitude, frequency, volume, and fundamental frequency of the parties to the call; determining the indices whose values reach the corresponding index thresholds in that audio text segment, together with the start and end times within the segment at which those threshold-reaching values occur;
In Step 2, the emotion detection indices can be characterized by one or more of the following: the parties' speech rate, amplitude, frequency, volume, and fundamental frequency. Each index has a corresponding threshold, which can be set from the historical voice data of the person under detection; that is, alarm thresholds are set with the historical voice data as empirical reference. The thresholds thus characterize the person's emotion safety range when no abnormal emotion is present in the audio (relative fundamental frequency variation, speech rate limit, duration of change, etc., where the fundamental frequency refers to the basic tone of the voice, fundamental frequency contrast refers to the change in basic tone between two pieces of speech, and the duration of change refers to how long a change in fundamental frequency lasts; for example, one pitch change lasts 10 minutes, and another pitch change occurring 2 hours later lasts 15 minutes). The value of each index of each audio text segment in the index file is then compared with the corresponding threshold to determine whether that segment lies within the emotion safety range; if it exceeds the range, an emotional disturbance is considered to have occurred.
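Step 2's threshold comparison can be sketched as follows. The index names and numbers are illustrative, and the thresholds are assumed to be calibrated per person from historical call audio, as described above:

```python
def exceeded_indices(segment_metrics, thresholds):
    """Return the emotion indices whose values reach their alarm thresholds.

    Both arguments map index names (speech_rate, amplitude, frequency,
    volume, fundamental_freq) to numbers; thresholds are assumed to be
    set from the person's historical voice data.
    """
    return {name for name, value in segment_metrics.items()
            if name in thresholds and value >= thresholds[name]}

metrics = {"speech_rate": 5.1, "amplitude": 0.6, "fundamental_freq": 240.0}
limits  = {"speech_rate": 4.5, "amplitude": 0.9, "fundamental_freq": 220.0}
flagged = exceeded_indices(metrics, limits)
# flagged → {'speech_rate', 'fundamental_freq'}
```

Any index in the returned set marks the segment as outside the emotion safety range for that index.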
Step 3: when the preset audio information contains silence duration information, for each of the audio segments, determining, from the start time, end time, and duration of each utterance of both parties in the corresponding audio text segment, the silence durations contained in the audio of the person under detection in that segment; and determining the silences that meet the preset duration, together with the start and end times at which they occur.
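Step 3's silence detection reduces to finding gaps between consecutive utterances that meet the preset duration. A minimal sketch, under the assumption that utterances arrive as (start, end) pairs sorted by start time:

```python
def long_silences(utterances, min_silence):
    """Find silent gaps of at least `min_silence` seconds in a call.

    `utterances` is a list of (start, end) times for every utterance
    by either party, assumed sorted by start time. Returns the
    (gap_start, gap_end) pairs of qualifying silences.
    """
    gaps = []
    for (_, prev_end), (next_start, _) in zip(utterances, utterances[1:]):
        if next_start - prev_end >= min_silence:
            gaps.append((prev_end, next_start))
    return gaps

calls = [(0.0, 4.0), (4.5, 9.0), (14.0, 16.0)]
gaps = long_silences(calls, min_silence=3.0)
# gaps → [(9.0, 14.0)]
```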
Further, in this step the preset audio information contains one or more of keyword information, emotion detection information, and silence duration information, and the indices characterizing emotion detection can be one or more of the parties' speech rate, amplitude, frequency, volume, and fundamental frequency. It can be seen that, when retrieving abnormal-emotion audio, the above information can be logically combined into different search conditions as actually needed, for example: abnormal-emotion audio that contains keyword A but not keyword B.
S204: from the audio segments that make up the audio, determining abnormal-emotion audio segments,
where an abnormal-emotion audio segment is an audio segment whose preset audio information characterizing abnormal emotion of the person under detection meets a corresponding preset condition.
Further, based on the results of S203, the abnormal-emotion audio segments can be determined in this step by the speech waveform analysis module. The preset audio information characterizing abnormal emotion of the person under detection can be the preset audio information described above.
This step can be specifically implemented as:
determining, from the audio segments, those that contain preset keywords; and/or
determining, from the audio segments, those in which the value of a preset index characterizing emotion detection reaches the corresponding index threshold; and/or
determining, from the audio segments, those whose silence duration meets the preset duration.
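The "and/or" branches above can be sketched as a single predicate over one analyzed segment. This is a minimal illustration with hypothetical field names for the per-segment analysis results; the OR combination shown here is only one option, since the text notes that arbitrary logical combinations (e.g. keyword A present and keyword B absent) are possible:

```python
def is_abnormal(segment):
    """Decide whether one analyzed segment is an abnormal-emotion segment.

    `segment` maps hypothetical result names to the outputs of the
    three analyses: matched keywords, indices over threshold, and
    long silences. Any non-empty result flags the segment.
    """
    return bool(segment.get("keywords")) \
        or bool(segment.get("exceeded_indices")) \
        or bool(segment.get("long_silences"))

seg = {"keywords": [], "exceeded_indices": {"speech_rate"}, "long_silences": []}
# is_abnormal(seg) → True
```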
S205: when an abnormal-emotion audio segment can be determined, determining that the person under detection corresponding to that audio segment has abnormal emotion.
Embodiment 2:
Corresponding to Embodiment 1, Embodiment 2 of the present invention provides an intelligent emotion guidance method. In Embodiment 1, when it is determined that the person under detection has abnormal emotion, the prior art would guide that abnormal emotion by sending the person a funny picture or a joke, which cannot get to the root source of the abnormal emotion, nor conduct deep interactive guidance of the person. Embodiment 2 therefore provides an intelligent emotion guidance method that, once it is determined that the person under detection has abnormal emotion, conducts deep interactive guidance of the person by way of intelligent interaction, so that the source of the abnormal emotion can be discovered in time and the abnormal emotion relieved well.
The intelligent emotion guidance method provided by Embodiment 2 of the present invention, as shown in Fig. 3, includes the following steps:
S301: obtaining audio of a call between a person under detection and a user.
S302: converting the audio obtained in S301 into structured audio text.
S303: parsing the audio text obtained in S302 and determining the preset audio information contained in each audio text segment corresponding to each audio segment,
where the preset audio information contains at least one of the following: keyword information, emotion detection information, and silence duration information.
S304: from the audio segments that make up the audio, determining abnormal-emotion audio segments,
where an abnormal-emotion audio segment is an audio segment whose preset audio information characterizing abnormal emotion of the person under detection meets a corresponding preset condition.
Further, for the specific implementation of steps S301 to S304, refer to steps S201 to S204 in Embodiment 1.
S305: for each abnormal-emotion audio segment, determining the credibility corresponding to that segment,
where the more indices in the abnormal-emotion audio segment whose values meet the preset conditions, the higher the corresponding credibility; the preset conditions include at least two of the following: containing a preset keyword, the silence duration meeting the preset duration, the parties' speech rate reaching the corresponding index threshold, the amplitude reaching the corresponding index threshold, the frequency reaching the corresponding index threshold, the volume reaching the corresponding index threshold, and the fundamental frequency reaching the corresponding index threshold.
In this step, the credibility corresponding to an abnormal-emotion audio segment can be the credibility with which that segment characterizes abnormal emotion: among the conditions set for judging abnormal-emotion audio segments, the more conditions a segment meets, the higher its credibility as an abnormal-emotion audio segment.
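Under this definition, credibility is simply a count of satisfied conditions. A minimal sketch, with condition names chosen for illustration:

```python
def credibility(segment_flags):
    """Credibility of an abnormal-emotion segment.

    `segment_flags` maps condition names (keyword hit, long silence,
    and each index threshold) to booleans; the score is the number of
    satisfied conditions, so more conditions met means higher credibility.
    """
    return sum(1 for satisfied in segment_flags.values() if satisfied)

score = credibility({"keyword": True, "silence": False,
                     "speech_rate": True, "amplitude": True,
                     "frequency": False, "volume": False,
                     "fundamental_freq": False})
# score → 3
```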
Further, in the intelligent emotion determination system provided by the embodiments, this step may be executed by the early-warning push module. After the speech waveform analysis module determines the abnormal-emotion audio segments, it can send the corresponding audio text segments to the early-warning push module, which determines the credibility of each received audio text segment.
S306: based on the credibility corresponding to each abnormal-emotion audio segment, determining the optimal abnormal-emotion audio segment from among the abnormal-emotion audio segments.
Further, in the intelligent emotion determination system provided by the embodiments, this step may be executed by the early-warning push module, which can determine the optimal abnormal-emotion audio segment from the received audio text segments corresponding to the abnormal-emotion audio segments, so that the content of the subsequent intelligent interaction with the person under detection can be determined from the optimal abnormal-emotion audio segment.
This step can be specifically implemented as the following steps:
judging whether, among the abnormal-emotion audio segments, there is one segment whose credibility is the single highest;
if there is, taking that abnormal-emotion audio segment as the optimal abnormal-emotion audio segment;
otherwise, if multiple abnormal-emotion audio segments have equal and highest credibility, judging whether among them there is one segment whose occurrence time corresponds to the longest cumulative working duration;
if there is, taking that abnormal-emotion audio segment as the optimal abnormal-emotion audio segment;
otherwise, if multiple abnormal-emotion audio segments have equal and longest cumulative working durations, determining, among the occurrence times of those segments, the abnormal-emotion audio segment whose corresponding time range has the historical emotion level characterizing the worst emotion;
taking that abnormal-emotion audio segment as the optimal abnormal-emotion audio segment.
Further, when determining the optimal abnormal emotion audio information section, credibility can be considered first. If a plurality of audio information sections have equal and highest credibility, the cumulative working durations of those audio information sections can then be considered, where the cumulative working duration is the elapsed working time from the moment the person to be detected most recently started work to the occurrence time of the corresponding audio information section. For example, suppose there are a first audio information section and a second audio information section whose credibilities are equal and highest, the first audio information section occurring from 9:00 to 9:30 a.m. (the morning working hours start at 9:00, so the corresponding cumulative working duration is 0 to half an hour), and the second audio information section occurring from 4:00 to 4:20 p.m. (the afternoon working hours start at 1:00, so the corresponding cumulative working duration is 3 hours to 3 hours 20 minutes); judged by cumulative working duration, the second audio information section is the optimal audio information section. If the cumulative working durations corresponding to a plurality of audio information sections are also equal, the history emotion levels in the time ranges corresponding to the occurrence times of those abnormal emotion audio information sections can be considered. For example, suppose the first and second audio information sections have equal and highest credibility and identical cumulative working durations, and the history emotion level in the time range (9:00~10:00) corresponding to the occurrence time of the first audio information section indicates less severe emotion than the history emotion level in the time range (4:00~6:00) corresponding to the occurrence time of the second audio information section; then the second audio information section, whose history emotion is the worst, is determined to be the optimal abnormal emotion audio information section.
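The three-stage tie-break described above (credibility first, then cumulative working duration, then history emotion level) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the section fields and the convention that a larger history emotion level means worse emotion are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Section:
    name: str
    credibility: int     # number of preset conditions the section meets
    work_minutes: int    # cumulative working duration at occurrence time
    history_level: int   # history emotion level for the section's time range
                         # (assumed: larger value = worse emotion)

def pick_optimal(sections):
    """Select the optimal abnormal emotion section by cascaded tie-breaks:
    highest credibility, then longest cumulative working duration,
    then worst (largest) history emotion level."""
    return max(sections,
               key=lambda s: (s.credibility, s.work_minutes, s.history_level))

# The example from the text: credibility is tied, so the afternoon section
# (3h20m into the shift) beats the morning one (30m into the shift).
first = Section("9:00-9:30am", credibility=3, work_minutes=30, history_level=2)
second = Section("4:00-4:20pm", credibility=3, work_minutes=200, history_level=2)
print(pick_optimal([first, second]).name)  # → 4:00-4:20pm
```

A lexicographic tuple key reproduces the cascade exactly: later fields are consulted only when all earlier fields are tied.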
S307: determine the content of the intelligent communication according to the kinds of abnormal emotion characterized by the indexes whose values meet the preset conditions in the optimal abnormal emotion audio information section determined in S306.
Further, different indexes characterize different kinds of abnormal emotion. For example, the keyword index may characterize abnormal emotion of the person to be detected arising from the user, while the silence-duration index may characterize abnormal emotion arising from fatigue or the like. Therefore, the content of the intelligent communication can be determined according to the kinds of abnormal emotion characterized by the indexes whose values meet the preset conditions in the optimal abnormal emotion audio information section.
Further, for the intelligent emotion determination system provided in the embodiment of the present invention, the execution body of this step can be the early-warning pushing module.
S308: send an intelligent communication dialog box carrying the determined intelligent communication content to the platform where the person to be detected with abnormal emotion is located, and receive the response messages of the person to be detected.
Further, for the intelligent emotion determination system provided in the embodiment of the present invention, the execution body of this step can be the intelligent communication module: the early-warning pushing module determines the content of the intelligent communication, and the intelligent communication module communicates with the person to be detected.
The intelligent communication dialogue can be realized through an enriched language database, using keyword search technology to give intelligent responses to the input of the employee to be detected. In the language library, the same keyword corresponds to a plurality of reply data; when an intelligent response is made, one of the plural reply data can be selected at random for output.
The communication forms of the intelligent communication module can be various, for example a written communication mode and a speech communication mode. In a specific implementation, the communication mode is confirmed according to the selection of the customer-service staff, and before this step is performed it is determined that the employee to be detected is currently in an idle, non-working state. In a specific implementation, a do-not-disturb button can be provided, with which the employee to be detected can indicate whether he or she is currently in a working or idle state, so that voice output does not affect normal follow-up customer-service work.
Further, the intelligent communication dialogue in the method provided by the embodiment of the present invention is opened in a passive mode, pushed automatically by the emotion warning module; an active mode can also be opened as needed, in which the person to be detected chooses to enter autonomously, to realize self-directed emotion management.
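The language-library lookup described above (one keyword mapped to several candidate replies, one chosen at random) could be sketched as follows; the library contents, function name, and fallback reply are hypothetical, not taken from the patent.

```python
import random

# Hypothetical language library: each keyword maps to several reply candidates.
LANGUAGE_LIBRARY = {
    "tired": ["Would you like to take a short break?",
              "Remember to rest your eyes for a moment."],
    "angry": ["That call sounded difficult; thanks for your patience.",
              "Take a deep breath - you handled it well."],
}

def intelligent_reply(message: str, rng: random.Random = random) -> str:
    """Search the input for known keywords and return one of the
    corresponding replies at random; fall back to a generic prompt."""
    for keyword, replies in LANGUAGE_LIBRARY.items():
        if keyword in message.lower():
            return rng.choice(replies)
    return "How are you feeling right now?"

print(intelligent_reply("I am so tired today"))
```

Mapping one keyword to several replies and sampling at random, as the text describes, keeps repeated exchanges from feeling scripted.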
S309: for each received response message, judge whether the response message includes a specified keyword.
Further, the specified keywords in this step may be the same as, or different from, the preset keywords in the above steps. Each received response message can be compared with the specified keywords.
Further, for the intelligent emotion determination system provided in the embodiment of the present invention, the execution body of this step can be the sensitive-language collection module.
S310: after the intelligent communication ends, determine the current emotion level for the person to be detected according to the severity and the number of occurrences of the specified keywords included in the received response messages.
During the intelligent communication, the system may receive a plurality of response messages sent by the person to be detected. The current emotion level can be determined for the person according to the severity and the number of occurrences of the specified keywords included in those response messages, to be used as the history emotion level in the next abnormal-emotion determination process.
Further, for the intelligent emotion determination system provided in the embodiment of the present invention, the execution body of this step can be the sensitive-language collection module.
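One way to turn keyword severity and occurrence counts into a single emotion level, as S310 describes, is a severity-weighted count bucketed into levels. The keyword weights, level thresholds, and the convention that a higher level means worse emotion are all assumptions for illustration.

```python
# Hypothetical severity weights for specified keywords found in responses.
KEYWORD_SEVERITY = {"annoyed": 1, "fed up": 2, "quit": 3}

def emotion_level(responses):
    """Score the session as the severity-weighted count of specified
    keywords across all response messages, bucketed into levels 0-3."""
    score = 0
    for msg in responses:
        text = msg.lower()
        for kw, severity in KEYWORD_SEVERITY.items():
            score += severity * text.count(kw)
    # Bucket the raw score into a small number of levels (thresholds assumed).
    if score == 0:
        return 0
    if score <= 2:
        return 1
    if score <= 5:
        return 2
    return 3

print(emotion_level(["I'm a bit annoyed", "really fed up today"]))  # → 2
```

The resulting level can then be stored as the history emotion level consulted by the tie-break in S306.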
Further, the intelligent emotion determination system provided by the embodiment of the present invention can also include a communication injection module, which provides a communication injection way: a background user with authority is allowed to choose whether to inject into the system while the person to be detected is carrying out the intelligent dialogue. After the injection operation, a human can take the place of the intelligent communication system and covertly communicate with the person to be detected in real time; by choosing to exit injection, the system automatically switches back to the intelligent communication system. The person to be detected is unaware of the switch throughout the communication process.
Further, during the intelligent communication with the person to be detected, when the response of the person to be detected is a question needing routine processing (for example, consulting the date or time), the corresponding program can be called to process the question, and the processing result is returned to the person to be detected.
Further, the flow in which the intelligent communication module, the communication injection module and the sensitive-language collection module work in coordination can be as shown in Fig. 4, which depicts the process of carrying out intelligent communication with a user. Step S401 can be equivalent to receiving the response messages of the person to be detected in step S308; step S402 can be equivalent to step S309; steps S403 and S404 can be implemented by the sensitive-language collection module; steps S405 and S408 can be implemented by the sensitive-language collection module; steps S406 and S407 can be implemented by the communication injection module; and steps S409 to S414 can be implemented by the intelligent communication module, wherein the questions needing routine processing in step S409 are, for example, consultations of the date or time.
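The routine-question handling mentioned above (date/time consultations answered by a program rather than by the dialogue library) might be dispatched as in the sketch below; the matching rules and function name are illustrative assumptions.

```python
import datetime

def handle_routine(message: str):
    """Return a program-generated answer for routine questions
    (date/time consultations); None means the dialogue flow continues."""
    text = message.lower()
    now = datetime.datetime.now()
    if "date" in text:
        return now.strftime("Today is %Y-%m-%d.")
    if "time" in text:
        return now.strftime("It is %H:%M.")
    return None  # not a routine question

print(handle_routine("What is the date today?"))
```

The dialogue loop would call this first and fall through to the language-library reply when it returns None.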
Based on the same inventive concept, the embodiment of the present invention also provides an intelligent emotion determination system. Since the principle by which these devices and the system solve the problem is similar to the aforementioned intelligent emotion determination method, the implementation of the devices and the system may refer to the implementation of the preceding method, and repeated parts are not described again.
A first intelligent emotion determination system provided by an embodiment of the present invention, as shown in Fig. 5, includes:
an audio collection module 501, configured to obtain the audio information of a call between a person to be detected and a user;
a speech waveform analysis module 502, configured to determine an abnormal emotion audio information section from the audio information sections constituting the audio information, wherein the abnormal emotion audio information section is an audio information section in which the preset audio information included for characterizing the abnormal emotion of the person to be detected meets a corresponding preset condition; and, when the abnormal emotion audio information section can be determined, to determine that the person to be detected corresponding to the audio information section has abnormal emotion.
Further, the audio collection module 501 is also configured to, before the speech waveform analysis module 502 determines the abnormal emotion audio information section from the audio information sections constituting the audio information, convert the audio information into a structured audio text file; and to parse the audio text file, determining the preset audio information included in each audio text section respectively corresponding to each audio information section in the audio text file, wherein the preset audio information includes at least one of the following: keyword information, emotion detection information and silence duration information;
the speech waveform analysis module 502 is specifically configured to determine, from the audio information sections, audio information sections including a preset keyword; and/or audio information sections in which the index value of a preset index characterizing emotion detection reaches a corresponding index threshold; and/or audio information sections in which the silence duration meets a preset duration.
Further, the audio collection module 501 is specifically configured to, when the preset audio information includes keyword information, for each audio information section, compare the audio text section corresponding to the audio information section with the preset keywords, and determine the preset keywords included in the audio text section together with the start time and end time at which each preset keyword appears;
when the preset audio information includes emotion detection information, for each audio information section, determine the index values of one or more of the following indexes characterizing emotion detection in the audio text section corresponding to the audio information section: the speech-rate information, amplitude information, frequency information, volume information and fundamental-frequency information of the two parties to the call; and determine the indexes whose values reach the corresponding index thresholds in the audio text section, together with the start time and end time in the audio text section at which those index values reach the corresponding index thresholds;
when the preset audio information includes silence duration information, for each audio information section, determine, according to the start time, end time and duration of the audio corresponding to each utterance of the two parties in the audio text section corresponding to the audio information section, the silence durations included in the audio corresponding to the person to be detected in the audio text section; and determine the silence durations in the audio text section that meet the preset duration, together with the start time and end time at which each such silence duration occurs.
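The silence-duration check can be sketched from the per-utterance start/end times described above: a silence is taken here as the gap between consecutive utterances, reported when it meets the preset duration. The tuple layout and the choice to measure gaps between any two consecutive utterances are simplifying assumptions.

```python
def long_silences(utterances, preset_duration):
    """Given (speaker, start, end) tuples sorted by start time, return the
    (start, end) gaps between consecutive utterances whose length meets
    the preset duration. Times are in seconds."""
    silences = []
    for (_, _, prev_end), (_, next_start, _) in zip(utterances, utterances[1:]):
        if next_start - prev_end >= preset_duration:
            silences.append((prev_end, next_start))
    return silences

# A user question at 0-4 s answered by the agent only at 12 s: an 8 s silence.
utts = [("user", 0.0, 4.0), ("agent", 12.0, 15.0)]
print(long_silences(utts, preset_duration=5.0))  # → [(4.0, 12.0)]
```

The returned start/end pairs correspond to the "start time and end time at which each such silence duration occurs" in the text.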
Further, the system also includes an early-warning pushing module 503;
the early-warning pushing module 503 is configured to, after the speech waveform analysis module 502 determines the abnormal emotion audio information sections from the audio information sections constituting the audio information, determine, for each abnormal emotion audio information section, the credibility corresponding to the abnormal emotion audio information section, wherein the more indexes in the abnormal emotion audio information section whose values meet the preset conditions, the higher the corresponding credibility, the preset conditions including at least two of the following: a preset keyword is included, the silence duration meets the preset duration, the speech-rate information of the two parties reaches the corresponding index threshold, the amplitude information reaches the corresponding index threshold, the frequency information reaches the corresponding index threshold, the volume information reaches the corresponding index threshold, and the fundamental-frequency information reaches the corresponding index threshold; and to determine, based on the credibility corresponding to each abnormal emotion audio information section, the optimal abnormal emotion audio information section from the abnormal emotion audio information sections.
Further, the early-warning pushing module 503 is specifically configured to judge whether, among the abnormal emotion audio information sections, there is one abnormal emotion audio information section whose credibility is highest; if so, to determine that abnormal emotion audio information section to be the optimal abnormal emotion audio information section; otherwise, if a plurality of abnormal emotion audio information sections have equal and highest credibility, to judge whether, among the plurality of abnormal emotion audio information sections, there is one whose occurrence time corresponds to the longest cumulative working duration; if so, to determine that abnormal emotion audio information section to be the optimal abnormal emotion audio information section; otherwise, if a plurality of abnormal emotion audio information sections have equal and longest corresponding cumulative working durations, to determine, among the occurrence times of the plurality of abnormal emotion audio information sections, the abnormal emotion audio information section whose history emotion level in the corresponding time range characterizes the worst emotion, and to determine that abnormal emotion audio information section to be the optimal abnormal emotion audio information section.
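The credibility rule above (the more preset conditions a section's index values satisfy, the higher its credibility) reduces to counting satisfied conditions. The condition names and the flag-dictionary representation below are assumptions for illustration only.

```python
# Hypothetical flags for the preset conditions named in the text.
CONDITIONS = ["has_keyword", "silence_met", "speech_rate_hit", "amplitude_hit",
              "frequency_hit", "volume_hit", "fundamental_freq_hit"]

def credibility(section_flags: dict) -> int:
    """Credibility = how many of the preset conditions the section meets
    (per the text: the more indexes meeting their conditions, the higher
    the credibility)."""
    return sum(1 for c in CONDITIONS if section_flags.get(c))

flags = {"has_keyword": True, "silence_met": True, "amplitude_hit": True}
print(credibility(flags))  # → 3
```

This integer score is what the early-warning pushing module would compare, falling back to cumulative working duration and history emotion level on ties.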
Further, the system also includes an intelligent communication module 504 and a sensitive-language collection module 505;
the early-warning pushing module 503 is also configured to, after the optimal abnormal emotion audio information section is determined, determine the content of the intelligent communication according to the kinds of abnormal emotion characterized by the indexes whose values meet the preset conditions in the optimal abnormal emotion audio information section;
the intelligent communication module 504 is configured to send an intelligent communication dialog box carrying the determined intelligent communication content to the platform where the person to be detected with abnormal emotion is located, and to receive the response messages of the person to be detected;
the sensitive-language collection module 505 is configured to, for each received response message, judge whether the response message includes a specified keyword; and, after the intelligent communication ends, to determine the current emotion level for the person to be detected according to the severity and the number of occurrences of the specified keywords included in the received response messages.
A second intelligent emotion determination system provided by an embodiment of the present invention, as shown in Fig. 6, compared with the first intelligent emotion determination system of Fig. 5, further includes a communication injection module 601, which is configured to, during the intelligent communication, when injection is selected to start, receive the voice of a background user with authority and carry out real-time interaction with the person to be detected; and, when injection is not selected or is exited, to have the intelligent communication system carry out real-time interaction with the person to be detected.
The functions of the above units may correspond to the corresponding processing steps in the flows shown in Figs. 1 to 4, and are not described again here.
In the intelligent emotion determination method and system provided by the embodiments of the present invention, the audio information of a call between a person to be detected and a user is obtained; an abnormal emotion audio information section is determined from the audio information sections constituting the audio information, wherein the abnormal emotion audio information section is an audio information section in which the preset audio information included for characterizing the abnormal emotion of the person to be detected meets a corresponding preset condition; and, when the abnormal emotion audio information section can be determined, it is determined that the person to be detected corresponding to the audio information section has abnormal emotion. In the intelligent emotion determination method provided by the embodiment of the present invention, whether the person to be detected has abnormal emotion is determined by obtaining and analyzing the call audio information of the person to be detected. Since a person's language tends to embody his or her current emotion, the audio information can objectively reflect the current emotional state of the person to be detected. Compared with the prior art, in which the acquisition of abnormal-emotion data relies on the customer-service staff's own data entry and on the collection of physiological data by professional equipment, this is more objective, and the resulting determination of whether the person to be detected has abnormal emotion is more accurate. Moreover, in the prior art the acquisition of abnormal-emotion data needs to be completed with the aid of professional external equipment, which increases cost.
Through the above description of the embodiments, those skilled in the art can clearly understand that the embodiments of the present invention can be realized by hardware, or by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the embodiments of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB flash disk, a portable hard drive, etc.) and includes instructions for causing a computer device (which can be a personal computer, a server, a network device, etc.) to perform the method described in each embodiment of the present invention.
Those skilled in the art will appreciate that the accompanying drawings are schematic diagrams of a preferred embodiment, and that the modules or flows in the drawings are not necessarily required to implement the present invention.
Those skilled in the art will appreciate that the modules in the devices of the embodiments can be distributed in the devices of the embodiments as described, or can be correspondingly changed and placed in one or more devices other than the present embodiment. The modules of the above embodiments can be merged into one module, or further split into multiple sub-modules.
The serial numbers of the embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technology, the present invention is also intended to include these changes and modifications.

Claims (12)

1. An intelligent emotion determination method, characterized by comprising:
obtaining the audio information of a call between a person to be detected and a user;
determining, from the audio information sections constituting the audio information, an abnormal emotion audio information section,
wherein the abnormal emotion audio information section is an audio information section in which the preset audio information included for characterizing the abnormal emotion of the person to be detected meets a corresponding preset condition;
when the abnormal emotion audio information section can be determined, determining that the person to be detected corresponding to the audio information section has abnormal emotion.
2. The method of claim 1, characterized in that, before determining the abnormal emotion audio information section from the audio information sections constituting the audio information, the method further comprises:
converting the audio information into a structured audio text file;
parsing the audio text file, and determining the preset audio information included in each audio text section respectively corresponding to each audio information section in the audio text file, wherein the preset audio information includes at least one of the following: keyword information, emotion detection information and silence duration information;
determining the abnormal emotion audio information section from the audio information sections constituting the audio information specifically comprises:
determining, from the audio information sections, audio information sections including a preset keyword; and/or
determining, from the audio information sections, audio information sections in which the index value of a preset index characterizing emotion detection reaches a corresponding index threshold; and/or
determining, from the audio information sections, audio information sections in which the silence duration meets a preset duration.
3. The method of claim 2, characterized in that parsing the audio text file and determining the preset audio information included in each audio text section respectively corresponding to each audio information section in the audio text file specifically comprises:
when the preset audio information includes keyword information, for each audio information section, comparing the audio text section corresponding to the audio information section with the preset keywords; and determining the preset keywords included in the audio text section, together with the start time and end time at which each preset keyword appears;
when the preset audio information includes emotion detection information, for each audio information section, determining the index values of one or more of the following indexes characterizing emotion detection in the audio text section corresponding to the audio information section: the speech-rate information, amplitude information, frequency information, volume information and fundamental-frequency information of the two parties to the call; and determining the indexes whose values reach the corresponding index thresholds in the audio text section, together with the start time and end time in the audio text section at which those index values reach the corresponding index thresholds;
when the preset audio information includes silence duration information, for each audio information section, determining, according to the start time, end time and duration of the audio corresponding to each utterance of the two parties in the audio text section corresponding to the audio information section, the silence durations included in the audio corresponding to the person to be detected in the audio text section; and determining the silence durations in the audio text section that meet the preset duration, together with the start time and end time at which each such silence duration occurs.
4. The method of any one of claims 1-3, characterized in that, after determining the abnormal emotion audio information sections from the audio information sections constituting the audio information, the method further comprises:
for each abnormal emotion audio information section, determining the credibility corresponding to the abnormal emotion audio information section,
wherein the more indexes in the abnormal emotion audio information section whose values meet the preset conditions, the higher the corresponding credibility, the preset conditions including at least two of the following: a preset keyword is included, the silence duration meets the preset duration, the speech-rate information of the two parties reaches the corresponding index threshold, the amplitude information reaches the corresponding index threshold, the frequency information reaches the corresponding index threshold, the volume information reaches the corresponding index threshold, and the fundamental-frequency information reaches the corresponding index threshold;
determining, based on the credibility corresponding to each abnormal emotion audio information section, the optimal abnormal emotion audio information section from the abnormal emotion audio information sections.
5. The method of claim 4, characterized in that determining, based on the credibility corresponding to each abnormal emotion audio information section, the optimal abnormal emotion audio information section from the abnormal emotion audio information sections specifically comprises:
judging whether, among the abnormal emotion audio information sections, there is one abnormal emotion audio information section whose credibility is highest;
if so, determining that abnormal emotion audio information section to be the optimal abnormal emotion audio information section;
otherwise, if a plurality of abnormal emotion audio information sections have equal and highest credibility, judging whether, among the plurality of abnormal emotion audio information sections, there is one whose occurrence time corresponds to the longest cumulative working duration;
if so, determining that abnormal emotion audio information section to be the optimal abnormal emotion audio information section;
otherwise, if a plurality of abnormal emotion audio information sections have equal and longest corresponding cumulative working durations, determining, among the occurrence times of the plurality of abnormal emotion audio information sections, the abnormal emotion audio information section whose history emotion level in the corresponding time range characterizes the worst emotion;
determining that abnormal emotion audio information section to be the optimal abnormal emotion audio information section.
6. The method of claim 4, characterized in that, after determining the optimal abnormal emotion audio information section, the method further comprises:
determining the content of the intelligent communication according to the kinds of abnormal emotion characterized by the indexes whose values meet the preset conditions in the optimal abnormal emotion audio information section;
sending an intelligent communication dialog box carrying the determined intelligent communication content to the platform where the person to be detected with abnormal emotion is located, and receiving the response messages of the person to be detected;
for each received response message, judging whether the response message includes a specified keyword;
after the intelligent communication ends, determining the current emotion level for the person to be detected according to the severity and the number of occurrences of the specified keywords included in the received response messages.
7. An intelligent emotion determination system, characterized by comprising:
an audio collection module, configured to obtain the audio information of a call between a person to be detected and a user;
a speech waveform analysis module, configured to determine an abnormal emotion audio information section from the audio information sections constituting the audio information, wherein the abnormal emotion audio information section is an audio information section in which the preset audio information included for characterizing the abnormal emotion of the person to be detected meets a corresponding preset condition; and, when the abnormal emotion audio information section can be determined, to determine that the person to be detected corresponding to the audio information section has abnormal emotion.
8. The system of claim 7, characterized in that the audio collection module is further configured to, before the speech waveform analysis module determines the abnormal emotion audio information section from the audio information sections constituting the audio information, convert the audio information into a structured audio text file; and to parse the audio text file, determining the preset audio information included in each audio text section respectively corresponding to each audio information section in the audio text file, wherein the preset audio information includes at least one of the following: keyword information, emotion detection information and silence duration information;
the speech waveform analysis module is specifically configured to determine, from the audio information sections, audio information sections including a preset keyword; and/or audio information sections in which the index value of a preset index characterizing emotion detection reaches a corresponding index threshold; and/or audio information sections in which the silence duration meets a preset duration.
9. system as claimed in claim 8, it is characterised in that the audio collection module is concrete to use In when preset audio packet when containing keyword message, believe for each audio frequency in each audio-frequency information section Breath section, the audio-frequency information section corresponding audio frequency text chunk is compared with preset keyword;Determine the audio frequency The preset keyword included in text chunk, and initial time and the termination time of preset keyword appearance;
When the preset audio packet detection information containing emotion, for each sound in each audio-frequency information section Frequency message segment, determine characterize in the corresponding audio frequency text chunk of the audio-frequency information section emotion Testing index as next The desired value of individual or multiple indexs:The word speed information of both call sides, amplitude information, frequency information, volume letter Breath, fundamental frequency information;In determining the audio frequency text chunk, desired value reaches the These parameters for corresponding to metrics-thresholds, with And the desired value of These parameters reaches the initial time occurred in the audio frequency text chunk during correspondence metrics-thresholds With the time of termination;
when the preset audio information contains silence duration information, determine, for each audio information segment, the silence durations contained in the audio of the person to be detected in the corresponding audio text segment, according to the start time, end time, and duration of the audio corresponding to each utterance of the two parties in that audio text segment; and determine the silence durations in the audio text segment that meet the preset duration, together with the start time and end time at which each such silence occurs.
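Claim 9 repeatedly pairs each detected feature with its start and end time. A minimal sketch of the keyword case, assuming word-level timestamps `(word, start, end)` are available from the transcription (an assumption; the patent does not specify the transcript format):

```python
def keyword_occurrences(words, keywords):
    """Given word-level timestamps [(word, start, end), ...] for one
    audio text segment, return each preset keyword found together
    with the start and end time at which it occurs."""
    hits = []
    for word, start, end in words:
        if word in keywords:
            hits.append((word, start, end))
    return hits
```

The emotion-indicator and silence cases would produce analogous `(feature, start, end)` tuples from frame-level indicator values and inter-utterance gaps.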
10. The system of any one of claims 7-9, further comprising: an early-warning pushing module;
The early-warning pushing module is configured to: after the speech waveform analysis module determines the abnormal-emotion audio information segments from the audio information segments constituting the audio information, determine, for each abnormal-emotion audio information segment, its corresponding credibility, wherein the more indicators of the segment whose values meet a preset condition, the higher the credibility; the preset condition comprises at least two of the following: containing a preset keyword; the silence duration meeting the preset duration; the speech rate of the two parties reaching the corresponding threshold; the amplitude reaching the corresponding threshold; the frequency reaching the corresponding threshold; the volume reaching the corresponding threshold; and the fundamental frequency reaching the corresponding threshold; and determine an optimal abnormal-emotion audio information segment from the abnormal-emotion audio information segments based on their respective credibilities.
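Claim 10 only requires credibility to grow with the number of preset conditions a segment meets, so a simple count is one valid sketch (the weighting scheme is an assumption; the patent does not prescribe one):

```python
def credibility(conditions_met):
    """Credibility of one abnormal-emotion segment as the count of
    preset conditions it meets (keyword hit, long silence, speech
    rate / amplitude / frequency / volume / fundamental frequency
    over threshold). More conditions met -> higher credibility."""
    return sum(1 for met in conditions_met.values() if met)
```

A production system might weight conditions differently; the claim only demands monotonicity in the number of conditions met.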
11. The system of claim 10, wherein the early-warning pushing module is specifically configured to: judge whether there is, among the abnormal-emotion audio information segments, one segment whose credibility is highest; if so, determine that segment as the optimal abnormal-emotion audio information segment; otherwise, if multiple segments have equal and highest credibility, judge whether among them there is one segment whose time of occurrence corresponds to the longest accumulated call duration; if so, determine that segment as the optimal abnormal-emotion audio information segment; otherwise, if multiple segments have equal and longest accumulated call durations, determine, among the times of occurrence of those segments, the segment within whose corresponding time period the emotion characterized by the historical emotion levels is worst, and determine that segment as the optimal abnormal-emotion audio information segment.
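The three-stage tie-break of claim 11 (highest credibility, then longest accumulated call duration, then worst historical emotion) maps naturally onto successive filtering. The dict keys below are illustrative names, and "higher severity score = worse emotion" is an assumption:

```python
def pick_optimal(segments):
    """Select the optimal abnormal-emotion segment by the claim 11
    tie-break: credibility, then accumulated call duration, then
    worst emotion per historical emotion levels."""
    # Stage 1: keep only segments with the highest credibility.
    best_cred = max(s["credibility"] for s in segments)
    tied = [s for s in segments if s["credibility"] == best_cred]
    if len(tied) == 1:
        return tied[0]
    # Stage 2: among ties, keep the longest accumulated call duration.
    best_dur = max(s["call_duration"] for s in tied)
    tied = [s for s in tied if s["call_duration"] == best_dur]
    if len(tied) == 1:
        return tied[0]
    # Stage 3: among remaining ties, pick the segment whose time
    # period shows the worst historical emotion (highest severity).
    return max(tied, key=lambda s: s["history_emotion_severity"])
```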
12. The system of claim 10, further comprising: an intelligent interaction module and a sensitive-language collection module;
The early-warning pushing module is further configured to: after the optimal abnormal-emotion audio information segment is determined, determine the content of the intelligent interaction according to the type of abnormal emotion characterized by the indicators of that segment whose values meet the preset condition;
The intelligent interaction module is configured to send, to the platform where the person to be detected who exhibits the abnormal emotion is located, an intelligent interaction dialog box carrying the determined interaction content, and to receive response messages from the person to be detected;
The sensitive-language collection module is configured to: judge, for each received response message, whether the message contains a nominal keyword; and, after the intelligent interaction ends, determine the current emotion level of the person to be detected according to the severity and number of the nominal keywords contained in the received response messages.
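Claim 12's final step scores emotion from the severity and count of nominal keywords across all response messages. A minimal sketch, assuming a per-keyword severity weight and illustrative score buckets (both are assumptions; the patent specifies neither):

```python
def emotion_level(responses, keyword_severity):
    """Aggregate severity * occurrence count of nominal keywords
    over all received response messages, then bucket the total
    into an emotion level. Bucket boundaries are illustrative."""
    score = 0
    for msg in responses:
        for keyword, severity in keyword_severity.items():
            score += msg.count(keyword) * severity
    if score == 0:
        return "normal"
    return "moderate" if score < 10 else "severe"
```

For example, a single occurrence of a severity-5 keyword yields a total of 5, which falls in the "moderate" bucket under these illustrative boundaries.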
CN201510613689.XA 2015-09-23 2015-09-23 Intelligent emotion determining method and system Active CN106548788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510613689.XA CN106548788B (en) 2015-09-23 2015-09-23 Intelligent emotion determining method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510613689.XA CN106548788B (en) 2015-09-23 2015-09-23 Intelligent emotion determining method and system

Publications (2)

Publication Number Publication Date
CN106548788A true CN106548788A (en) 2017-03-29
CN106548788B CN106548788B (en) 2020-01-07

Family

ID=58365640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510613689.XA Active CN106548788B (en) 2015-09-23 2015-09-23 Intelligent emotion determining method and system

Country Status (1)

Country Link
CN (1) CN106548788B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107293309A (en) * 2017-05-19 2017-10-24 四川新网银行股份有限公司 A kind of method that lifting public sentiment monitoring efficiency is analyzed based on customer anger
CN107609736A (en) * 2017-08-09 2018-01-19 广州思涵信息科技有限公司 A kind of teaching diagnostic analysis system and method for integrated application artificial intelligence technology
CN108648768A (en) * 2018-04-16 2018-10-12 广州市菲玛尔咨询服务有限公司 A kind of consulting recommendation method and its management system
CN108735233A (en) * 2017-04-24 2018-11-02 北京理工大学 A kind of personality recognition methods and device
CN109087670A (en) * 2018-08-30 2018-12-25 西安闻泰电子科技有限公司 Mood analysis method, system, server and storage medium
CN109145101A (en) * 2018-09-06 2019-01-04 北京京东尚科信息技术有限公司 Interactive method, device and computer readable storage medium
CN109587360A (en) * 2018-11-12 2019-04-05 平安科技(深圳)有限公司 Electronic device should talk with art recommended method and computer readable storage medium
CN109785123A (en) * 2019-01-21 2019-05-21 中国平安财产保险股份有限公司 A kind of business handling assisted method, device and terminal device
CN110393539A (en) * 2019-06-21 2019-11-01 合肥工业大学 Psychological abnormality detection method, device, storage medium and electronic equipment
CN111599379A (en) * 2020-05-09 2020-08-28 北京南师信息技术有限公司 Conflict early warning method, device, equipment, readable storage medium and triage system
CN112235468A (en) * 2020-10-16 2021-01-15 绍兴市寅川软件开发有限公司 Audio processing method and system for voice customer service evaluation
CN113053385A (en) * 2021-03-30 2021-06-29 中国工商银行股份有限公司 Abnormal emotion detection method and device
CN113222458A (en) * 2021-05-31 2021-08-06 上海工程技术大学 Urban rail transit driver safety risk assessment model and system
CN113515636A (en) * 2021-09-13 2021-10-19 阿里健康科技(中国)有限公司 Text data processing method and electronic equipment
CN114693319A (en) * 2022-04-21 2022-07-01 广州美保科技有限公司 Customer service quality management improving system and method based on artificial intelligence
CN115547501A (en) * 2022-11-24 2022-12-30 国能大渡河大数据服务有限公司 Employee emotion perception method and system combining working characteristics

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102623009A (en) * 2012-03-02 2012-08-01 安徽科大讯飞信息技术股份有限公司 Abnormal emotion automatic detection and extraction method and system on basis of short-time analysis
CN102625005A (en) * 2012-03-05 2012-08-01 广东天波信息技术股份有限公司 Call center system with function of real-timely monitoring service quality and implement method of call center system
CN102831891A (en) * 2011-06-13 2012-12-19 富士通株式会社 Processing method and system for voice data
CN103491251A (en) * 2013-09-24 2014-01-01 深圳市金立通信设备有限公司 Method and terminal for monitoring user calls
CN103634472A (en) * 2013-12-06 2014-03-12 惠州Tcl移动通信有限公司 Method, system and mobile phone for judging mood and character of user according to call voice
CN104036776A (en) * 2014-05-22 2014-09-10 毛峡 Speech emotion identification method applied to mobile terminal


Also Published As

Publication number Publication date
CN106548788B (en) 2020-01-07

Similar Documents

Publication Publication Date Title
CN106548788A (en) Intelligent emotion determining method and system
Low et al. Automated assessment of psychiatric disorders using speech: A systematic review
US8145474B1 (en) Computer mediated natural language based communication augmented by arbitrary and flexibly assigned personality classification systems
Ivanov et al. Recognition of personality traits from human spoken conversations
Klaylat et al. Emotion recognition in Arabic speech
US20190253558A1 (en) System and method to automatically monitor service level agreement compliance in call centers
CN114503115A (en) Generating rich action items
US8694307B2 (en) Method and apparatus for temporal speech scoring
US8687792B2 (en) System and method for dialog management within a call handling system
US11461863B2 (en) Idea assessment and landscape mapping
Burgoon et al. Choosing between micro and macro nonverbal measurement: Application to selected vocalic and kinesic indices
JP2021036292A (en) Information processing program, information processing method, and information processing device
Valenti et al. Using topic modeling to infer the emotional state of people living with Parkinson’s disease
Huber et al. Automatically analyzing brainstorming language behavior with Meeter
US10602974B1 (en) Detection and management of memory impairment
Irvine et al. Rewarding chatbots for real-world engagement with millions of users
CN114242109A (en) Intelligent outbound method and device based on emotion recognition, electronic equipment and medium
Jordanous et al. What makes a musical improvisation creative?
Khawaja et al. Think before you talk: An empirical study of relationship between speech pauses and cognitive load
Zhou et al. The role of different types of conversations for meeting success
Guha et al. A sentiment analysis of the PhD experience evidenced on Twitter
JP2012242528A (en) Talk evaluation device, method and program
KR20230047104A (en) Digital Devices and Applications for the Treatment of Social Communication Disorders
Züger et al. Sensing and supporting software developers' focus
Tran et al. VPASS: Voice Privacy Assistant System for Monitoring In-home Voice Commands

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant