JP2004037989A - Voice reception system - Google Patents

Voice reception system

Info

Publication number
JP2004037989A
JP2004037989A (application JP2002197134A)
Authority
JP
Japan
Prior art keywords
customer
voice
word
unit
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2002197134A
Other languages
Japanese (ja)
Inventor
Yasushi Tokunaga
徳永 裕史
Ryuichi Matsuzaki
松崎 隆一
Takashi Inoue
井上 貴司
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP2002197134A priority Critical patent/JP2004037989A/en
Publication of JP2004037989A publication Critical patent/JP2004037989A/en
Pending legal-status Critical Current


Abstract

PROBLEM TO BE SOLVED: To provide a voice reception system that estimates the emotional state of a customer from the customer's voice and informs the operator of it.

SOLUTION: The system comprises a server device and at least one microphone-equipped PC connected to the server device by a LAN. The server device has first means for storing customer propensity information, second means for storing specific words, third means for extracting words by speech recognition, fourth means for comparing the words extracted by the third means against the specific words stored by the second means, and fifth means for judging emotion from the comparison result and the contents of the first means.

COPYRIGHT: (C)2004,JPO

Description

[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to a voice reception system via a telephone or the Internet, and more particularly, to a voice reception system capable of determining the state of a customer.
[0002]
[Prior art]
Conventionally, when responding to customer complaints by telephone at a call center or the like, the customer's state, that is, how the customer feels about the cause of the complaint, could only be judged by the reception operator while talking on the phone. Because the degree of a customer's anger cannot be judged objectively, an inexperienced operator often responds clumsily and ends up pouring oil on the customer's anger. Typically a senior operator monitors each operator's calls, or takes over a customer at the operator's request; no technology exists for judging the customer's state systematically.
[0003]
[Problems to be solved by the invention]
An object of the present invention is to provide a voice reception system that can estimate a customer's emotional state from the customer's voice, by measuring the words the customer speaks, the speaking speed, and the volume and comparing them against the customer's normal state, and notify the operator of the result.
[0004]
[Means for Solving the Problems]
A feature of the present invention is a voice reception system connected to a customer's voice terminal via a communication line, comprising a server device and at least one microphone-equipped PC connected to the server device via a LAN. The server device comprises first means for storing customer propensity information, second means for storing specific words, third means for extracting words by speech recognition, fourth means for comparing the words extracted by the third means against the specific words stored by the second means, and fifth means for judging emotion from the comparison result and the contents of the first means.
[0005]
Preferably, the system further comprises sixth means for measuring the utterance speed of the words extracted by the third means, and emotion is judged from the measured speed and the contents stored in the first means.
[0006]
Preferably, the system further comprises seventh means for measuring the volume of the received voice, and emotion is judged from the measured volume and the contents stored in the first means.
[0007]
Preferably, a different alarm is displayed, or a warning sound is generated, according to the numerical result of the comparison.
[0008]
Preferably, the emotion determination is performed by the following value S.
[0009]
(Equation 2)
Figure 2004037989
Excitement value = coefficient determined by the word
Excitement A, speed A, volume A: customer propensity information stored in the first means
[0010]
BEST MODE FOR CARRYING OUT THE INVENTION
FIG. 1 shows the overall configuration of the present invention. A customer's PC (personal computer) and a reception operator's PC are connected via the Internet. An imaging camera may also be connected to a PC. A microphone and a speaker are connected to each PC, so that the two parties can converse by voice.
[0011]
In FIG. 1, the present invention is also applicable to a configuration in which the customer connects by an ordinary telephone or an IP telephone.
[0012]
FIG. 2 shows the configuration of the reception system: a server and at least one reception PC are connected via a LAN. The server judges the customer's conversation state and notifies the reception operator's PC of it.
[0013]
FIG. 3 shows the functional configuration of the server. The word storage unit 1 stores words that people generally use when angry; FIG. 4 shows an example. Since the words used differ with the degree of anger, an excitement value is assigned to each word. The customer information storage unit 2 stores ordinary customer information, such as customer ID and customer name, together with the average excitement degree, speed, and volume measured by this system in past interactions with the customer; FIG. 5 shows an example. The speech recognition unit 3 continuously analyzes the received customer voice and breaks it down into the individual words the customer uses; it is built from known speech recognition techniques. The word matching unit 4 matches the words recognized by the speech recognition unit 3 against the words stored in the word storage unit 1 and, on a match, transfers the word's excitement value to the emotion determination unit 5. The speed/volume measurement unit 6 counts the words recognized by the speech recognition unit 3, measures the volume of the voice taken from the receive buffer, and transfers both to the emotion determination unit 5. The emotion determination unit 5 fetches the measurement results from the word matching unit 4 and the speed/volume measurement unit 6 at regular intervals and judges the customer's degree of excitement.
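The word storage unit (1) and word matching unit (4) described above can be sketched as a lookup table keyed by word. The words and excitement values below are invented examples; the patent's actual table (FIG. 4) is not reproduced in the text.

```python
# Hypothetical sketch of the word storage unit 1 and word matching unit 4.
# Words and excitement values are illustrative only, not from FIG. 4.

EXCITEMENT_WORDS = {        # word storage unit 1: word -> excitement value
    "unacceptable": 3,
    "ridiculous": 2,
    "manager": 1,
}

def match_words(recognized_words):
    """Word matching unit 4: for each word recognized by the speech
    recognition unit, return its excitement value if it is stored."""
    return [EXCITEMENT_WORDS[w] for w in recognized_words if w in EXCITEMENT_WORDS]

# A stream of recognized words yields the excitement values to be
# transferred to the emotion determination unit 5.
values = match_words(["this", "is", "ridiculous", "and", "unacceptable"])
```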
[0014]
FIG. 6 shows the functional block configuration of the emotion determination unit 5. The specific word counting unit 51 receives from the word matching unit 4 the excitement value of each word that matched a word stored in the word storage unit 1, and calculates and stores the count of received words and the cumulative excitement value. This value is accumulated until the end of the call, and is also kept in a temporary store that is cleared to zero each time it is transferred on an instruction from the S value calculation unit 55. The average speed calculation unit 53 receives the number of words measured at regular intervals from the speed/volume measurement unit 6 and stores the cumulative value. The average volume calculation unit 52 receives the volume measured at regular intervals from the speed/volume measurement unit 6 and stores a weighted average with the volume already stored. Like the specific word counting unit 51, the average speed calculation unit 53 and the average volume calculation unit 52 each have a store that holds the average value until the end of the call and a temporary store that is cleared to zero after transfer to the S value calculation unit 55. These three values are transferred to the S value calculation unit when it requests them.
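The dual storage described for units 51 through 53, a call-long total alongside a temporary store cleared on each transfer, can be sketched as follows. Class and attribute names are illustrative, not taken from the patent.

```python
class Accumulator:
    """Sketch of the storage pattern in units 51-53: a value accumulated
    for the whole call, plus a temporary store that is cleared to zero
    each time the S value calculation unit 55 fetches it."""

    def __init__(self):
        self.call_total = 0.0   # kept until the call ends
        self.periodic = 0.0     # cleared on each transfer to unit 55

    def add(self, value):
        """Record a measurement received at a regular interval."""
        self.call_total += value
        self.periodic += value

    def transfer(self):
        """Hand the periodic value to the S value calculation unit,
        then clear the temporary store to zero."""
        value, self.periodic = self.periodic, 0.0
        return value
```

For example, after `add(2)` and `add(3)`, `transfer()` returns 5 and leaves the temporary store at 0 while `call_total` remains 5.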
[0015]
The S value calculation unit 55 calculates the customer's degree of excitement at regular intervals from the three received values and the past excitement degree, speed, and volume stored in the customer information storage unit 2. The excitement degree and the S value are calculated, for example, as follows.
[0016]
[Equation 3]
Figure 2004037989
[0017]
The S value is near 1 when the customer is in a normal state. When the S value exceeds a certain threshold, the S value calculation unit transfers an alarm to the reception operator's PC for display. This can be a single level (for example, an alarm is sent when S is 2 or more) or multiple levels, with the alarm display changed at 2 or more, 3 or more, and 4 or more.
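The actual Equation 3 appears only as an image in the source, so the formula below is an assumption: each current measurement is divided by the customer's stored average, which at least reproduces the stated property that S is near 1 for a customer in a normal state. The multi-level alarm mapping follows paragraph [0017].

```python
def s_value(excitement, speed, volume, profile):
    """One plausible reading of the S calculation (the real Equation 3 is
    not reproduced in the text): ratio of each current measurement to the
    customer's stored average, so S is about 1 in the normal state.
    `profile` keys are invented names for excitement A, speed A, volume A."""
    return (excitement / profile["excitement_a"]
            + speed / profile["speed_a"]
            + volume / profile["volume_a"]) / 3

def alarm_level(s):
    """Multi-level alarm of paragraph [0017]: different displays for
    S of 2 or more, 3 or more, and 4 or more; 0 means no alarm."""
    for threshold in (4, 3, 2):
        if s >= threshold:
            return threshold
    return 0
```

With a profile of past averages `{"excitement_a": 2.0, "speed_a": 3.0, "volume_a": 60.0}`, current measurements equal to those averages give S = 1 and no alarm.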
[0018]
FIG. 7 shows the overall processing flow. When the call ends, the average excitement degree, average speed, and average volume from the start to the end of the call, held in the specific word counting unit, average speed calculation unit, and average volume calculation unit of the S value calculation unit, are averaged with the past values held in the customer information storage unit, and the results are stored back in the customer information storage unit. As shown in FIG. 5, the average can be computed as a weighted average based on the number of past calls, for example as follows.
[0019]
Excitement degree A = (Excitement degree A × Number of calls + Excitement degree of this time) / (Number of calls + 1).
Alternatively, a weighted average can be calculated based on the total duration of past calls and the duration of the current call.
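The call-count update of paragraph [0019] can be written directly from the given formula. The duration-weighted variant is only mentioned, not specified, so its weighting below is an assumption.

```python
def update_excitement_average(avg, call_count, current):
    """Call-count-weighted update from paragraph [0019]:
    excitement A = (excitement A x number of calls + current) / (number of calls + 1)."""
    return (avg * call_count + current) / (call_count + 1)

def update_by_duration(avg, total_seconds, current, call_seconds):
    """Duration-weighted alternative mentioned in [0019]. The text gives
    no formula, so weighting by call length is an assumption."""
    return (avg * total_seconds + current * call_seconds) / (total_seconds + call_seconds)
```

For example, a stored average of 2.0 over 3 calls updated with a current excitement of 6.0 becomes (2.0 x 3 + 6.0) / 4 = 3.0.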
[0020]
[Effects of the Invention]
With this configuration, the degree of the customer's emotion is judged as soon as the call from the customer reaches the reception operator, and the customer's feeling about a complaint or the like can be communicated to the operator. The operator can then respond appropriately while watching the degree of the customer's emotion displayed on the PC. In addition, the emotion levels of the customers shown on several operators' PCs can be displayed together on another PC, where a senior operator can monitor them and intervene in the call of an operator who is handling a particularly angry customer.
[0021]
Moreover, since the average of the customer's emotion level can be managed as customer information, the customer's emotional characteristics can be grasped at the time of an incoming call. That is, as with CTI, the emotion average can be looked up from the customer's telephone number, IP address, or login name and displayed on the reception operator's PC when the call arrives. The operator can thus know before answering whether the customer is emotionally volatile, and can respond appropriately from the start.
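The CTI-style lookup of paragraph [0021] amounts to keying the customer information storage unit by caller identity on an incoming call. The record layout, field names, and sample data below are invented for illustration.

```python
# Sketch of the incoming-call lookup in [0021]: the caller's telephone
# number (or IP address / login name) keys into the customer information
# storage unit, and the stored emotion average is shown to the operator
# before the conversation begins. All names and data are hypothetical.

CUSTOMER_DB = {
    "+81-3-0000-0000": {"name": "Example Customer", "excitement_a": 3.2},
}

def on_incoming_call(caller_id):
    """Return the text to display on the reception operator's PC."""
    record = CUSTOMER_DB.get(caller_id)
    if record is None:
        return "new caller: no emotion profile"
    return f"{record['name']}: average excitement {record['excitement_a']}"
```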
[Brief description of the drawings]
FIG. 1 shows the overall configuration of the present invention.
FIG. 2 shows a system configuration of a reception center.
FIG. 3 shows a block configuration of a server.
FIG. 4 shows a configuration of a word storage unit.
FIG. 5 shows the contents of a customer information storage unit.
FIG. 6 shows a block configuration of an emotion determination unit.
FIG. 7 shows an overall processing flow of the present invention.
[Explanation of symbols]
1 Word storage unit
2 Customer information storage unit
3 Speech recognition unit
4 Word matching unit
5 Emotion determination unit
6 Speed/volume measurement unit
51 Specific word counting unit
52 Average volume calculation unit
53 Average speed calculation unit
54 Timer monitoring unit
55 S value calculation unit

Claims (5)

1. A voice reception system connected to a customer's voice terminal via a communication line, comprising:
a server device; and
at least one PC equipped with a microphone and connected to the server device via a LAN,
wherein the server device comprises first means for storing customer propensity information, second means for storing specific words, third means for extracting words by speech recognition, fourth means for comparing the words extracted by the third means against the specific words stored by the second means, and fifth means for judging emotion from the comparison result and the contents of the first means.

2. The voice reception system according to claim 1, further comprising sixth means for measuring the utterance speed of the words extracted by the third means, wherein emotion is judged from the measured speed and the contents stored in the first means.

3. The voice reception system according to claim 1 or 2, further comprising seventh means for measuring the volume of the received voice, wherein emotion is judged from the measured volume and the contents stored in the first means.

4. The voice reception system according to any one of claims 1 to 3, wherein a different alarm is displayed, or a warning sound is generated, according to the numerical result of the emotion judgment.

5. The voice reception system according to any one of claims 1 to 3, wherein the emotion judgment is performed by the following S:
Figure 2004037989
Excitement value = coefficient determined by the word
Excitement A, speed A, volume A: customer propensity information stored in the first means
JP2002197134A 2002-07-05 2002-07-05 Voice reception system Pending JP2004037989A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2002197134A JP2004037989A (en) 2002-07-05 2002-07-05 Voice reception system


Publications (1)

Publication Number Publication Date
JP2004037989A true JP2004037989A (en) 2004-02-05

Family

ID=31704985

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2002197134A Pending JP2004037989A (en) 2002-07-05 2002-07-05 Voice reception system

Country Status (1)

Country Link
JP (1) JP2004037989A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0682376B2 (en) * 1986-07-10 1994-10-19 日本電気株式会社 Emotion information extraction device
JPH0922296A (en) * 1995-07-05 1997-01-21 Sanyo Electric Co Ltd Sensitivity information input processing device and processing method therefor
JPH11119791A (en) * 1997-10-20 1999-04-30 Hitachi Ltd System and method for voice feeling recognition
JP2001083984A (en) * 1999-09-09 2001-03-30 Alpine Electronics Inc Interface device
JP2001117581A (en) * 1999-10-22 2001-04-27 Alpine Electronics Inc Feeling recognition device
JP2001184082A (en) * 1999-12-24 2001-07-06 Sharp Corp Voice processor
JP2002177653A (en) * 2000-12-15 2002-06-25 Namco Ltd With-remote-player game apparatus


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006106711A (en) * 2004-09-10 2006-04-20 Matsushita Electric Ind Co Ltd Information processing terminal
JP4587854B2 (en) * 2005-03-23 2010-11-24 東京電力株式会社 Emotion analysis device, emotion analysis program, program storage medium
JP2006267464A (en) * 2005-03-23 2006-10-05 Tokyo Electric Power Co Inc:The Emotion analyzer, emotion analysis program and program storage medium
JP2007004001A (en) * 2005-06-27 2007-01-11 Tokyo Electric Power Co Inc:The Operator answering ability diagnosing device, operator answering ability diagnosing program, and program storage medium
JP2007052212A (en) * 2005-08-17 2007-03-01 Nec Fielding Ltd Maintenance skilled person selection device, selection system, selection method and selection program
JP2007286377A (en) * 2006-04-18 2007-11-01 Nippon Telegr & Teleph Corp <Ntt> Answer evaluating device and method thereof, and program and recording medium therefor
JP4728868B2 (en) * 2006-04-18 2011-07-20 日本電信電話株式会社 Response evaluation apparatus, method, program, and recording medium
WO2007148493A1 (en) * 2006-06-23 2007-12-27 Panasonic Corporation Emotion recognizer
US8204747B2 (en) 2006-06-23 2012-06-19 Panasonic Corporation Emotion recognition apparatus
JP2008076905A (en) * 2006-09-22 2008-04-03 Univ Of Tokyo Feeling discrimination method
JP2009025517A (en) * 2007-07-19 2009-02-05 Nissan Motor Co Ltd On-vehicle information providing interactive device
JP2009111829A (en) * 2007-10-31 2009-05-21 Fujitsu Ltd Telephone job system and telephone job program
WO2010041507A1 * 2008-10-10 International Business Machines Corporation System and method which extract specific situation in conversation
US9269357B2 (en) 2008-10-10 2016-02-23 Nuance Communications, Inc. System and method for extracting a specific situation from a conversation
JP2013157666A (en) * 2012-01-26 2013-08-15 Sumitomo Mitsui Banking Corp Telephone call answering job support system and method of the same
WO2014069120A1 (en) * 2012-10-31 2014-05-08 日本電気株式会社 Analysis object determination device and analysis object determination method
JPWO2014069120A1 (en) * 2012-10-31 2016-09-08 日本電気株式会社 Analysis object determination apparatus and analysis object determination method
US10083686B2 (en) 2012-10-31 2018-09-25 Nec Corporation Analysis object determination device, analysis object determination method and computer-readable medium
JP2017049364A (en) * 2015-08-31 2017-03-09 富士通株式会社 Utterance state determination device, utterance state determination method, and determination program
JP2019176442A (en) * 2018-03-29 2019-10-10 沖電気工業株式会社 Information processing device, information processing method and program

Similar Documents

Publication Publication Date Title
US10477020B2 (en) System and method for monitoring and visualizing emotions in call center dialogs at call centers
JP2004037989A (en) Voice reception system
US7085719B1 (en) Voice filter for normalizing an agents response by altering emotional and word content
US8654937B2 (en) System and method for call center agent quality assurance using biometric detection technologies
US8718262B2 (en) Method and system for automatically routing a telephonic communication base on analytic attributes associated with prior telephonic communication
US20170154293A1 (en) Customer service appraisal device, customer service appraisal system, and customer service appraisal method
US8861708B2 (en) System and method for monitoring a voice in real time
WO2014120291A1 (en) System and method for improving voice communication over a network
JP2010113167A (en) Harmful customer detection system, its method and harmful customer detection program
JP6432434B2 (en) Nursing care support device using conversational voice
JP5532781B2 (en) Waiting service server, waiting service system using the server, and expected end time calculation method for waiting service
JP5267995B2 (en) Conversation group grasping device, conversation group grasping method, and program
JP2010273130A (en) Device for determining progress of fraud, dictionary generator, method for determining progress of fraud, and method for generating dictionary
JP6863179B2 (en) Call center system, call center device, dialogue method, and its program with customer complaint detection function
JP6943237B2 (en) Information processing equipment, information processing methods, and programs
JP6598227B1 (en) Cat-type conversation robot
CN112188171A (en) System and method for judging visiting relationship of client
CN110225213B (en) Recognition method of voice call scene and audio policy server
JP6718623B2 (en) Cat conversation robot
JP2019015837A (en) Conversation type robot having character of cat and conversation management program for showing character of cat
JP4836752B2 (en) Operator skill management system in call center
WO2014069443A1 (en) Complaint call determination device and complaint call determination method
WO2023162009A1 (en) Emotion information utilization device, emotion information utilization method, and program
JP2005039501A (en) Portable telephone recording service system and its method and program
TWI815400B (en) Emotion analysis system

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20040806

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20060829

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20060912

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20070515