JP2007288242A - Operator evaluation method, device, operator evaluation program, and recording medium - Google Patents

Operator evaluation method, device, operator evaluation program, and recording medium Download PDF

Info

Publication number
JP2007288242A
Authority
JP
Japan
Prior art keywords
time
operator
voice
utterance
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2006109752A
Other languages
Japanese (ja)
Inventor
Satoru Kobashigawa
Noboru Miyazaki
Original Assignee
Nippon Telegr & Teleph Corp <Ntt>
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegr & Teleph Corp <Ntt>
Priority to JP2006109752A
Publication of JP2007288242A
Application status: Pending

Abstract

PROBLEM TO BE SOLVED: To evaluate the proficiency level of an operator whose business is handling calls by voice.
SOLUTION: The silent time during which neither the operator nor the user speaks is measured, and a lower evaluation value is output as the silent time becomes longer (Embodiment 1). The utterance times of the operator and the user are measured, and a lower evaluation value is output as the operator speaks longer relative to the user (Embodiment 2). The time during which the operator and the user speak simultaneously is measured, and a lower evaluation value is output as that simultaneous time becomes longer (Embodiment 3).
COPYRIGHT: (C)2008,JPO&INPIT

Description

  The present invention relates to an evaluation method, an apparatus, an operator evaluation program, and a recording medium for evaluating the proficiency level of an operator who performs telephone reception work, for example at a call center.

As an operator evaluation device, for example, a method has been considered that uses an operator state detection sensor attached to a headset, as disclosed in Patent Document 1, to measure the time the operator is away from the seat (hereinafter referred to as Prior Art 1).
In addition, methods have been considered in which only voice sections are extracted as in Patent Document 2, or only emotional sections are extracted as in Patent Document 3, and a supervisor who manages the operators later listens to the extracted voice to judge the operator's proficiency level (hereinafter referred to as Prior Art 2).
JP 2004-282154 A
JP-A-5-219270
JP 2005-345496 A

Prior Art 1 has the following problems:
・A biosensor attached to the headset and a sensor detection function are required, so the system becomes expensive and large.
・For evaluating an operator, the operator's proficiency is more important than the time spent away from the seat; from the viewpoint of highly rating operators who achieve high customer satisfaction, it is preferable to evaluate the time actually spent handling calls.
Prior Art 2 has the following problem:
・Even though only the extracted sections are reviewed, they must actually be listened to, so evaluating an operator incurs an enormous time cost.

In order to solve the above problems, the present invention first performs operator evaluation automatically, using only the reception voice signals of the operator and the user.
For example, when an operator spends a long time searching for information to convey to the user, the operator's proficiency is regarded as low. When an operator talks over (blocks) the user's speech, customer satisfaction falls, so the operator's proficiency is also regarded as low. Likewise, when an operator speaks for a relatively long time compared to the user, the operator's proficiency should be evaluated as low (a proficient operator can improve efficiency by, for example, substituting pre-recorded voice guidance for detailed explanations or sending a pamphlet). That is, an operator with long sections in which neither the operator nor the user speaks, and an operator with long periods in which the operator and the user speak simultaneously, are rated low.

  As a specific method, the operator evaluation method according to the present invention comprises: a reception start time detection process for detecting the reception start time from the volume level of the input voice, a keyboard/mouse input, or a receiver operation signal; a silence time detection process for detecting, from the input voice, the silent time during which neither the operator nor the user utters; a reception end time detection process for detecting the reception end time; a silent time ratio calculation process for calculating the silent time ratio by dividing the silent time by the difference between the reception end time and the reception start time; and an operator evaluation process for outputting the silent time ratio as an operator evaluation value.

  The present invention further comprises: a reception start time detection process for detecting the reception start time from the volume level of the input voice, a keyboard/mouse input, or a receiver operation signal; a voice separation process for separating the operator voice and the user voice; an utterance time recording process for obtaining utterance time information for each of the operator voice and the user voice; a reception end time detection process for detecting the reception end time; a superposition time calculation process for calculating, from the utterance time information of the operator voice and the user voice, the superposition time during which the operator voice and the user voice overlap; a superposition time ratio calculation process for calculating the superposition time ratio by dividing the superposition time by the reception time; and an operator evaluation process for outputting the superposition time ratio as an operator evaluation value.

The present invention further comprises: a voice separation process for separating the input voice signal into the operator voice and the user voice after the start of reception; an utterance time measurement process for measuring the utterance time for each of the separated operator voice and user voice; an utterance time ratio calculation process for calculating the utterance time ratio by dividing the operator utterance time length by the user utterance time length; and an operator evaluation process for outputting the utterance time ratio as an operator evaluation value.
The present invention further comprises the silent time ratio calculation process, the superposition time ratio calculation process, and the utterance time ratio calculation process described above, together with an integrated operator evaluation process that integrates at least two of their calculation results by a weighted linear sum.

An operator evaluation apparatus according to the present invention comprises: a reception start time detection unit that detects the reception start time from the volume level of the input voice, a keyboard/mouse input, or a receiver operation signal; a silence time detection unit that detects the silent time from the input voice; a reception end time detection unit that detects the reception end time; a silent time ratio calculation unit that calculates the silent time ratio by dividing the silent time by the difference between the reception end time and the reception start time; and an operator evaluation unit that outputs the silent time ratio as an operator evaluation value.
The present invention further comprises: a reception start time detection unit that detects the reception start time from the volume level of the input voice, a keyboard/mouse input, or a receiver operation signal; a voice separation unit that separates the operator voice and the user voice; an utterance time recording unit that obtains utterance time information for each of the operator voice and the user voice; a reception end time detection unit that detects the reception end time; a superposition time calculation unit that calculates, from the utterance time information of the operator voice and the user voice, the superposition time during which the operator voice and the user voice overlap; a superposition time ratio calculation unit that calculates the superposition time ratio by dividing the superposition time by the reception time; and an operator evaluation unit that outputs the superposition time ratio as an operator evaluation value.

The present invention further comprises: a voice separation unit that separates the input voice signal into the operator voice and the user voice after the start of reception; an utterance time measurement unit that measures the utterance time for each of the separated operator voice and user voice; an utterance time ratio calculation unit that calculates the utterance time ratio by dividing the operator utterance time length by the user utterance time length; and an operator evaluation unit that outputs the utterance time ratio as an operator evaluation value.
The present invention further comprises the silent time ratio calculation unit, the superposition time ratio calculation unit, and the utterance time ratio calculation unit described above, together with an integrated operator evaluation unit that integrates at least two of their calculation results by a weighted linear sum.

  According to the present invention, an operator can be evaluated semi-automatically without actually listening to the dialogue between the operator and the user. Since the evaluation is performed from voice alone, basically no special equipment needs to be provided for the operator. In addition, automatic feedback from the evaluation apparatus leads to improvement of the operator's proficiency without burdening the operation manager.

The operator evaluation apparatus according to the present invention can be configured entirely in hardware, and the operator evaluation method according to the present invention can be executed by such a hardware evaluation apparatus. However, the simplest and most desirable embodiment is one in which the operator evaluation program according to the present invention is installed in a computer and the computer is made to function as an operator evaluation device.
When a computer executes the operator evaluation method according to the present invention by means of the operator evaluation program installed on it, the computer executes at least: a reception start time detection process for detecting the reception start time from the volume level of the input voice, a keyboard/mouse input, or a receiver operation signal; a silence time detection process for detecting the silent time from the input voice; a reception end time detection process for detecting the reception end time; a silent time ratio calculation process for calculating the silent time ratio by dividing the silent time by the difference between the reception end time and the reception start time; and an operator evaluation process for outputting the silent time ratio as an operator evaluation value.

When a computer is caused to function as an operator evaluation apparatus according to the present invention by the operator evaluation program installed on it, the following embodiments are conceivable.
An embodiment that constructs on the computer a reception start time detection unit that detects the reception start time from at least the volume level of the input voice, a keyboard/mouse input, or a receiver operation signal; a silent time detection unit that detects the silent time from the input voice; a reception end time detection unit that detects the reception end time; a silent time ratio calculation unit that calculates the silent time ratio by dividing the silent time by the difference between the reception end time and the reception start time; and an operator evaluation unit that outputs the silent time ratio as an operator evaluation value, so that the computer functions as an operator evaluation device.
An embodiment that constructs a reception start time detection unit that detects the reception start time from the volume level of the input voice, a keyboard/mouse input, or a receiver operation signal; a voice separation unit that separates the operator voice and the user voice; an utterance time recording unit that obtains utterance time information for each of the operator voice and the user voice; a reception end time detection unit that detects the reception end time; a superposition time calculation unit that calculates, from the utterance time information of the operator voice and the user voice, the superposition time during which the operator voice and the user voice overlap; a superposition time ratio calculation unit that calculates the superposition time ratio by dividing the superposition time by the reception time; and an operator evaluation unit that outputs the superposition time ratio as an operator evaluation value, so that the computer functions as an operator evaluation device.
An embodiment that constructs a voice separation unit that separates the input voice signal into the operator voice and the user voice after the start of reception; an utterance time measurement unit that measures the utterance time for each of the separated operator voice and user voice; an utterance time ratio calculation unit that calculates the utterance time ratio by dividing the operator utterance time length by the user utterance time length; and an operator evaluation unit that outputs the utterance time ratio as an operator evaluation value, so that the computer functions as an operator evaluation device.

In the first embodiment, the operator's proficiency level is evaluated using a section in which neither the operator nor the user speaks.
A functional configuration diagram of the first embodiment is shown in FIG. 1, and a processing procedure is shown in FIG.
In this embodiment, the reception start time detection unit 13 detects the reception start time from the volume level of the operator or user voice input from the voice input terminal 12, or from the operator's keyboard/mouse input or receiver operation signal input from the input terminal 11 (steps SP1 to SP2).
Next, the silent time detection unit 14 detects the silent time during which neither the operator nor the user is speaking from the input voice (step SP3).

The reception end time detector 15 detects the reception end time (steps SP4 to SP5).
The silent time ratio calculation unit 16 calculates the silent time ratio by dividing the silent time by the difference between the reception end time and the reception start time ("reception end time − reception start time") (step SP6). The operator evaluation unit 17 outputs the silent time ratio as an operator evaluation value (step SP7).
In this case, the operator evaluation value indicates that the longer the silent time, the lower the operator's proficiency. Accordingly, when the silent time ratio is output as it is, a larger value means a lower proficiency level. The reciprocal of the silent time ratio may instead be output as the operator evaluation value; in that case, a larger value means a higher proficiency level.

Further, the operator evaluation value output destination may be an operation manager called a supervisor or the like. It is also possible to score operator evaluation values using a sigmoid function or the like. Moreover, the operator evaluation value output by the operator evaluation unit 17 can be used for self-study by the operator himself / herself.
By providing an operator/user voice separation unit 18, the silent time detection unit 14 can also operate on the separated operator and user voices. In addition, by providing a speech recognition unit 19, the utterances of the operator and the user can be recognized and the silent time can be obtained more accurately. In that case, the denominator of the silent time ratio may be the total time of the sections in which the operator or the user is speaking, as obtained by the speech recognition unit 19.
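As an illustration only, the following is a minimal Python sketch of the silent time ratio calculation in this embodiment. It assumes that voice activity detection has already produced lists of (start, end) utterance intervals in seconds for the operator and the user; the function name, the interval representation, and the example times are hypothetical and are not part of the patent.

```python
def silent_time_ratio(reception_start, reception_end,
                      operator_utterances, user_utterances):
    """Silent time ratio = silent time / (reception end time - reception start time).

    operator_utterances / user_utterances: lists of (start, end) tuples in seconds,
    assumed to lie within [reception_start, reception_end].
    """
    reception_time = reception_end - reception_start
    if reception_time <= 0:
        raise ValueError("reception end time must be after reception start time")

    # Merge all utterance intervals from both speakers; any instant covered
    # by at least one interval counts as "someone is speaking".
    intervals = sorted(operator_utterances + user_utterances)
    spoken = 0.0
    cur_start = cur_end = None
    for start, end in intervals:
        if cur_end is None or start > cur_end:
            if cur_end is not None:
                spoken += cur_end - cur_start
            cur_start, cur_end = start, end
        else:
            cur_end = max(cur_end, end)
    if cur_end is not None:
        spoken += cur_end - cur_start

    silent_time = reception_time - spoken
    return silent_time / reception_time


# A larger ratio means more silence and hence a lower proficiency evaluation;
# the reciprocal can be output instead so that a larger value means higher proficiency.
ratio = silent_time_ratio(0.0, 120.0,
                          operator_utterances=[(2.0, 10.0), (40.0, 55.0)],
                          user_utterances=[(12.0, 30.0), (60.0, 80.0)])
```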

In the second embodiment, the proficiency level of the operator is evaluated using the utterance time ratio between the operator and the user.
FIG. 3 shows a functional configuration example of the second embodiment, and FIG. 4 shows the processing procedure. In the second embodiment, the input voice signal is separated into the operator voice and the user voice by the operator/user voice separation unit 18 after the start of reception (FIG. 4, step SP8). The utterance time of each of the separated operator voice and user voice is measured by the utterance time measuring unit 21 (step SP41). The reception end time detection unit 15 detects the reception end time (steps SP4 and SP5). The operator/user utterance time ratio calculation unit 22 calculates the operator/user utterance time ratio by dividing the operator utterance time by the user utterance time (step SP42). The operator evaluation unit 17 outputs this operator utterance time ratio as an operator evaluation value.

  When the time ratio obtained by dividing the operator utterance time by the user utterance time is used directly as the operator evaluation value, a larger ratio means a lower operator proficiency level; that is, the evaluation becomes lower the longer the operator talks relative to the user. The reciprocal of this time ratio may instead be output as the operator evaluation value. The operator evaluation value can be sent to, for example, an operation manager called a supervisor. It is also possible to score the operator evaluation value using a sigmoid function or the like. Moreover, the operator evaluation value output by the operator evaluation unit 17 can be used for self-study by the operator himself/herself.
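A corresponding minimal sketch of the operator/user utterance time ratio of this embodiment, under the same hypothetical (start, end) interval representation, is shown below; the optional reciprocal output and sigmoid scoring mentioned above appear as illustrative parameters, not as features fixed by the patent.

```python
import math

def utterance_time_ratio(operator_utterances, user_utterances):
    """Utterance time ratio = operator utterance time length / user utterance time length."""
    operator_time = sum(end - start for start, end in operator_utterances)
    user_time = sum(end - start for start, end in user_utterances)
    if user_time == 0:
        return float("inf")  # the operator spoke but the user never did
    return operator_time / user_time


def evaluation_value(ratio, use_reciprocal=False, use_sigmoid=False):
    # A larger ratio means the operator talks relatively longer, i.e. lower proficiency.
    value = (1.0 / ratio) if use_reciprocal else ratio
    if use_sigmoid:
        value = 1.0 / (1.0 + math.exp(-value))  # squash the score into (0, 1)
    return value
```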

In the third embodiment, the proficiency level of the operator is evaluated using the voice superposition time of the operator and the user.
FIG. 5 shows a functional configuration example of the third embodiment, and FIG. 6 shows a processing procedure. In this embodiment, the reception start time detection unit 13 detects the reception start time from the volume level of the operator or user voice, the operator's keyboard / mouse input or the receiver operation signal (steps SP1 and SP2).
Next, the input voice signal is separated into operator voice and user voice by the operator / user voice separator 18 (step SP8).

The operator/user utterance time recording unit 31 obtains utterance time information for each of the operator voice and the user voice (step SP61).
The operator / user superimposition time calculation unit 32 detects the superposition state of the operator voice and the user voice from the utterance time information of the operator voice and the user voice, and calculates the superposition time of the operator voice and the user voice (step SP62).
The reception end time detector 15 detects the reception end time (steps SP4 and SP5).
The operator / user superimposition time ratio calculation unit 33 calculates the operator / user superimposition time ratio by dividing the operator / user superimposition time by the reception time ("reception end time-reception start time") (step SP63).

The operator evaluation unit 17 outputs the operator/user superposition time ratio as an operator evaluation value. In this case, the shorter the superposition time, the higher the operator's proficiency is judged to be. The operator evaluation unit 17 outputs the superposition time ratio calculated by the operator/user superposition time ratio calculation unit 33 as it is as the operator evaluation value, or outputs its reciprocal, for example to the supervisor (step SP7).
When the superposition time ratio is output as the operator evaluation value as it is, a smaller value means a higher operator proficiency level. When the reciprocal of the superposition time ratio is output, a larger value means a higher proficiency level. It is also possible to score the operator evaluation value using a sigmoid function or the like. Moreover, the operator evaluation value output by the operator evaluation unit 17 can be used for self-study by the operator himself/herself.
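The superposition (overlap) time of this embodiment can likewise be sketched from the same hypothetical utterance interval lists; the pairwise loop below is a simplification chosen for clarity, not a method described in the patent.

```python
def superposition_time_ratio(reception_start, reception_end,
                             operator_utterances, user_utterances):
    """Superposition time ratio = time the operator and user speak simultaneously / reception time."""
    overlap = 0.0
    for o_start, o_end in operator_utterances:
        for u_start, u_end in user_utterances:
            # Length of the intersection of the two intervals (0 if they do not overlap).
            overlap += max(0.0, min(o_end, u_end) - max(o_start, u_start))
    reception_time = reception_end - reception_start
    return overlap / reception_time
```

A smaller ratio corresponds to a higher proficiency evaluation; as in the text, the reciprocal may be output instead.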

In the present embodiment, the silent section length during which neither the operator nor the user is speaking (described in Embodiment 1), the operator/user utterance time ratio (described in Embodiment 2), and the length of the sections in which the operator and user voices are superimposed (described in Embodiment 3) are integrated to assess the operator's proficiency. FIG. 7 shows a functional configuration example of the fourth embodiment.
In this embodiment, the silent time ratio calculation unit 16, the operator/user utterance time ratio calculation unit 22, and the operator/user superposition time ratio calculation unit 33 measure, respectively, the silent time ratio, the operator/user utterance time ratio, and the operator/user superposition time ratio described in Embodiments 1 to 3.

  In the integrated operator evaluation unit 41, for example, two or more of the reciprocals of the silent time ratio, the operator/user utterance time ratio, and the operator/user superposition time ratio are integrated by a weighted linear sum with weights W1, W2, and W3 (see FIG. 7), and the integrated operator evaluation value is calculated and output as the operator evaluation value. It is also possible to score the operator evaluation value using a sigmoid function or the like. Moreover, the operator evaluation value output in this way can be used for self-study by the operator himself/herself.
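A minimal sketch of the integration by weighted linear sum follows; the default weights, the use of reciprocals for all three ratios, the epsilon guard, and the optional sigmoid scoring are illustrative assumptions rather than values fixed by the patent.

```python
import math

def integrated_operator_evaluation(silent_ratio, utterance_ratio, superposition_ratio,
                                   w1=1.0, w2=1.0, w3=1.0, use_sigmoid=False, eps=1e-6):
    """Weighted linear sum (weights W1, W2, W3) of the reciprocals of the three ratios.

    Using the reciprocals makes a larger integrated value mean higher proficiency;
    eps avoids division by zero when a ratio happens to be 0.
    """
    value = (w1 / max(silent_ratio, eps)
             + w2 / max(utterance_ratio, eps)
             + w3 / max(superposition_ratio, eps))
    if use_sigmoid:
        value = 1.0 / (1.0 + math.exp(-value))  # optional scoring into (0, 1)
    return value
```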

In Embodiments 1 to 4 described above, the processing of the operator/user voice separation unit 18 can be realized by hardware that separates the transmission signal and the reception signal, or by a method that uses sound source separation techniques to separate the operator and user voice sections from the power level of each signal.
As another voice separation method, since the user voice is telephone voice that has passed through a telephone line, its power level in the frequency band of 4 kHz and above is very small. Telephone voice also has a low level in the frequency band from 3.4 kHz to 4 kHz. Therefore, when the target voice is branched off before passing through the telephone line, the operator and user sections can be separated by the power level at 4 kHz and above. Even for voice that has passed through the telephone line, such as voice from a recording device, the operator voice and user voice sections can be separated by the level in the 3.4 kHz to 4 kHz band.
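As a rough illustration of this band-power separation, the sketch below labels short frames of the mixed reception audio as operator or user speech from the power above 4 kHz, assuming the operator voice is captured at full bandwidth (branched before the telephone line) while the user voice is band-limited telephone speech. The frame length, FFT-based power estimate, and threshold are assumptions made for illustration; they are not values given in the patent.

```python
import numpy as np

def separate_by_highband_power(signal, sample_rate, frame_len=0.025,
                               cutoff_hz=4000.0, threshold=1e-4):
    """Label each frame 'operator' or 'user' from the mean power above cutoff_hz.

    signal: 1-D numpy array of the mixed reception audio (sample_rate must be
    well above 2 * cutoff_hz for the high band to exist, e.g. 16 kHz).
    Telephone (user) speech has very little energy above about 4 kHz, so frames
    with substantial high-band power are attributed to the operator.
    """
    frame_size = int(frame_len * sample_rate)
    freqs = np.fft.rfftfreq(frame_size, d=1.0 / sample_rate)
    labels = []
    for i in range(0, len(signal) - frame_size + 1, frame_size):
        frame = signal[i:i + frame_size]
        power = np.abs(np.fft.rfft(frame)) ** 2
        high_power = power[freqs >= cutoff_hz].mean()
        labels.append("operator" if high_power > threshold else "user")
    return labels
```

The same idea applies to the 3.4 kHz to 4 kHz band when the audio has already passed through the telephone line.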

  In addition, a narrowband speech model created from speech features using only speech information in the frequency band below 3.4 kHz and a wideband speech model created from speech features using speech information including the band above 3.4 kHz can also be used. The target speech is recognized with both speech models, and the likelihood difference ("likelihood of the wideband speech model − likelihood of the narrowband speech model") is compared with thresholds A and B (A > B): when the difference is greater than threshold A, the section is treated as an operator speech section; when it is smaller than threshold A but greater than or equal to threshold B, it is treated as a section in which the operator and the user speak simultaneously; and when it is less than threshold B, it is treated as a user speech section. In this way, the operator and user speech sections can be separated.
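The threshold decision described above can be sketched as follows, assuming that per-section log-likelihoods under a wideband and a narrowband speech model are already available; how those models are built and scored is outside this sketch, and the section representation is hypothetical.

```python
def classify_section(wideband_loglik, narrowband_loglik, a, b):
    """Classify a speech section from the likelihood difference
    ("wideband model likelihood - narrowband model likelihood"), with A > B."""
    diff = wideband_loglik - narrowband_loglik
    if diff > a:
        return "operator"                    # full-bandwidth speech fits the wideband model better
    elif diff >= b:
        return "operator+user simultaneous"  # intermediate likelihood difference
    else:
        return "user"                        # band-limited telephone speech
```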

  The operator evaluation method and apparatus according to the present invention described above can be realized by installing the operator evaluation program according to the present invention in a computer and having the CPU of the computer decode and execute the program. The operator evaluation program according to the present invention is written in a computer-readable program language and is recorded on a computer-readable recording medium such as a magnetic disk, a CD-ROM, or a semiconductor memory, and the program is installed in the computer from such a recording medium or via a communication line.

  The program is then incorporated into a voice reception system and used.

Brief Description of the Drawings
FIG. 1 is a functional block diagram for explaining Embodiment 1 of the present invention.
FIG. 2 is a flowchart for explaining the processing procedure of Embodiment 1 shown in FIG. 1.
FIG. 3 is a functional block diagram for explaining Embodiment 2 of the present invention.
FIG. 4 is a flowchart for explaining the processing procedure of Embodiment 2 shown in FIG. 3.
FIG. 5 is a functional block diagram for explaining Embodiment 3 of the present invention.
FIG. 6 is a flowchart for explaining the processing procedure of Embodiment 3 shown in FIG. 5.
FIG. 7 is a functional block diagram for explaining Embodiment 4 of the present invention.

Explanation of symbols

11 Keyboard/mouse input terminal
12 Voice input terminal
13 Reception start time detection unit
14 Silent time detection unit
15 Reception end time detection unit
16 Silent time ratio calculation unit
17 Operator evaluation unit
18 Operator/user voice separation unit
19 Speech recognition unit
21 Operator/user utterance time measurement unit
22 Operator/user utterance time ratio calculation unit
31 Operator/user utterance time recording unit
32 Operator/user superposition time calculation unit
33 Operator/user superposition time ratio calculation unit
41 Integrated operator evaluation unit

Claims (10)

  1. A reception start time detection process for detecting a reception start time from a volume level of an input voice or a keyboard / mouse input or a receiver operation signal;
    Silence time detection processing that detects silence time from the input voice,
    A reception end time detection process for detecting a reception end time;
    Silence time ratio calculation processing for calculating the silence time ratio by dividing the silence time by the difference between the reception end time and the reception start time;
    An operator evaluation process for outputting the silent time ratio as an operator evaluation value;
    An operator evaluation method comprising:
  2. A reception start time detection process for detecting a reception start time from a volume level of an input voice or a keyboard / mouse input or a receiver operation signal;
    Voice separation processing for separating operator voice and user voice;
    An utterance time recording process for obtaining utterance time information for each operator voice and user voice;
    A reception end time detection process for detecting a reception end time;
    A superposition time calculation process for calculating a superposition time in which the operator voice and the user voice overlap from the utterance time information of the operator voice and the user voice;
    A superposition time ratio calculation process for calculating a superposition time ratio by dividing the superposition time by the reception time;
    An operator evaluation process for outputting the superposition time ratio as an operator evaluation value;
    An operator evaluation method comprising:
  3. A voice separation process for separating an input voice signal into an operator voice and a user voice after the start of reception;
    An utterance time measurement process for measuring the utterance time for each of the separated operator voice and user voice;
    An utterance time ratio calculation process for calculating an utterance time ratio by dividing an operator utterance time length by a user utterance time length;
    Operator evaluation processing for outputting the utterance time ratio as an operator evaluation value;
    An operator evaluation method comprising:
  4. The silent time ratio calculation process according to claim 1;
    A superposition time ratio calculation process according to claim 2;
    An utterance time ratio calculation process according to claim 3;
    An integrated operator evaluation process that integrates at least two of the calculation results by a weighted linear sum;
    An operator evaluation method comprising:
  5. A reception start time detection unit for detecting a reception start time from a volume level of an input voice or a keyboard / mouse input or a receiver operation signal;
    A silent time detector for detecting the silent time from the input voice;
    A reception end time detection unit for detecting a reception end time;
    A silent time ratio calculation unit for calculating a silent time ratio by dividing the silent time by the difference between the reception end time and the reception start time;
    An operator evaluation unit that outputs the silent time ratio as an operator evaluation value;
    An operator evaluation device comprising:
  6. A reception start time detection unit for detecting a reception start time from a volume level of an input voice or a keyboard / mouse input or a receiver operation signal;
    A voice separation unit for separating operator voice and user voice;
    An utterance time recording unit for obtaining utterance time information for each of the operator voice and the user voice;
    A reception end time detection unit for detecting a reception end time;
    A superposition time calculation unit for calculating, from the utterance time information of the operator voice and the user voice, a superposition time in which the operator voice and the user voice overlap;
    A superposition time ratio calculation unit that calculates a superposition time ratio by dividing the superposition time by the reception time;
    An operator evaluation unit that outputs the superposition time ratio as an operator evaluation value;
    An operator evaluation device comprising:
  7. A voice separation unit that separates an input voice signal into an operator voice and a user voice after starting reception;
    An utterance time measuring unit for measuring the utterance time for each of the separated operator voice and user voice;
    An utterance time ratio calculation unit that calculates an utterance time ratio by dividing an operator utterance time length by a user utterance time length;
    An operator evaluation unit that outputs the utterance time ratio as an operator evaluation value;
    An operator evaluation device comprising:
  8. The silent time ratio calculation unit according to claim 5;
    The superposition time ratio calculation unit according to claim 6,
    An utterance time ratio calculation unit according to claim 7;
    An integrated operator evaluation unit that integrates at least two of the results of these calculation processes by a weighted linear sum;
    An operator evaluation device comprising:
  9.   An operator evaluation program that is written in a computer-readable program language and causes the computer to execute the operator evaluation method according to any one of claims 1 to 4.
  10.   A recording medium comprising a computer-readable recording medium, wherein the operator evaluation program according to claim 9 is recorded on the recording medium.
JP2006109752A 2006-04-12 2006-04-12 Operator evaluation method, device, operator evaluation program, and recording medium Pending JP2007288242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006109752A JP2007288242A (en) 2006-04-12 2006-04-12 Operator evaluation method, device, operator evaluation program, and recording medium

Publications (1)

Publication Number Publication Date
JP2007288242A (en) 2007-11-01

Family

ID=38759646

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006109752A Pending JP2007288242A (en) 2006-04-12 2006-04-12 Operator evaluation method, device, operator evaluation program, and recording medium

Country Status (1)

Country Link
JP (1) JP2007288242A (en)


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009216840A (en) * 2008-03-07 2009-09-24 Internatl Business Mach Corp <Ibm> System, method and program for processing voice data of dialogue between two persons
JP2010021681A (en) * 2008-07-09 2010-01-28 Fujitsu Ltd Reception flow creation program, reception flow creation method, and reception flow creating apparatus
JP2011109333A (en) * 2009-11-16 2011-06-02 Hitachi Kokusai Electric Inc System for managing operator operation situation
JP2011142381A (en) * 2010-01-05 2011-07-21 Fujitsu Ltd Operator selection device, and operator selection program
CN103444160A (en) * 2011-03-17 2013-12-11 富士通株式会社 Operator evaluation support device, operator evaluation support method, and storage medium having operator evaluation support program recorded therein
US8908856B2 (en) 2011-03-17 2014-12-09 Fujitsu Limited Operator evaluation support device and operator evaluation support method
WO2012124104A1 (en) * 2011-03-17 2012-09-20 富士通株式会社 Operator evaluation support device, operator evaluation support method, and storage medium having operator evaluation support program recorded therein
US8731176B2 (en) 2011-03-17 2014-05-20 Fujitsu Limited Operator evaluation support device and operator evaluation support method
JP5585720B2 (en) * 2011-03-17 2014-09-10 富士通株式会社 Operator evaluation support device, operator evaluation support method, and storage medium storing operator evaluation support program
JP5633638B2 (en) * 2011-03-18 2014-12-03 富士通株式会社 Call evaluation device and call evaluation method
US9288314B2 (en) 2011-03-18 2016-03-15 Fujitsu Limited Call evaluation device and call evaluation method
JP2013211926A (en) * 2013-07-01 2013-10-10 Fujitsu Ltd Abnormal conversation detecting apparatus, abnormal conversation detecting method, and abnormal conversation detecting program
JP2017207988A (en) * 2016-05-19 2017-11-24 テクマトリックス株式会社 Customer service system, operator skill assessment method and operator skill assessment program
