WO2012124120A1 - Operator evaluation support apparatus, operator evaluation support method, and storage medium recording an operator evaluation support program - Google Patents
- Publication number
- WO2012124120A1 (PCT/JP2011/056462)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- operator
- utterance
- time
- call
- display screen
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/51—Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
- H04M3/5175—Call or contact centers supervision arrangements
Definitions
- the present invention relates to an operator evaluation support apparatus, an operator evaluation support method, and a storage medium recording an operator evaluation support program, which support the evaluation of a call center operator.
- in a call center, a supervisor who supervises the operators evaluates the call quality of each operator.
- conventionally, the supervisor selects and plays back speech files one by one from a graph showing the time zones in which the customer and the operator spoke (an utterance time zone graph), grasps the content of each utterance, and evaluates the operator's call quality.
- when the speech files are not subjected to speech recognition processing, however, it is difficult to grasp the content of the operator's utterances, and the relationship between multiple voice files, from the utterance time zone graph alone.
- as a result, the supervisor cannot easily identify the problematic parts of the operator's utterances without playing back the utterance voice files.
- the following conventional techniques are known for selecting and playing back utterance audio files.
- a technique for interlocked playback of question and answer audio files is known (see Patent Document 1).
- a technique for reproducing a plurality of audio files without recognizing the contents of the audio files in advance is also known (see Patent Document 2).
- Patent Document 1: JP 2006-171579 A
- Patent Document 2: JP 2007-58767 A
- however, because the conventional technique of Patent Document 1 recognizes the contents of the audio files and builds a tree structure of questions and answers before accepting selective playback, it requires a recognition technique for recognizing the contents of the audio files in advance, and therefore an expensive device.
- the conventional technique of Patent Document 2 creates an audio file for each utterance and merely plays back the files included in a specified time zone; it therefore does not make it easy to grasp the problematic parts of the operator's utterances.
- an embodiment of the present invention has an object to provide an operator evaluation support apparatus, an operator evaluation support method, and a storage medium recording an operator evaluation support program, which can reduce the operations for selecting voice files of customer and operator utterances.
- an operator evaluation support apparatus according to an embodiment supports the evaluation of an operator based on the calls that the operator handles.
- the apparatus includes display recording means for recording the display screens displayed on the operator terminal, content recording means for recording screen content information explaining the content of each display screen in association with the display screen identification information of that screen, and means that refers to the utterance recording means, the display recording means, and the content recording means to create call information indicating the utterance times of the customer and the operator, the display times of the display screens displayed on the operator terminal, and the screen content information corresponding to those display screens, and transmits the call information to the administrator terminal used by the administrator who evaluates the operator.
- this makes it possible to provide an operator evaluation support apparatus, an operator evaluation support method, and a storage medium recording an operator evaluation support program that can reduce the operations for selecting voice files of customer and operator utterances.
- FIG. 1 is a configuration diagram of an example of a call center system according to the present embodiment.
- the call center system shown in FIG. 1 includes a server 1, an operator terminal 2, an administrator terminal 3, and a customer terminal 4.
- the server 1 is an example of an operator evaluation support device.
- the server 1 is connected to the operator terminal 2, the administrator terminal 3, and the customer terminal 4 via a predetermined network such as the Internet, a LAN (Local Area Network), or a public network.
- the server 1 includes a customer DB 11, a question DB 12, an FAQ DB 13, a product DB 14, an operator DB 15, a voice DB 16, an utterance time DB 17, a manual DB 18, a page switching DB 19, a threshold DB 20, a call program 21, an OP (operator) evaluation program 22, and an information search program 23.
- the operator terminal 2 has an operator program 31.
- the administrator terminal 3 has an administrator terminal program 41. The administrator terminal 3 is operated by a supervisor as an example of an administrator.
- the server 1 executes a call program 21, an utterance reproduction program 22, and an information search program 23.
- the server 1 executes the call program 21 to connect the customer terminal 4 and the operator terminal 2, record a call, and transmit customer information to the operator terminal 2.
- the server 1 executes the utterance reproduction program 22 to create information for the supervisor to evaluate the operator and to transmit that information to the administrator terminal 3.
- the server 1 executes the information search program 23 to search for information in the manual DB 18 and the like, transmit the search results to the operator terminal 2, and record the contents of the information transmitted to the operator terminal 2 together with the times at which that information was displayed and hidden on the operator terminal 2.
- the operator terminal 2 executes an operator program 31.
- the operator terminal 2 executes the operator program 31 to display the customer information received from the server 1 and the information retrieval result from the manual DB 18 or the like.
- the administrator terminal 3 executes the administrator terminal program 41.
- the administrator terminal 3 executes the administrator terminal program 41 to display the information, received from the server 1, that the supervisor uses to evaluate the operator.
- the customer terminal 4 may be any device having a telephone function such as a telephone or a PC having a telephone function.
- Customer DB 11 records information about customers.
- the question DB 12 records information related to inquiries from customers.
- the FAQ DB 13 records information relating to frequently asked questions (FAQs).
- the product DB 14 records information about products.
- the operator DB 15 records information on the operator status (busy, available, etc.).
- the voice DB 16 records information related to the voice file.
- the utterance time DB 17 records information about the utterances of the customer and the operator, such as their timing.
- the manual DB 18 records screen data and standard time required for each section (page) of the manual.
- the page switching DB 19 records information related to manual display or non-display operations by the operator.
- the threshold DB 20 records a threshold time described later.
- FIG. 2 is a hardware configuration diagram of an example of a server.
- the server 1 illustrated in FIG. 2 includes an input device 51, a display device 52, and a main body 53.
- the main body 53 includes a main storage device 61, an arithmetic processing device 62, an interface device 63, a recording medium reading device 64, and an auxiliary storage device 65 that are connected to each other via a bus 67.
- An input device 51 and a display device 52 are connected to the bus 67.
- the input device 51, the display device 52, the main storage device 61, the arithmetic processing device 62, the interface device 63, the recording medium reading device 64, and the auxiliary storage device 65 connected to each other via the bus 67 can send and receive data to and from each other under the control of the arithmetic processing device 62.
- the arithmetic processing unit 62 is a central processing unit that controls the operation of the entire server 1.
- the interface device 63 receives data from the operator terminal 2, the administrator terminal 3, the customer terminal 4, and the like, and passes the contents of the data to the arithmetic processing device 62. Further, the interface device 63 transmits data to the operator terminal 2, the administrator terminal 3, the customer terminal 4, and the like in response to an instruction from the arithmetic processing device 62.
- the auxiliary storage device 65 stores an operator evaluation support program that causes a computer to execute at least processing in the operator evaluation support device as part of a program that causes the server 1 to exhibit the same function as the operator evaluation support device.
- the operator evaluation support program includes a call program 21, an utterance reproduction program 22, and an information search program 23.
- the server 1 functions as an operator evaluation support device.
- the operator evaluation support program may be stored in the main storage device 61 accessible to the arithmetic processing device 62.
- the input device 51 receives data input under the control of the arithmetic processing device 62.
- the operator evaluation support program can be recorded in a recording medium 66 that can be read by the server 1.
- the recording medium 66 readable by the server 1 includes a magnetic recording medium, an optical disk, a magneto-optical recording medium, and a semiconductor memory.
- Magnetic recording media include HDDs, flexible disks (FD), magnetic tapes (MT) and the like.
- Optical discs include DVD (Digital Versatile Disc), DVD-RAM, CD-ROM (Compact Disc-Read Only Memory), CD-R (Recordable) / RW (ReWriteable), and the like.
- Magneto-optical recording media include MO (Magneto-Optical disk).
- a portable recording medium 66 such as a DVD or a CD-ROM in which the operator evaluation support program is recorded is sold.
- the server 1 that executes the operator evaluation support program reads the operator evaluation support program with the recording medium reader 64 from, for example, the recording medium 66 on which the operator evaluation support program is recorded.
- the arithmetic processing device 62 stores the read operator evaluation support program in the main storage device 61 or the auxiliary storage device 65.
- the server 1 reads the operator evaluation support program from the main storage device 61 or the auxiliary storage device 65, which is its own storage device, and executes processing according to the operator evaluation support program.
- FIG. 3 is a hardware configuration diagram of an example of an operator terminal and an administrator terminal.
- the hardware configurations of the operator terminal 2 and the administrator terminal 3 are the same. Therefore, here, the hardware configuration of the operator terminal 2 will be described. A description of the hardware configuration of the administrator terminal 3 is omitted.
- the operator terminal 2 includes a display device 71, a pointing device 72, a keyboard 73, a headset 74, and a main body 70. The main body 70 includes a main storage device 81, a CPU 82, an auxiliary storage device 83, an image processing unit 84, an I/O processing unit 85, an audio processing unit 86, and a network card 87 that are connected to each other via an internal bus 88.
- the headset 74 has a speaker 91 and a microphone 92.
- the main storage device 81, the auxiliary storage device 83, the image processing unit 84, the I/O processing unit 85, the audio processing unit 86, and the network card 87 connected to each other via the internal bus 88 can send and receive data to and from each other under the control of the CPU 82.
- the CPU 82 is a central processing unit that controls the operation of the entire operator terminal 2.
- the image processing unit 84 performs various processes necessary for displaying an image on the display device 71.
- the I / O processing unit 85 processes data input / output with the pointing device 72 and the keyboard 73.
- the audio processing unit 86 processes audio data exchanged between the speaker 91 and the microphone 92 of the headset 74.
- the network card 87 receives data from the server 1 and the like, and passes the contents of the data to the CPU 82. Further, the network card 87 transmits data to the server 1 or the like in response to an instruction from the CPU 82.
- an operator program 31 is installed in the operator terminal 2.
- the operator terminal 2 executes an operator program 31.
- the main storage device 81 stores at least an operator program 31 as part of a program for operating the operator terminal 2.
- the CPU 82 reads the operator program 31 from the main storage device 81 and executes it.
- the operator program 31 can be recorded on a recording medium readable by the operator terminal 2.
- the operator terminal 2 that executes the operator program 31 reads the operator program 31 from the recording medium on which the operator program 31 is recorded.
- the CPU 82 stores the operator program 31 in the main storage device 81.
- the CPU 82 reads the operator program 31 from the main storage device 81 and executes processing according to the operator program 31.
- FIG. 4 is a functional configuration diagram of an example of the server.
- the server 1 has a customer DB 11, a question DB 12, a FAQ DB 13, a product DB 14, an operator DB 15, a voice DB 16, an utterance time DB 17, a manual DB 18, a page switching DB 19, and a threshold DB 20.
- the server 1 executes the call program 21 to realize a customer information transmission unit 101, a customer information acquisition unit 102, a call connection unit 103, an operator determination unit 104, a call reception unit 105, a customer voice recording unit 106, an OP voice recording unit 107, and a voice separation unit 108.
- the server 1 executes the utterance reproduction program 22 to realize an utterance voice transmission unit 109, a reproduction utterance determination unit 110, an utterance table sending unit 111, an utterance table creation unit 112, and an utterance table request reception unit 113.
- the server 1 executes the information search program 23 to realize an information transmission unit 114, an information search unit 115, and an information search reception unit 116.
- Customer information transmission means 101 transmits customer information to operator terminal 2.
- the customer information acquisition unit 102 acquires customer information from the customer DB 11.
- Call connection means 103 connects operator terminal 2 and customer terminal 4.
- the operator determining means 104 determines an operator who is not in a call from the operator DB 15.
- the call receiving unit 105 receives a call from the customer terminal 4.
- Customer voice recording means 106 writes customer voice data into a voice file.
- the customer voice recording means 106 writes the utterance start time and utterance end time of the customer in the utterance time DB 17.
- the OP voice recording means 107 writes the operator's voice data in the voice file.
- the OP voice recording means 107 writes the utterance start time and utterance end time of the operator in the utterance time DB 17.
- Voice separation means 108 separates customer voice data and operator voice data.
- the voice separation unit 108 separates, for example, the right channel of voice data as customer voice data and the left channel of voice data as operator voice data.
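- as an illustrative sketch only, not part of the disclosed embodiment, the channel separation performed by the voice separation means 108 can be expressed as follows; the function name and the interleaved stereo sample layout are assumptions:

```python
# Hypothetical sketch of the voice separation step (means 108): the call is
# assumed to be recorded as interleaved stereo PCM samples, with the operator
# on the left channel and the customer on the right channel, as in the example
# given in the text.

def separate_channels(interleaved):
    """Split interleaved stereo samples [L0, R0, L1, R1, ...] into
    (operator_samples, customer_samples)."""
    operator = interleaved[0::2]   # left channel -> operator voice data
    customer = interleaved[1::2]   # right channel -> customer voice data
    return operator, customer

frames = [10, -3, 12, -4, 9, -2]   # three stereo frames
op, cust = separate_channels(frames)
print(op)
print(cust)
```

in a real implementation, the samples would come from the audio processing path rather than a literal list, but the even/odd split is the essence of separating the two speakers.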
- the utterance voice transmission means 109 transmits a voice file to the administrator terminal 3.
- the reproduction utterance determining means 110 determines the utterance to be reproduced and notifies the administrator terminal 3 of it.
- the utterance table sending unit 111 transmits the utterance table to the administrator terminal 3.
- the utterance table creation means 112 creates an utterance table.
- the utterance table request receiving means 113 receives an utterance table display request from the administrator terminal 3.
- the information transmission unit 114 transmits information retrieved from the FAQ DB 13, the product DB 14, and the manual DB 18 to the operator terminal 2.
- the information retrieval unit 115 retrieves information from the FAQ DB 13, the product DB 14, and the manual DB 18.
- the information search accepting unit 116 accepts an information search request from the operator terminal 2.
- FIG. 5 is a configuration diagram of an example of the customer DB.
- the customer DB 11 includes, as data items, a customer ID, a telephone number, a customer, an address, a purchased product model number, and a purchase store.
- the data item “customer ID” is identification information for uniquely identifying a customer.
- the data item “phone number” is the customer's phone number.
- the data item “customer” is the name of the customer.
- the data item “address” is the address of the customer.
- the data item “purchased product model number” is the model number of the product purchased by the customer.
- the data item “purchase store” is a store where the customer purchased the product.
- FIG. 6 is a configuration diagram of an example of the question DB.
- the question DB 12 includes a call ID, an inquiry date, an inquiry time, an inquiry customer, and an operator ID as data items.
- the data item “call ID” is identification information for uniquely identifying a call.
- the data item “inquiry date” is the date on which the call from the customer is received.
- the data item “inquiry time” is the time when the call from the customer is received.
- the data item “inquiry customer” is the customer ID of the customer who made the inquiry.
- the data item “operator ID” is an operator ID of an operator corresponding to an inquiry from a customer.
- FIG. 7 is a configuration diagram of an example of FAQDB.
- the FAQ DB 13 includes product categories, series, question genres, and answers as data items.
- the FAQ DB 13 records a text sentence of an answer for each product category, series, and question genre.
- FIG. 8 is a configuration diagram of an example of the product DB.
- the product DB 14 includes product categories, series, release year, and manual data as data items.
- the product DB 14 records manual data for each product category, series, and release year of the product.
- the manual data is, for example, a file name for uniquely identifying a PDF-format file.
- FIG. 9 is a configuration diagram of an example of the operator DB.
- the operator DB 15 includes an operator ID, an operator name, and a situation as data items.
- the operator DB 15 records a situation such as busy or available for each operator.
- the data item “situation” is information indicating whether the operator can respond to an inquiry from a customer.
- FIG. 10 is a configuration diagram of an example of the voice DB.
- the voice DB 16 includes a call ID, a voice file name, a left channel speaker, and a right channel speaker as data items.
- the voice DB 16 records a voice file name, a left channel speaker, and a right channel speaker for each call ID.
- the data item “voice file name” is the file name of the voice file that records the call of the call corresponding to the call ID.
- the data item “left channel speaker” is a speaker of voice data written in the left channel.
- the data item “right channel speaker” is the speaker of the voice data written in the right channel.
- FIG. 11 is a configuration diagram of an example of the utterance time DB.
- the utterance time DB 17 includes a call ID, time, and contents as data items.
- the utterance time DB 17 records the operator utterance start time, the operator utterance end time, the customer utterance start time, and the customer utterance end time in association with the call ID.
- FIG. 12 is a configuration diagram of an example of the manual DB.
- the manual DB 18 includes manual names, pages, and screen data as data items.
- the data item “manual name” is the name of the manual.
- the data item “page” is a manual page.
- the data item “screen data” is screen data of a manual page.
- the screen data is preferably in a data format (pdf format or the like) that allows text search.
- FIG. 13 is a configuration diagram of an example of the page switching DB.
- the page switching DB 19 includes time, call ID, summary, page, and operation as data items.
- the page switching DB 19 records operations for displaying and hiding information represented by the summary and the page in association with the call ID and time.
- FIG. 14 is a configuration diagram of an example of the threshold DB.
- the threshold DB 20 includes a threshold time as a data item.
- the threshold value DB 20 records a threshold value used in processing to be described later.
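- the DBs described above can be modeled as relational tables; the following is a minimal SQLite sketch of the utterance time DB (FIG. 11) and the threshold DB (FIG. 14), with table and column names that are illustrative assumptions rather than part of the disclosure:

```python
import sqlite3

# Assumed relational sketch of two of the DBs described above: the utterance
# time DB (call ID, time, contents) and the threshold DB (threshold time).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE utterance_time (
    call_id TEXT, time TEXT, contents TEXT)""")
conn.execute("CREATE TABLE threshold (threshold_time REAL)")

# Record an operator utterance start/end pair for one call.
conn.executemany(
    "INSERT INTO utterance_time VALUES (?, ?, ?)",
    [("C001", "10:15:02", "operator utterance start"),
     ("C001", "10:15:09", "operator utterance end")])
conn.execute("INSERT INTO threshold VALUES (?)", (5.0,))

rows = conn.execute(
    "SELECT time, contents FROM utterance_time WHERE call_id = ?",
    ("C001",)).fetchall()
print(rows)
```

keying the utterance records by call ID mirrors how the later flowcharts look up utterance times per call.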
- FIG. 15 is a flowchart of an example of processing by the calling program.
- the call receiving means 105 of the server 1 waits until a call from the customer terminal 4 is received.
- the call receiving means 105 generates a call ID in step S2.
- in step S3, the operator determination unit 104 refers to the operator DB 15 and determines whether there is an operator whose data item “situation” indicates that the operator is not on a call (i.e., is available). If there is no available operator, the operator determining means 104 queues the customer whose call was received in step S4.
- the operator determination means 104 waits until it can determine that there is an operator who is not on a call. If such an operator exists, the call connection means 103 selects one available operator in step S5 and connects that operator's operator terminal 2 with the customer terminal 4.
- in step S6, the operator determination means 104 sets the data item “situation” of the selected operator in the operator DB 15 to busy (on a call).
- step S7 the call connection means 103 starts call recording. The recording process in step S8 is executed after call recording is started.
- the customer information acquisition unit 102 acquires customer information from the customer DB 11 based on the telephone number of the call in step S9.
- the customer information transmitting unit 101 transmits customer information to the operator terminal 2.
- step S10 the operator determining means 104 waits until the call is finished.
- the operator determination unit 104 adds a record relating to the ended call to the question DB 12.
- in step S12, the operator determination unit 104 sets the data item “situation” of the operator who has finished the call to available in the operator DB 15.
- step S13 the call connection unit 103 closes the audio file.
- in step S14, the call connection unit 103 registers a record relating to the voice file of the ended call in the voice DB 16.
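- the operator selection flow of FIG. 15 (steps S3 to S6, and S12) can be sketched as follows; the data structures and status values are illustrative assumptions, not the patent's implementation:

```python
from collections import deque

# Illustrative sketch of operator routing: pick an available operator
# (steps S3, S5, S6) or queue the caller (step S4), and free the operator
# when the call ends (step S12).
operator_db = {"OP01": "busy", "OP02": "available"}
waiting_queue = deque()

def route_call(call_id):
    for op_id, situation in operator_db.items():
        if situation == "available":
            operator_db[op_id] = "busy"   # step S6: mark as on a call
            return op_id                   # step S5: connect the terminals
    waiting_queue.append(call_id)          # step S4: queue the customer
    return None

def end_call(op_id):
    operator_db[op_id] = "available"       # step S12: free the operator

print(route_call("C001"))   # OP02 is free
print(route_call("C002"))   # no one free, so the caller is queued
```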
- FIG. 16 is a flowchart of an example of the recording process.
- the call connection means 103 opens the audio file in step S21.
- the customer voice recording means 106 sets the customer's utterance state as unspeaked.
- the OP voice recording means 107 sets the utterance state of the operator as unspeaked.
- the customer voice recording unit 106 and the OP voice recording unit 107 repeat the processes in steps S24 to S26 until the call is finished.
- in step S24, the customer voice recording means 106 receives 20 milliseconds' worth of the customer voice data separated by the voice separation means 108 and writes it into the voice file.
- likewise, the OP voice recording means 107 receives 20 milliseconds' worth of the operator voice data separated by the voice separation means 108 and writes it into the voice file.
- step S25 the OP voice recording means 107 performs an operator utterance confirmation process described later, confirms the utterance start time and utterance end time of the operator, and writes them in the utterance time DB 17.
- step S26 the customer voice recording means 106 performs a customer utterance confirmation process described later, confirms the customer's utterance start time and utterance end time, and writes them in the utterance time DB 17.
- FIG. 17 is a flowchart of an example of operator utterance confirmation processing.
- the OP voice recording means 107 acquires the maximum volume (v1) of the voice data of the operator for 20 milliseconds.
- step S32 the OP voice recording means 107 compares the maximum volume (v1) of the voice data of the operator with the volume (v0) determined to be silent, and determines whether or not v1> v0.
- in step S33, the OP voice recording means 107 determines whether the operator's utterance state is unspoken. If the operator's utterance state is unspoken, the OP voice recording means 107 records the current time in the utterance time DB 17 as the operator's utterance start time in step S34.
- in step S35, the OP voice recording means 107 finishes the operator utterance confirmation process of FIG. 17. If the operator's utterance state is not unspoken in step S33 (that is, the operator is already speaking), the OP voice recording means 107 likewise ends the operator utterance confirmation process of FIG. 17.
- the OP voice recording unit 107 determines in step S36 whether or not the operator's speech state is speaking. If the operator's speech state is speaking, the OP voice recording means 107 records the current time in the speech time DB 17 as the operator's speech end time in step S37.
- in step S38, the OP voice recording means 107 ends the operator utterance confirmation process of FIG. 17. If the operator's utterance state is not speaking in step S36, the OP voice recording means 107 likewise ends the operator utterance confirmation process of FIG. 17.
- FIG. 18 is a flowchart of an example of a customer utterance confirmation process.
- the customer voice recording means 106 acquires the maximum volume (v1) of customer voice data for 20 milliseconds.
- step S42 the customer voice recording means 106 compares the maximum volume (v1) of the customer's voice data with the volume (v0) determined to be silent, and determines whether or not v1> v0.
- the customer voice recording unit 106 determines whether or not the customer's utterance state is unspoken in step S43. If the customer's utterance state is not yet uttered, the customer voice recording means 106 records the current time in the utterance time DB 17 as the customer's utterance start time in step S44.
- in step S45, the customer voice recording means 106 finishes the customer utterance confirmation process of FIG. 18. If the customer's utterance state is not unspoken in step S43 (that is, the customer is already speaking), the customer voice recording means 106 likewise ends the customer utterance confirmation process of FIG. 18.
- the customer voice recording means 106 determines in step S46 whether or not the customer's speech state is speaking. If the customer's utterance state is speaking, the customer voice recording means 106 records the current time in the utterance time DB 17 as the customer's utterance end time in step S47.
- step S48 the customer voice recording means 106 sets the customer's utterance state to unspoken, and then ends the customer utterance confirmation process of FIG.
- in step S46, if the customer's utterance state is not speaking, the customer voice recording means 106 ends the customer utterance confirmation process of FIG. 18.
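- the utterance confirmation processes of FIG. 17 and FIG. 18 share the same volume-threshold state machine; the following sketch illustrates it for a sequence of 20-millisecond chunks. The explicit "speaking"/"unspoken" bookkeeping is inferred from step S48, and the names and threshold value are assumptions:

```python
# Sketch of the utterance confirmation logic: for every 20 ms chunk, compare
# the chunk's maximum volume v1 with the silence threshold v0 and record a
# start time on a silence-to-speech transition and an end time on a
# speech-to-silence transition.

V0 = 100  # volume regarded as silence (assumed value)

def confirm_utterance(chunk, state, now, log):
    v1 = max(abs(s) for s in chunk)      # steps S31/S41: maximum volume
    if v1 > V0:                          # steps S32/S42: louder than silence
        if state == "unspoken":          # steps S33/S43
            log.append(("start", now))   # steps S34/S44: utterance start time
            state = "speaking"
    else:
        if state == "speaking":          # steps S36/S46
            log.append(("end", now))     # steps S37/S47: utterance end time
            state = "unspoken"           # step S48: reset the state
    return state

log, state = [], "unspoken"
chunks = [[5, -8], [300, -250], [280, 90], [4, -6]]  # four 20 ms chunks
for t, chunk in enumerate(chunks):
    state = confirm_utterance(chunk, state, t * 20, log)
print(log)
```

running the loop over the four chunks yields one start/end pair, which is exactly the record shape written to the utterance time DB 17.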
- FIG. 19 is a flowchart of an example of processing by the information search program.
- the information search accepting unit 116 determines whether a display request for a manual, an FAQ, or product information has been received from the operator terminal 2.
- the information search unit 115 searches for information according to the display request in step S52. For example, when a manual display request is received, the information search means 115 searches the manual DB 18 for information according to the display request.
- the information transmission unit 114 transmits information in accordance with the display request to the operator terminal 2.
- step S53 the information search unit 115 records the information record in accordance with the display request transmitted to the operator terminal 2 in the page switching DB 19, and then ends the process of the flowchart of FIG.
- the record of information in accordance with the display request includes the time when the display request is made and the target for which the display request is made.
- in the page switching DB 19, a target for which a display request has been made is represented by a summary and a page.
- in step S54, the information search means 115 determines whether a non-display request for a manual, an FAQ, or product information has been received from the operator terminal 2.
- the information search unit 115 records the information record in accordance with the non-display request in the page switching DB 19 in step S55, and then ends the process of the flowchart of FIG.
- the record of information in accordance with the non-display request includes the time when the non-display request was made and the target for which the non-display request was made.
- in the page switching DB 19 of FIG. 13, a target for which a non-display request has been made is represented by a summary and a page. If the information retrieval unit 115 has not received a non-display request, the process of the flowchart of FIG. 19 ends.
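- the recording of display and non-display operations in the page switching DB 19 (FIG. 19, steps S53 and S55) can be sketched as follows; the record fields mirror the data items of FIG. 13, while the function name and values are illustrative assumptions:

```python
from datetime import datetime

# Sketch of how display/non-display operations could be recorded in the page
# switching DB 19: each record holds the time, call ID, summary, page, and
# operation, matching the data items of FIG. 13.
page_switching_db = []

def record_page_operation(call_id, summary, page, operation):
    page_switching_db.append({
        "time": datetime.now().isoformat(timespec="seconds"),
        "call_id": call_id,
        "summary": summary,       # what the displayed information was
        "page": page,
        "operation": operation,   # "display" or "hide"
    })

record_page_operation("C001", "printer manual", 12, "display")
record_page_operation("C001", "printer manual", 12, "hide")
print([r["operation"] for r in page_switching_db])
```

pairing each "display" record with its later "hide" record is what lets the utterance reproduction program recover how long each screen was shown.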
- FIG. 20 is a flowchart of an example of processing by the utterance reproduction program.
- the utterance table request receiving unit 113 stands by until an utterance table display request is received from the administrator terminal 3.
- the utterance table request receiving means 113 transmits a processing range selection screen to be described later to the administrator terminal 3 in step S62.
- the administrator terminal 3 that has received the processing range selection screen displays a processing range selection screen described later and allows the administrator to input processing range selection information.
- the administrator terminal 3 transmits the selection information of the processing range input from the administrator to the server 1.
- the utterance table request receiving unit 113 receives processing range selection information from the administrator terminal 3.
- In step S64, the utterance table creation unit 112 obtains, from the question DB 12, the record of the first call ID corresponding to the processing range selection information received from the administrator terminal 3.
- In step S65, the utterance table creation unit 112 acquires the operator name from the operator DB 15 using the operator ID included in the record acquired from the question DB 12.
- In step S66, the utterance table creation unit 112 uses the call ID included in the record acquired from the question DB 12 to acquire, from the page switching DB 19, the times (operation times) at which display or non-display requests were made.
- In step S67, the utterance table creation unit 112 uses the call ID included in the record acquired from the question DB 12 to acquire the start time and end time of each utterance from the utterance time DB 17.
- In step S68, the utterance table creation unit 112 performs an utterance table creation process described later.
- In step S69, the utterance table sending unit 111 transmits the utterance table to the administrator terminal 3 that issued the utterance table display request.
- In step S70, the utterance voice transmitting unit 109 acquires a voice file name from the voice DB 16 using the call ID included in the record acquired from the question DB 12, and transmits the voice file having that name to the administrator terminal 3.
- In step S71, the reproduction utterance determination unit 110 determines whether selection of a reproduction utterance has been received from the administrator terminal 3.
- If the selection has been received, the reproduction utterance determination unit 110 performs a reproduction utterance determination process (described later) in step S72; otherwise, it skips that process.
- In step S73, the utterance table creation unit 112 determines whether the record of the next call ID corresponding to the processing range selection information received from the administrator terminal 3 exists in the question DB 12.
- If it exists, the utterance table creation unit 112 acquires the record of the next call ID from the question DB 12 in step S64 and repeats the processing from step S65. If there is no record of the next call ID, the utterance table creation unit 112 ends the process of the flowchart of FIG. 20.
- FIG. 21 is an image diagram of an example of a processing range selection screen.
- the processing range selection screen 200 is used to allow the administrator to input processing range selection information.
- a call ID or a condition is specified as processing range selection information.
- FIG. 22 is a flowchart of an example of an utterance table creation process.
- the utterance table creation unit 112 draws the operator name acquired in step S65 on the utterance table.
- the utterance table creation unit 112 selects one utterance for which the start time and end time have been acquired in step S67.
- In step S83, the utterance table creation unit 112 repeats the processing of steps S84 to S87 until it determines that no further utterance whose start time and end time were acquired in step S67 can be selected. While such an utterance can be selected, the utterance table creation unit 112 determines its display position from the utterance start time in step S84.
- In step S85, the utterance table creation unit 112 determines the size of the box (utterance frame) representing the utterance on the utterance table from the utterance duration (utterance end time − utterance start time).
- In step S86, the utterance table creation unit 112 draws the utterance frame on the utterance table.
- In step S87, the utterance table creation unit 112 selects the next utterance whose start time and end time were acquired in step S67.
- When it is determined in step S83 that no further utterance can be selected, the utterance table creation unit 112 proceeds to step S88 and selects one operation time acquired in step S66.
- In step S90, the utterance table creation unit 112 draws vertical broken lines at the positions of the display request time (display start time) and the non-display request time (display end time).
- In step S91, the utterance table creation unit 112 draws an arrow between the vertical broken lines drawn in step S90, together with the page and outline.
- In step S92, the utterance table creation unit 112 selects the next operation time acquired in step S66.
- The utterance table creation unit 112 repeats the processing of steps S90 to S92 until it determines in step S89 that no further operation time can be selected, and then ends the process of the flowchart shown in FIG. 22.
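The layout rules of steps S84 and S85 (display position from the utterance start time, frame size from the utterance duration) can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; the pixel scale and function name are assumptions.

```python
def layout_utterance_frames(utterances, call_start, px_per_sec=10):
    """Compute the position and width of each utterance frame.

    utterances: list of (speaker, start_sec, end_sec) tuples.
    call_start: start time of the call in seconds.
    The horizontal position follows the utterance start time (step S84)
    and the frame width follows the utterance duration, i.e.
    end time minus start time (step S85).
    """
    frames = []
    for speaker, start, end in utterances:
        x = (start - call_start) * px_per_sec      # display position (step S84)
        width = (end - start) * px_per_sec         # frame size (step S85)
        frames.append({"speaker": speaker, "x": x, "width": width})
    return frames

# Two utterances: operator speaks for 4 s, then the customer for 4 s.
frames = layout_utterance_frames(
    [("operator", 0, 4), ("customer", 5, 9)], call_start=0)
```

Drawing the frames (step S86) and the vertical broken lines for the operation times (step S90) would then be a matter of rendering these computed rectangles along the same time axis.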
- FIG. 23 is a screen image example of an utterance table.
- The utterance table 300 in FIG. 23 includes the operator name, vertical broken lines 301 indicating the positions of the display start and end times, an arrow 302 drawn between the vertical broken lines 301, a page and outline 303, and operator and customer utterance frames 304.
- In the utterance table 300, clicking one of the operator or customer utterance frames 304 with a mouse or the like reproduces the corresponding utterance, selected as described later, from the recorded operator and customer speech.
- The elapsed time shown is a relative time rather than the actual time; switching between real time and relative time can be done easily.
- The width of the threshold time used in the reproduction utterance determination process described later is indicated by vertical broken lines 305. The vertical broken lines 305 need not be displayed.
- When the utterance frame 306 is selected, the utterances corresponding to the utterance frames 306 and 307 are selected and reproduced by the reproduction utterance determination process described later.
- Because the selected utterance frame 306 falls within the threshold time range around the vertical broken line 301 indicating the display start or end time, the utterance frame 307, which falls within the same threshold time range, is also selected, and the utterances corresponding to both frames 306 and 307 are reproduced.
- FIG. 24 is a screen image of another example of the utterance table, showing the case in which the utterance frame 307 is selected in the utterance table 300 of FIG. 23.
- The utterance table 300 in FIG. 24 shows that the utterances corresponding to the utterance frames 306 and 307 are selected and reproduced by the reproduction utterance determination process described later.
- Because the selected utterance frame 307 falls within the threshold time range around the vertical broken line 301 indicating the display start or end time, the utterance frame 306, which falls within the same threshold time range, is also selected, and the utterances corresponding to both frames 306 and 307 are reproduced.
- FIG. 25 is a screen image of another example of the utterance table.
- The utterance table 300 in FIG. 25 includes the operator name, vertical broken lines 301 indicating the positions of the display start and end times, an arrow 302 drawn between the vertical broken lines 301, a page and outline 303, and operator and customer utterance frames 304.
- FIG. 25 shows that when the utterance frame 308 is selected, the utterances corresponding to the utterance frames 308 and 309 are selected and reproduced by the reproduction utterance determination process described later.
- Because the selected utterance frame 308 falls within the threshold time range around the vertical broken line 301 indicating the display start or end time, the utterance frame 309, which falls within the same threshold time range, is also selected, and the utterances corresponding to both frames 308 and 309 are reproduced.
- FIG. 26 is a screen image of another example of the utterance table.
- The utterance table 300 in FIG. 26 shows the case in which the utterance frame 309 is selected in the utterance table 300 of FIG. 25.
- The utterance table 300 in FIG. 26 shows that the utterances corresponding to the utterance frames 308 and 309 are selected and reproduced by the reproduction utterance determination process described later.
- Because the selected utterance frame 309 falls within the threshold time range around the vertical broken line 301 indicating the display start or end time, the utterance frame 308, which falls within the same threshold time range, is also selected, and the utterances corresponding to both frames 308 and 309 are reproduced.
- In the utterance tables 300 of FIGS. 23 to 26, operator and customer utterances that fall within the threshold time range around the page switching time are judged to be related, so selecting either the operator's or the customer's utterance reproduces both, reducing the utterance selection operations.
- FIG. 27 is a flowchart of an example of the playback utterance determination process.
- The reproduction utterance determination unit 110 acquires, from the utterance time DB 17, the start time and end time of the reproduction utterance selected at the administrator terminal 3.
- In step S102, the reproduction utterance determination unit 110 sets the start time of the reproduction utterance as the utterance start time and its end time as the utterance end time.
- The reproduction utterance determination unit 110 then obtains the utterance center time from equation (1). Here, the utterance duration is the time from the utterance start time to the utterance end time.
- In step S103, the reproduction utterance determination unit 110 acquires, from the page switching DB 19, the page switching operation time closest to the utterance center time.
- In step S104, the reproduction utterance determination unit 110 determines whether the page switching period, obtained by extending the page switching operation time by the threshold time before and after it, overlaps the reproduction utterance.
- If they overlap, the reproduction utterance determination unit 110 determines in step S105 whether an utterance of the other party falls between the start of the page switching period and the utterance start time.
- Here, the other party is the operator when the reproduction utterance is the customer's utterance, and the customer when the reproduction utterance is the operator's utterance.
- If an utterance of the other party falls between the start of the page switching period and the utterance start time, the reproduction utterance determination unit 110 sets, in step S106, the utterance start time to the start time of the other party's utterance whose start time falls within the page switching period, and then performs step S107. Otherwise, it performs step S107 without performing step S106.
- In step S107, the reproduction utterance determination unit 110 determines whether an utterance of the other party falls between the utterance end time and the end of the page switching period.
- If so, the reproduction utterance determination unit 110 sets, in step S108, the utterance end time to the end time of the other party's utterance whose end time falls within the page switching period, and then ends the process of the flowchart shown in FIG. 27.
- If the page switching period and the reproduction utterance do not overlap in step S104, the reproduction utterance determination unit 110 ends the process of the flowchart shown in FIG. 27.
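As a rough sketch of the reproduction utterance determination of steps S102 to S108: the selected utterance is widened so that it also covers any partner utterance that falls inside the page switching period. Equation (1) is not reproduced in this text; computing the center time below as the start time plus half the utterance duration is a plausible reading of it, and all function and variable names are illustrative assumptions.

```python
def determine_reproduction_span(selected, others, switch_times, threshold):
    """Return the (start, end) span to reproduce for a selected utterance.

    selected: (start, end) of the utterance picked by the administrator.
    others: list of (start, end) utterances of the other party.
    switch_times: page switching operation times (from the page switching DB).
    threshold: threshold time (from the threshold DB).
    """
    start, end = selected
    center = start + (end - start) / 2.0             # presumed equation (1)
    # Step S103: page switching operation time closest to the center time.
    nearest = min(switch_times, key=lambda t: abs(t - center))
    win_start, win_end = nearest - threshold, nearest + threshold
    # Step S104: no overlap -> reproduce only the selected utterance.
    if end < win_start or start > win_end:
        return start, end
    # Steps S105-S106: a partner utterance starting inside the window,
    # before the selected utterance -> extend the start backwards.
    for o_start, o_end in others:
        if win_start <= o_start < start:
            start = o_start
    # Steps S107-S108: a partner utterance ending inside the window,
    # after the selected utterance -> extend the end forwards.
    for o_start, o_end in others:
        if end < o_end <= win_end:
            end = o_end
    return start, end

# Selected utterance (10, 14), page switch at t=13, threshold 6 s:
# the partner utterance (15, 18) ends inside the window, so it is included.
span = determine_reproduction_span(
    selected=(10, 14), others=[(15, 18)], switch_times=[13], threshold=6)
```

This matches the behavior shown in FIGS. 23 to 26: selecting either of two related frames causes both utterances to be reproduced.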
- FIG. 28 is a functional configuration diagram of an example of an operator terminal.
- the operator terminal 2 includes information search requesting means 121, information receiving means 122, information display means 123, customer information acquisition means 124, customer information display means 125, voice output means 126, voice input means 127, and call communication means 128.
- The operator terminal 2 realizes the information search request unit 121, information receiving unit 122, information display unit 123, customer information acquisition unit 124, customer information display unit 125, voice output unit 126, voice input unit 127, and call communication unit 128 by executing a program.
- The information search request unit 121 sends an information search request to the server 1.
- the information receiving unit 122 receives information search results from the server 1.
- the information display means 123 displays the search result on the display device 71 or the like.
- Customer information acquisition means 124 receives customer information from server 1.
- the customer information display means 125 displays customer information on the display device 71 and the like.
- the sound output means 126 outputs sound through the speaker 91 or the like.
- the voice input unit 127 inputs voice from the microphone 92 or the like.
- the call communication means 128 communicates with the customer terminal 4.
- FIG. 29 is a functional configuration diagram of an example of the administrator terminal.
- The administrator terminal 3 includes an utterance reproduction unit 131, an utterance selection unit 132, an utterance table reception unit 133, an utterance table request unit 134, and an utterance table display unit 135.
- the utterance reproduction means 131 reproduces the reproduction utterance selected by the server 1.
- The utterance selection unit 132 accepts selection of an utterance when an operator or customer utterance frame 304 in the utterance table 300 is clicked with a mouse or the like.
- the utterance table receiving unit 133 receives the utterance table from the server 1.
- the utterance table requesting unit 134 requests the server 1 to display an utterance table.
- the utterance table display means 135 displays the utterance table 300 as shown in FIGS. 23 to 26 on the display device 71 or the like.
- FIG. 30 is a flowchart of an example of processing of the administrator terminal.
- The utterance table requesting unit 134 of the administrator terminal 3 requests the server 1 to create an utterance table.
- The utterance table requesting unit 134 then receives the processing range selection screen 200 from the server 1.
- In step S123, the utterance table requesting unit 134 displays the processing range selection screen 200 and prompts the administrator to input processing range selection information.
- The utterance table requesting unit 134 repeats the processing of step S123 until it determines that the processing range selection information has been input by the administrator.
- When the information has been input, the utterance table requesting unit 134 transmits the processing range selection information (a call ID designation or a condition designation such as a date range or operator ID) to the server 1.
- The utterance table receiving unit 133 repeats the process of step S125 until the data and voice files of the utterance table 300 are received from the server 1.
- Upon receiving the data and voice files of the utterance table 300 from the server 1, the utterance table display unit 135 displays the utterance table 300 on the display device 71 or the like in step S126 and temporarily stores the voice files.
- In step S127, the utterance selection unit 132 determines whether an utterance has been selected by clicking one of the frames 304 of the utterance table 300 with a mouse or the like. If so, the utterance selection unit 132 notifies the server 1 of the selected reproduction utterance, and the utterance reproduction unit 131 repeats the process of step S129 until the utterance start time and utterance end time of the reproduction utterance are received from the server 1.
- Upon receiving the utterance start time and utterance end time of the reproduction utterance from the server 1, the utterance reproduction unit 131 seeks, in step S130, to the reproduction start position of the voice file corresponding to the utterance start time and starts reproduction. In step S131, the utterance reproduction unit 131 stops reproduction of the voice file once the utterance end time has elapsed and ends the process of the flowchart shown in FIG. 30. The process of the flowchart also ends when the utterance selection unit 132 determines in step S127 that no utterance has been selected.
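Steps S130 and S131 amount to playing a single segment of the recorded voice file. A minimal sketch, assuming playback positions are expressed as offsets in seconds from the start of the file; the function name and parameters are illustrative.

```python
def playback_segment(utterance_start, utterance_end, file_start=0.0):
    """Compute the seek offset and duration for reproducing one utterance.

    utterance_start / utterance_end: times received from the server
    (the values awaited in step S129).
    file_start: time corresponding to offset 0 of the voice file.
    """
    offset = utterance_start - file_start        # step S130: seek position
    duration = utterance_end - utterance_start   # step S131: stop after this
    return offset, duration

# Utterance from t=12.0 s to t=17.5 s, in a file whose recording began at t=2.0 s.
offset, duration = playback_segment(12.0, 17.5, file_start=2.0)
```

An actual player would seek to `offset`, start playback, and stop once `duration` has elapsed, as described for steps S130 and S131.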
- As described above, the server 1 of the present embodiment can create the utterance table 300 shown in FIGS. 23 to 26 and send it to the administrator terminal 3.
- The utterance table 300 links a graph showing the utterance time zones of the operator and the customer with the objects that the operator displayed on the screen of the operator terminal 2.
- The server 1 of the present embodiment judges that operator and customer utterances falling within the threshold time range around the page switching time are related, so selecting either the operator's or the customer's utterance reproduces both.
- Using the utterance table 300, the administrator can see the display time zones and contents of information such as the manual, and can therefore predict the content of an utterance to some extent before reproducing the recorded operator and customer utterances.
- Because the administrator can reproduce both the operator's and the customer's presumably related utterances within the threshold time range around the page switching time by selecting only one of them, the utterance selection operations are reduced.
- the utterance recording means described in the claims corresponds to the utterance time DB 17.
- the display recording means corresponds to the page switching DB 19.
- the content recording means corresponds to the manual DB 18.
- the providing means corresponds to the utterance table sending means 111.
- the determining means corresponds to the reproduction utterance determining means 110.
- the call information corresponds to the utterance table 300.
- the threshold recording means corresponds to the threshold DB 20.
- the operator evaluation support program corresponds to the utterance reproduction program 22.
Description
2 Operator terminal
3 Administrator terminal
4 Customer terminal
11 Customer DB
12 Question DB
13 FAQ DB
14 Product DB
15 Operator DB
16 Voice DB
17 Utterance time DB
18 Manual DB
19 Page switching DB
20 Threshold DB
21 Call program
22 Utterance reproduction program
23 Information search program
31 Operator program
41 Administrator terminal program
51 Input device
52 Display device
53 Main body
61 Main memory
62 Arithmetic processing unit
63 Interface device
64 Recording medium reader
65 Auxiliary storage device
66 Recording medium
67 Bus
70 Main body
71 Display device
72 Pointing device
73 Keyboard
74 Headset
81 Main memory
82 CPU
83 Auxiliary storage device
84 Image processing unit
85 I/O processing unit
86 Voice processing unit
87 Network card
88 Internal bus
91 Speaker
92 Microphone
101 Customer information transmitting means
102 Customer information acquisition means
103 Call connection means
104 Operator determination means
105 Call receiving means
106 Customer voice recording means
107 Operator voice recording means
108 Voice separation means
109 Utterance voice transmitting means
110 Reproduction utterance determination means
111 Utterance table sending means
112 Utterance table creation means
113 Utterance table request receiving means
114 Information transmitting means
115 Information search means
116 Information search receiving means
121 Information search request means
122 Information receiving means
123 Information display means
124 Customer information acquisition means
125 Customer information display means
126 Voice output means
127 Voice input means
128 Call communication means
131 Utterance reproduction means
132 Utterance selection means
133 Utterance table receiving means
134 Utterance table request means
135 Utterance table display means
200 Processing range selection screen
300 Utterance table
301, 305 Vertical broken lines
302 Arrow
303 Page and outline
304-309 Utterance frames
Claims (12)
1. An operator evaluation support device that supports evaluation of an operator based on a call in which the operator handles a telephone call from a customer, the device comprising:
utterance recording means for recording the utterance times of the customer and the operator in the call;
display recording means for recording, in association with display screen identification information that identifies a display screen displayed on an operator terminal used by the operator during the call, the display time during which the display screen was displayed;
content recording means for recording, in association with the display screen identification information of a display screen displayed on the operator terminal, screen content information that describes the content of the display screen;
providing means for referring to the utterance recording means, the display recording means, and the content recording means for the call, creating call information that indicates the utterance times of the customer and the operator, the display times of the display screens displayed on the operator terminal, and the screen content information corresponding to the display screens, and transmitting the call information and voice files of the utterances of the customer and the operator included in the call information to an administrator terminal used by an administrator who evaluates the operator; and
determining means for accepting, from the administrator terminal, selection of an utterance of the customer or the operator included in the call information and, when the selected utterance falls within a predetermined time of the switching timing of a display screen displayed on the operator terminal, determining that another utterance falling within the predetermined time is also to be reproduced together with the selected utterance.
2. The operator evaluation support device according to claim 1, wherein the determining means refers to the utterance recording means to read the utterance time of the selected utterance, refers to the display recording means to read the switching timing of the display screen, and, when the utterance time of the selected utterance falls within the predetermined time of the switching timing of the display screen and an utterance of the other party to the selected utterance also falls within the predetermined time of the switching timing, determines that the other party's utterance is to be reproduced together with the selected utterance.
3. The operator evaluation support device according to claim 1 or 2, further comprising threshold recording means for recording a threshold time, wherein the determining means refers to the threshold recording means and sets, as the predetermined time, the range from the threshold time before to the threshold time after the switching timing of the display screen displayed on the operator terminal.
4. The operator evaluation support device according to any one of claims 1 to 3, wherein the providing means displays, for the call, the utterance times of the customer and the operator and the display times of the display screens with reference to a time axis.
5. An operator evaluation support method in which a computer supports evaluation of an operator based on a call in which the operator handles a telephone call from a customer, the method comprising:
identifying the display screen identification information and display times of display screens displayed on an operator terminal used by the operator during the call, by referring to display recording means that records, in association with display screen identification information identifying a display screen displayed during the call, the display time during which the display screen was displayed;
identifying the screen content information corresponding to the identified display screen identification information, by referring to content recording means that records, in association with the display screen identification information of a display screen displayed on the operator terminal, screen content information describing the content of the display screen;
identifying the utterance times of the customer and the operator in the call, by referring to utterance recording means that records the utterance times of the customer and the operator in the call;
creating call information that indicates the identified utterance times of the customer and the operator, the display times of the display screens displayed on the operator terminal, and the screen content information corresponding to the display screens, and transmitting the call information and voice files of the utterances of the customer and the operator included in the call information to an administrator terminal used by an administrator who evaluates the operator; and
accepting, from the administrator terminal, selection of an utterance of the customer or the operator included in the call information and, when the selected utterance falls within a predetermined time of the switching timing of a display screen displayed on the operator terminal, determining that another utterance falling within the predetermined time is also to be reproduced together with the selected utterance.
6. The operator evaluation support method according to claim 5, wherein the computer refers to the utterance recording means to read the utterance time of the selected utterance, refers to the display recording means to read the switching timing of the display screen, and, when the utterance time of the selected utterance falls within the predetermined time of the switching timing of the display screen and an utterance of the other party to the selected utterance also falls within the predetermined time of the switching timing, determines that the other party's utterance is to be reproduced together with the selected utterance.
7. The operator evaluation support method according to claim 5 or 6, wherein the computer refers to threshold recording means that records a threshold time and sets, as the predetermined time, the range from the threshold time before to the threshold time after the switching timing of the display screen displayed on the operator terminal.
8. The operator evaluation support method according to any one of claims 5 to 7, wherein the computer displays, for the call, the utterance times of the customer and the operator and the display times of the display screens with reference to a time axis.
9. A storage medium recording an operator evaluation support program that causes a computer to execute an operator evaluation support method for supporting evaluation of an operator based on a call in which the operator handles a telephone call from a customer, the operator evaluation support program causing the computer to:
identify the display screen identification information and display times of display screens displayed on an operator terminal used by the operator during the call, by referring to display recording means that records, in association with display screen identification information identifying a display screen displayed during the call, the display time during which the display screen was displayed;
identify the screen content information corresponding to the identified display screen identification information, by referring to content recording means that records, in association with the display screen identification information of a display screen displayed on the operator terminal, screen content information describing the content of the display screen;
identify the utterance times of the customer and the operator in the call, by referring to utterance recording means that records the utterance times of the customer and the operator in the call;
create call information that indicates the identified utterance times of the customer and the operator, the display times of the display screens displayed on the operator terminal, and the screen content information corresponding to the display screens, and transmit the call information and voice files of the utterances of the customer and the operator included in the call information to an administrator terminal used by an administrator who evaluates the operator; and
accept, from the administrator terminal, selection of an utterance of the customer or the operator included in the call information and, when the selected utterance falls within a predetermined time of the switching timing of a display screen displayed on the operator terminal, determine that another utterance falling within the predetermined time is also to be reproduced together with the selected utterance.
10. The storage medium according to claim 9, recording the operator evaluation support program that further causes the computer to refer to the utterance recording means to read the utterance time of the selected utterance, refer to the display recording means to read the switching timing of the display screen, and, when the utterance time of the selected utterance falls within the predetermined time of the switching timing of the display screen and an utterance of the other party to the selected utterance also falls within the predetermined time of the switching timing, determine that the other party's utterance is to be reproduced together with the selected utterance.
11. The storage medium according to claim 9 or 10, recording the operator evaluation support program that further causes the computer to refer to threshold recording means that records a threshold time and set, as the predetermined time, the range from the threshold time before to the threshold time after the switching timing of the display screen displayed on the operator terminal.
12. The storage medium according to any one of claims 9 to 11, recording the operator evaluation support program that further causes the computer to display, for the call, the utterance times of the customer and the operator and the display times of the display screens with reference to a time axis.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013504494A JP5621910B2 (ja) | 2011-03-17 | 2011-03-17 | オペレータ評価支援装置、オペレータ評価支援方法及びオペレータ評価支援プログラムを記録した記憶媒体 |
PCT/JP2011/056462 WO2012124120A1 (ja) | 2011-03-17 | 2011-03-17 | オペレータ評価支援装置、オペレータ評価支援方法及びオペレータ評価支援プログラムを記録した記憶媒体 |
CN201180069294.8A CN103443810B (zh) | 2011-03-17 | 2011-03-17 | 话务员评价支援装置及话务员评价支援方法 |
US14/020,194 US8908856B2 (en) | 2011-03-17 | 2013-09-06 | Operator evaluation support device and operator evaluation support method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/056462 WO2012124120A1 (ja) | 2011-03-17 | 2011-03-17 | オペレータ評価支援装置、オペレータ評価支援方法及びオペレータ評価支援プログラムを記録した記憶媒体 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/020,194 Continuation US8908856B2 (en) | 2011-03-17 | 2013-09-06 | Operator evaluation support device and operator evaluation support method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012124120A1 true WO2012124120A1 (ja) | 2012-09-20 |
Family
ID=46830247
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/056462 WO2012124120A1 (ja) | 2011-03-17 | 2011-03-17 | オペレータ評価支援装置、オペレータ評価支援方法及びオペレータ評価支援プログラムを記録した記憶媒体 |
Country Status (4)
Country | Link |
---|---|
US (1) | US8908856B2 (ja) |
JP (1) | JP5621910B2 (ja) |
CN (1) | CN103443810B (ja) |
WO (1) | WO2012124120A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018045639A (ja) * | 2016-09-16 | 2018-03-22 | 株式会社東芝 | 対話ログ分析装置、対話ログ分析方法およびプログラム |
JP2019139276A (ja) * | 2018-02-06 | 2019-08-22 | ニューロネット株式会社 | Web共有システム、プログラム |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10464386B2 (en) * | 2016-12-21 | 2019-11-05 | Kawasaki Jukogyo Kabushiki Kaisha | Suspension structure of utility vehicle |
JP2019101385A (ja) * | 2017-12-08 | 2019-06-24 | 富士通株式会社 | 音声処理装置、音声処理方法及び音声処理用コンピュータプログラム |
US10917524B1 (en) | 2019-10-30 | 2021-02-09 | American Tel-A-Systems, Inc. | Methods for auditing communication sessions |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005072896A (ja) * | 2003-08-22 | 2005-03-17 | Fujitsu Ltd | 音声記録装置 |
JP2005293180A (ja) * | 2004-03-31 | 2005-10-20 | Fujitsu Ltd | オペレータ支援プログラム,オペレータ支援システム,及び、オペレータ支援方法 |
JP2006171579A (ja) * | 2004-12-17 | 2006-06-29 | Fujitsu Ltd | 音声再生プログラムおよびその記録媒体、音声再生装置ならびに音声再生方法 |
JP2007058767A (ja) * | 2005-08-26 | 2007-03-08 | Canon Inc | 発話記録作成システム |
JP2007184699A (ja) * | 2006-01-05 | 2007-07-19 | Fujitsu Ltd | 音声データの聞き出し部分特定処理プログラムおよび処理装置 |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7305082B2 (en) * | 2000-05-09 | 2007-12-04 | Nice Systems Ltd. | Method and apparatus for quality assurance in a multimedia communications environment |
JP2001331624A (ja) | 2000-05-22 | 2001-11-30 | Hitachi Ltd | 顧客対応者の技術レベル評価方法及びそのシステム |
JP3872263B2 (ja) | 2000-08-07 | 2007-01-24 | 富士通株式会社 | Ctiサーバ及びプログラム記録媒体 |
JP2003122890A (ja) | 2001-10-16 | 2003-04-25 | Hochiki Corp | 相談情報処理装置、相談情報処理方法、および、プログラム |
JP4115114B2 (ja) * | 2001-10-24 | 2008-07-09 | 興和株式会社 | ネットワーク通話の品質評価設備 |
JP4308047B2 (ja) | 2004-03-09 | 2009-08-05 | 富士通株式会社 | 業務スキル推定装置、業務スキル推定方法および業務スキル推定プログラム |
JP2006208482A (ja) | 2005-01-25 | 2006-08-10 | Sony Corp | 会議の活性化を支援する装置,方法,プログラム及び記録媒体 |
US8130937B1 (en) * | 2005-06-21 | 2012-03-06 | Sprint Spectrum L.P. | Use of speech recognition engine to track and manage live call center calls |
JP4522345B2 (ja) | 2005-09-06 | 2010-08-11 | 富士通株式会社 | 電話業務検査システム及びそのプログラム |
CN100566360C (zh) * | 2006-01-19 | 2009-12-02 | 北京讯鸟软件有限公司 | 实现坐席服务水平评价的呼叫中心服务方法 |
US8160233B2 (en) | 2006-02-22 | 2012-04-17 | Verint Americas Inc. | System and method for detecting and displaying business transactions |
JP2007288242A (ja) | 2006-04-12 | 2007-11-01 | Nippon Telegr & Teleph Corp <Ntt> | オペレータ評価方法、装置、オペレータ評価プログラム、記録媒体 |
JP2007286097A (ja) | 2006-04-12 | 2007-11-01 | Nippon Telegr & Teleph Corp <Ntt> | 音声受付クレーム検出方法、装置、音声受付クレーム検出プログラム、記録媒体 |
CN101662549B (zh) * | 2009-09-09 | 2013-02-27 | 中兴通讯股份有限公司 | 一种基于语音的客户评价系统及客户评价方法 |
- 2011-03-17: WO application PCT/JP2011/056462 filed (WO2012124120A1, active Application Filing)
- 2011-03-17: CN application CN201180069294.8A (CN103443810B, not_active Expired - Fee Related)
- 2011-03-17: JP application JP2013504494A (JP5621910B2, not_active Expired - Fee Related)
- 2013-09-06: US application US14/020,194 (US8908856B2, not_active Expired - Fee Related)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005072896A (ja) * | 2003-08-22 | 2005-03-17 | Fujitsu Ltd | 音声記録装置 |
JP2005293180A (ja) * | 2004-03-31 | 2005-10-20 | Fujitsu Ltd | オペレータ支援プログラム,オペレータ支援システム,及び、オペレータ支援方法 |
JP2006171579A (ja) * | 2004-12-17 | 2006-06-29 | Fujitsu Ltd | 音声再生プログラムおよびその記録媒体、音声再生装置ならびに音声再生方法 |
JP2007058767A (ja) * | 2005-08-26 | 2007-03-08 | Canon Inc | 発話記録作成システム |
JP2007184699A (ja) * | 2006-01-05 | 2007-07-19 | Fujitsu Ltd | 音声データの聞き出し部分特定処理プログラムおよび処理装置 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018045639A (ja) * | 2016-09-16 | 2018-03-22 | 株式会社東芝 | 対話ログ分析装置、対話ログ分析方法およびプログラム |
JP2019139276A (ja) * | 2018-02-06 | 2019-08-22 | ニューロネット株式会社 | Web共有システム、プログラム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2012124120A1 (ja) | 2014-07-17 |
US8908856B2 (en) | 2014-12-09 |
CN103443810A (zh) | 2013-12-11 |
CN103443810B (zh) | 2016-05-04 |
JP5621910B2 (ja) | 2014-11-12 |
US20140010362A1 (en) | 2014-01-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5585720B2 (ja) | オペレータ評価支援装置、オペレータ評価支援方法及びオペレータ評価支援プログラムを記録した記憶媒体 | |
JP5621910B2 (ja) | オペレータ評価支援装置、オペレータ評価支援方法及びオペレータ評価支援プログラムを記録した記憶媒体 | |
US9288314B2 (en) | Call evaluation device and call evaluation method | |
WO2007091453A1 (ja) | モニタリング装置、評価データ選別装置、応対者評価装置、応対者評価システムおよびプログラム | |
CN109844859A (zh) | 使用所分离的对象编辑音频信号的方法和相关联的装置 | |
JP5790757B2 (ja) | オペレータ評価支援装置、オペレータ評価支援方法及びオペレータ評価支援プログラム | |
US20020038214A1 (en) | Automatic distribution of voice files | |
JPWO2004023772A1 (ja) | オペレータ支援装置、オペレータ支援端末、オペレータ支援プログラムおよびその記録媒体、ならびにオペレータ支援方法 | |
KR101063261B1 (ko) | 핵심키워드를 이용하여 통화 내용을 녹취하는 인터넷 프로토콜 컨텍트 센터 녹취 시스템 및 그 방법 | |
Wincott et al. | Telling stories in soundspace: Placement, embodiment and authority in immersive audio journalism | |
KR101399581B1 (ko) | 텔러 참여에 의한 자동 응답 시스템, 방법 및 컴퓨터 판독 가능한 기록 매체 | |
US20030046181A1 (en) | Systems and methods for using a conversation control system in relation to a plurality of entities | |
US8468027B2 (en) | Systems and methods for deploying and utilizing a network of conversation control systems | |
JP4191221B2 (ja) | 記録再生装置、同時記録再生制御方法、および同時記録再生制御プログラム | |
KR20000024660A (ko) | 네트워크와 웹기술을 이용한 매장내의 양방향 전자 주문시스템 | |
WO2021084720A1 (ja) | 音声録音プログラム、音声録音方法および音声録音システム | |
JP5376232B2 (ja) | コミュニケーションプレイバックシステム、コミュニケーションプレイバック方法、プログラム | |
KR20000019259A (ko) | 외국어 말하기 평가 시스템 | |
KR102171479B1 (ko) | 디지털 오디오 공동 재생 서비스 방법 및 시스템 | |
WO2021084719A1 (ja) | 音声再生プログラム、音声再生方法および音声再生システム | |
WO2021084721A1 (ja) | 音声再生プログラム、音声再生方法および音声再生システム | |
JP5626426B2 (ja) | コミュニケーションプレイバックシステム用端末装置、サーバ、プログラム | |
KR20010090669A (ko) | 인터넷을 통한 다중모드음악 시스템 및 그 판매방법 | |
KR20020022145A (ko) | 광기록재생기에서의 디스크 재생방법 | |
JP6385049B2 (ja) | 学習支援装置、学習支援プログラム、学習支援方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | WWE | Wipo information: entry into national phase | Ref document number: 201180069294.8; Country of ref document: CN
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11860876; Country of ref document: EP; Kind code of ref document: A1
 | ENP | Entry into the national phase | Ref document number: 2013504494; Country of ref document: JP; Kind code of ref document: A
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 11860876; Country of ref document: EP; Kind code of ref document: A1