WO2002046872A2 - Automated call center monitoring system - Google Patents

Automated call center monitoring system

Info

Publication number
WO2002046872A2
Authority
WO
WIPO (PCT)
Prior art keywords
customer
customer service
conversation
stream
audio
Prior art date
Application number
PCT/US2001/046646
Other languages
French (fr)
Other versions
WO2002046872A3 (en)
Inventor
David Holzer
Shalom A. Holzer
Original Assignee
David Holzer
Priority date
Filing date
Publication date
Application filed by David Holzer filed Critical David Holzer
Priority to AU2002239523A priority Critical patent/AU2002239523A1/en
Publication of WO2002046872A2 publication Critical patent/WO2002046872A2/en
Publication of WO2002046872A3 publication Critical patent/WO2002046872A3/en

Classifications

All classifications fall under H04M, Telephonic communication (H: Electricity; H04: Electric communication technique):
    • H04M 3/51: Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
    • H04M 2201/40: Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H04M 3/002: Applications of echo suppressors or cancellers in telephonic connections
    • H04M 3/2218: Call detail recording
    • H04M 3/2281: Call monitoring, e.g. for law enforcement purposes; call tracing; detection or prevention of malicious calls
    • H04M 3/5175: Call or contact centers supervision arrangements


Abstract

A system and method for monitoring the communication between customers and a company's customer service representatives, whether those representatives are responding to customer requests or are telemarketing representatives who initiate contact with existing or potential customers, and for providing relevant information to the customer service representatives in the course of the conversation. The method comprises determining whether a data stream contains audio data (Step 4, Fig. 2), converting the audio data, if present, to a textual stream output (Step 5, Fig. 2), and adding a timestamp to the textual stream (Step 9, Fig. 2). The system and method monitors, via a data network, the conversations between customers and the company's customer service representatives, provides real-time analysis of the questions and responses, and, by interpreting that material, supplies the customer service representative with information specific to a particular customer and advises the representative of relevant information that needs to be provided to the customer.

Description

AUTOMATED CALL CENTER MONITORING SYSTEM
Field of the Invention
The present invention relates to a system and method for monitoring the communication between customers and a company's customer service representatives, whether in response to customer requests or in telemarketing. More particularly, the present invention relates to monitoring, via a data network, the conversations between customers and a company's customer service representatives, to provide a real-time analysis of the questions and responses.
BACKGROUND OF THE INVENTION
Traditionally, calls by customers to a company's customer service center are answered by customer service representatives who are trained to address customers' questions and problems. To improve customer satisfaction, some of the calls are monitored so that the performance of the customer service representatives may be evaluated. The monitoring is commonly performed by individuals specially trained to serve as mentors for the customer service representatives. However, management of a large number of representatives can prove to be an expensive and time-consuming task.
The problem is further exacerbated since an increasing number of companies have established a plurality of remote customer service centers respectively located in different physical locations. For example, a company may have customer service centers in each of several countries. Each of the customer service centers may be dedicated to a particular function, such as sales or technical support, or may be dedicated to a particular geographical area, such as North America. Each customer service center typically has a separate telephone number. A significant disadvantage of the standard approach to responding to customer requests for communication with customer service representatives is the difficulty of monitoring the customer service representatives located at remote customer service centers. The current approach of having one or more monitors at each customer service center is inefficient and expensive, and remote monitoring with a variety of systems still places a considerable burden on the system.
With customer service centers experiencing high turnover rates - approximately 25% annually - a method of eliminating the dependence on human monitoring is a valuable asset. Additionally, human monitoring is highly inefficient in that only a small sampling of calls can be reviewed due to time and manpower constraints. Much greater benefit would result from a system that would actually provide support and guidance to the customer service representatives instead of just occasional monitoring.
Another limitation of the current approach is that monitoring feedback is normally provided after the fact, since it is not practical to have a monitor sit through each call with the customer service representative for its entire duration. Additionally, when calls need to be transferred to another customer service representative, there is usually a delay in which the customer is put on hold while the situation is discussed with, and explained to, the second customer service representative. A method to reduce the hold time for customers would greatly improve customer service, especially since calls are often routed through several customer service representatives.
Further, even if voice recognition were applied to the customer/customer service representative conversation as it stands now, the analysis would be of little value, since the accuracy of voice recognition at the current state of the art is extremely sensitive to how well the software has been trained on the specific user. If the voice recognition software is applied to a standard telephonic conversation with a plurality of participants, there is no way for the software to isolate the participants and take advantage of that training.
SUMMARY OF THE INVENTION
The disadvantages and limitations discussed above are overcome by the present invention. The system of the present invention comprises a method for automated call monitoring, analysis and feedback, comprising the steps of:
1. automatically monitoring all conversations between customer service representatives and customers;
2. separating the communications and interpreting them, using standard voice recognition systems, and creating a real-time dialog of questions and responses;
3. using an established database of anticipated customer responses to specific customer service representative questions, whereby the database can then provide on-screen information specific to that customer; and
4. optionally, as the system monitors the customer service representative's questions, ensuring that all the required questions are being asked in sequence, and prompting, if desired or necessary, for the next question that should be asked.
From the perspective of the human monitor, an evaluation record of each customer service representative and each call may be kept for analyzing efficiency, using criteria such as whether the representative asked all of the questions required by a given protocol, whether the questions were asked in the correct order, and whether certain questionable words or phrases appeared in the text.
In accordance with the present invention, for inbound call centers a customer uses a customer communication system to access a company computer system via a data network and requests contact with a customer service representative. The customer may then select one or more customer preferences from a list of customer preferences provided by the company computer system. The company computer system then automatically selects a customer service center and a customer service representative in accordance with the customer preferences selected by the customer (if any) and a set of selection criteria stored in the one or more company computer system databases. For outbound call centers, a customer service representative initiates a call based on a computer-generated customer list. The computer software normally contains database categories for storing customer information.
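The four steps summarized above lend themselves to a simple software skeleton. The following is a minimal sketch, in Python, of how such an orchestration might look, assuming the two audio streams have already been transcribed into (speaker, utterance) pairs; all names (monitor_call, required_questions, anticipated_responses) are illustrative and not taken from the patent itself.

```python
def monitor_call(dialog, required_questions, anticipated_responses):
    """dialog: chronological list of ("CSR" or "customer", utterance) pairs.
    required_questions: questions the CSR must ask, in the required order.
    anticipated_responses: maps expected customer phrases to on-screen info."""
    evaluation = {"completed": [], "out_of_order": [], "prompts": []}
    position = 0  # index of the next required question

    for speaker, utterance in dialog:
        text = utterance.lower()
        if speaker == "CSR":
            # Step 4: track whether the required questions appear in sequence.
            for i, question in enumerate(required_questions):
                if question.lower() in text:
                    if i == position:
                        evaluation["completed"].append(question)
                        position += 1
                    else:
                        evaluation["out_of_order"].append(question)
        else:
            # Step 3: anticipated responses trigger on-screen customer info.
            for phrase, info in anticipated_responses.items():
                if phrase.lower() in text:
                    evaluation["prompts"].append(info)

    # Step 4 (continued): prompt for the next question that should be asked.
    if position < len(required_questions):
        evaluation["next_question"] = required_questions[position]
    return evaluation
```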
An automated monitoring system (AMS) is connected to the company computer system via the data network. The AMS is notified by the company computer system when a customer service representative initiates a conversation with a potential customer (outbound call centers), or when a customer seeking customer service calls a customer service center and a customer service representative is selected by the company computer system (inbound call centers). The AMS is also notified of the selection criteria (if any) used by the company computer system in making the selection. The AMS will then choose whether or not to monitor the communication. The communication of the customer service representative and the customer is separated into streams of customer service representative data and customer data, both data streams are then converted to text via an attached voice recognition device, and the conversation is analyzed and transcribed as exchanges, differentiating the customer service representative's communication from the customer's communication. The separation of streams and the analysis are performed by alternative methods, as outlined in the detailed description below. Once analyzed, the communications are interpreted by the AMS using pre-established criteria to perform any or all of the following diagnostic functions.
1. Interpretation of specific customer service representative questions as prompt requests (found in IVR systems), thereby using customer responses as the sources for searching databases to provide the customer service representative with added information.
2. Tracking the questions asked by the customer service representatives to see whether they match and follow the pattern required in many customer service and telemarketing applications. In turn, the system will advise the customer service representative, by cues, as to what inclusions or adjustments (if any) are necessary.
3. Tracking the performance of the customer service representatives and letting them know where they are up to on their scripts, i.e. which tasks have been completed.
4. Searching the customer service representative's logs and/or customer logs for key words or phrases of interest or concern to the customer service center.
5. Creating a customer service representative/customer exchange text log which can then be evaluated by a database and algorithm to determine efficiency.
6. Creating a customer service representative/customer exchange text log which can be transferred in real time to another customer service representative. The second customer service representative can then at any point make suggestions to the first customer service representative as to how to better handle the call, or alternatively can recommend direct transfer of the call without requiring the customer to be put on hold.
Other objects, features and advantages of the present invention will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for purposes of illustration and not as a definition of the limits of the invention, for which reference should be made to the appended claims.
SHORT DESCRIPTION OF THE DRAWINGS
Figure 1 is a flow chart showing a first step of the present method, with separation of a single telephone call into dual-source audio streams;
Figure 2 is a subsequent flow chart showing the operative analysis of the audio streams shown in Figure 1;
Figures 3 and 4 are alternative flow charts showing the analysis of the conversation of the participants with regard to the initiating and keyword-using parties.
DETAILED DESCRIPTION OF THE DRAWINGS AND THE PREFERRED EMBODIMENTS
In describing the present invention, the various entities that interact with the system are defined as follows:
A "company" is any organization which sells, markets, or distributes products or services to customers. For example, the company may be a wholesale or retail business, an educational institution, a government office or a multi- national corporation.
A "customer service center" (CSC) is a subset of the company which is dedicated to providing information to customers about the company's products or services, and to providing other forms of assistance to customers. Most companies have one or more CSC s organized, for example, by geographical area and/or by an area of expertise. For example, one CSC may be a technical support center for the company's products, while another CSC may provide all customer services in a particular country. (CSC's could, in theory be independent companies, not merely subsets and divisions of other companies, which provide outsourced services to other companies - should we mention this? Or is it irrelevant, really?) The CSC's are staffed by one or more "customer service representatives" (CSR's) who are trained to: answer questions about the company's products or services; provide other forms of assistance to customers (inbound call centers) ; or who are trained to market products to customers (outbound call centers) , usually following a predetermined script. A "monitor" is an individual who serves as a mentor or administrator to the CSR's, and who evaluates the performance of the company's CSR's. In accordance with the present invention, both the CSR' s and the monitor can be assisted by an "Automated Monitoring System," or AMS. The AMS interprets and transcribes one or more textual versions of the conversation between the CSR and the customer, allowing it to provide such functions as: suggesting conversation topics; providing context-sensitive state-based information; providing formatted results from database queries based on data extracted from the conversation; providing predefined scripted text; providing an independent instant analysis of the efficiency, efficacy, success, and comprehensiveness of the conversation; and keyword, conversation length, conversation style, or conversation format based cues with which to flag or identify a given conversation. All of these functions can be autonomously triggered by the AMS without requiring direct user interaction.
A voice network is preferably a switched telephone network, but may be any switched telecommunications medium suitable for voice communication, such as a cellular or a satellite-based network. A customer voice communication device is preferably a telephone. The customer voice communication device is connected to the voice network via a telecommunication medium, such as a standard phone line or a wireless (e.g., cellular) link for placing and receiving telephone calls.
An AMS comprises both an AMS voice-monitoring device and an AMS voice-analysis device, which are connected by an electronic communication device that allows data to be passed from the voice-monitoring device to the voice-analysis device, and control messages to be passed from the voice-analysis device to the voice-monitoring device.
With specific reference to the drawings, in Figure 1, an AMS voice-monitoring device is shown as a telephone-enabled device such as a hybrid coupler (2) which is capable of using techniques, such as echo cancellation or others available to those knowledgeable in the art, to extract the incoming and outgoing segments of a single telephonic conversation into two distinct audio streams. The voice-monitoring device is also connected to the voice network via a telecommunication medium such as a standard phone line or a wireless link for intercepting or monitoring telephone calls (1). The two streams can then be converted by analog/digital conversion hardware or software (4), written to two distinct individual text streams (5), and individually time-stamped (6). In the next stage these textual streams can be passed to the AMS voice-analysis device independently of each other.
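As a rough illustration of the Figure 1 flow, the sketch below turns one already-separated audio stream into a time-stamped textual stream. The speech recognizer is passed in as a callable because the patent does not tie the method to any particular recognition engine, and the segment format shown here is an assumption made for illustration.

```python
def audio_to_text_stream(audio_segments, recognize):
    """audio_segments: iterable of (start_time, raw_audio) for one party.
    recognize: callable mapping raw audio to text (the recognition step).
    Returns a list of (timestamp, text) entries forming one textual stream."""
    textual_stream = []
    for start_time, raw_audio in audio_segments:
        text = recognize(raw_audio)                  # steps (4)/(5): convert to text
        if text:
            textual_stream.append((start_time, text))   # step (6): time-stamp
    return textual_stream

# The two separated streams are processed independently of each other:
#   csr_stream      = audio_to_text_stream(csr_segments, recognize)
#   customer_stream = audio_to_text_stream(customer_segments, recognize)
```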
Alternatively, referring now to Figure 2, the AMS voice-monitoring device can support a dual interface, employing two or more telephonic and analog audio sources (1), wherein at least one source is comprised solely of either a CSR or customer audio stream (2), and at least one other source is comprised of both CSR and customer audio multiplexed into a single audio stream (3). In this embodiment, the single audio source is classified as the master or override audio stream, while the multiplexed, or combined, audio stream, comprising two or more audio sources, is classified as the slave, or deprecated, audio stream.
In this latter embodiment (Figure 2), after the analog/digital conversion has occurred in either hardware or software (10), additional hardware- or software-implemented algorithms can be applied to the audio streams in alternating succession, or simultaneously, in order to create two time-stamped textual streams, with each textual stream representing a distinct audio stream. If the application of this algorithm has been determined to operate in alternating succession, for instance to reduce resource loads, and the AMS device employs a dual interface, the application may then be selective in such a way that one stream, specified as the override stream, will be the default analyzed audio stream in the event of a contention condition arising from the presence of voice data on both the master and deprecated audio streams simultaneously.
For example, given two audio streams "A" and "A/B", with "A" being the override stream and "A/B" the deprecated stream, "A" will be constantly monitored for voice data (4), which will be transcribed and placed into the textual stream representing A's voice data (5) by resources allocated for the interpretation of voice data. In the event that no voice data is available on A (6), then A/B will be monitored for voice data also, and, should voice data become available on A/B, resources will be reallocated from interpreting A's voice data to interpreting A/B (7), and A/B's voice data will begin to be transcribed and placed into the textual stream representing B's voice data (8). However, as soon as voice data once again becomes available on A, even if A/B still contains additional voice data, the transcription of A/B's voice data will cease, and those resources will be reallocated to begin interpreting A's voice data again, to be transcribed and placed into the textual stream representing A's voice data.
A particularly beneficial effect of this system when applied to a dual interface is that if A contains audio data from only one source, and A/B contains audio data from two sources, one of which is identical to the audio data of A, the non-A audio data in A/B (i.e. audio data B) can be isolated by time-dependent elimination techniques. For instance, if A contains CSR data only, and A/B contains both CSR and customer data, such as in an embodiment wherein A is extracted from microphone data and A/B is extracted from a direct phone line connection, and if A and A/B are reasonably in synchronous coordination, an application monitoring both A and A/B would be able to determine at which times audio data from A/B should be processed, such as by only applying processing to A/B's data when there is no available data to be processed from A's audio stream. In this manner the actual audio of source B can be separated from the audio of A in the multiplexed source A/B. As in the previous embodiment, all textual streams will also be appended with a corresponding timestamp.
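The override/deprecated arbitration described above can be sketched as a simple loop. In the sketch below, has_voice and recognize stand in for a voice-activity detector and a speech recognizer, and the frame format is an assumption made for illustration, not the patent's actual interface.

```python
def arbitrate(frames, has_voice, recognize):
    """frames: chronological list of (timestamp, a_frame, ab_frame) tuples,
    where a_frame comes from the CSR-only master stream A and ab_frame from
    the multiplexed slave stream A/B.
    Returns two time-stamped textual streams: one for A and one for B."""
    a_text, b_text = [], []
    for timestamp, a_frame, ab_frame in frames:
        if has_voice(a_frame):
            # A is the override stream: whenever it carries voice, it wins
            # the transcription resources, even if A/B also carries voice.
            a_text.append((timestamp, recognize(a_frame)))
        elif has_voice(ab_frame):
            # A is silent, so whatever is audible on A/B now must be party B
            # (time-dependent elimination of the shared A component).
            b_text.append((timestamp, recognize(ab_frame)))
    return a_text, b_text
```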
In addition to dual-interface audio sources derived from dedicated microphone and telephonic hardware, a specialized telephone headset may be employed by CSR's at the CSC, which splits the audio output into two identical signals, one of which can be provided to the voice-monitoring device and the other of which passes through to a standard telephone device for communication with the customer.
With respect to Figures 3 and 4, once one or more versions of a textual stream or streams have been captured, representing as a whole the combined audio streams of the conversation between A and B, these textual streams can be analyzed for keywords or patterns using either predetermined or heuristic functions, and functions can be performed based on the results. This analysis may be based not on the keyword alone, but also on which conversation participant is using the keywords: the customer or the CSR.
For example, in a customer-service scenario, the keywords "plan", "my bill" or "billing" in the customer's textual stream could prompt the system to instantly call up relevant information about the company's new billing plan, and display that to the CSR, while different keywords would cause it to suggest and retrieve other information.
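A minimal sketch of such participant-aware keyword handling follows; the keyword tables and action names are invented examples for illustration, not values taken from the patent.

```python
# Each participant has its own keyword table, so the same utterance can
# trigger different behaviour depending on who said it.
CUSTOMER_KEYWORDS = {
    "plan": "billing_plan_overview",
    "my bill": "billing_plan_overview",
    "billing": "billing_plan_overview",
}
CSR_KEYWORDS = {
    "account": "start_numeric_prompt",
    "personal code": "start_numeric_prompt",
}

def keyword_actions(speaker, utterance):
    """Return the actions triggered by one utterance from one participant."""
    table = CUSTOMER_KEYWORDS if speaker == "customer" else CSR_KEYWORDS
    text = utterance.lower()
    return [action for keyword, action in table.items() if keyword in text]

# keyword_actions("customer", "I have a question about my bill")
#   -> ["billing_plan_overview"], which the AMS would display to the CSR.
```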
Alternatively, terms used by the CSR and appearing in the CSR textual stream, such as "account", "number", or "personal code", could act in a manner similar to an IVR prompt, causing the system to go into a special numeric-aware state, wherein it explicitly waits and listens for textual representations of digits, or variations thereof, to appear in the textual stream or streams, from where they are captured into a separate numeric buffer. Once a certain number of individual digits has been captured, the CSR has repeated them, a set amount of time has passed, or a set amount of non-numeric text has appeared in the stream, the analysis system can initiate a procedure wherein the buffer's data is supplied to a database lookup or query component, which can then search a database, linked list, or other such data structure so as to retrieve relevant customer information based on the account number, in this scenario, and provide it to the CSR. Other fields, such as the customer's phone number, name, zip code, or the like, can also be retrieved from the textual stream or other sources and used in the query function by the database component.
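The numeric-aware prompt state described above might be sketched as follows; the eight-digit account length, the word-to-digit table, and the accounts dictionary are assumptions made purely for illustration.

```python
ACCOUNT_LENGTH = 8  # assumed account-number length for this sketch
WORD_DIGITS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
               "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}

def capture_account_number(customer_utterances, accounts):
    """Collect digits from transcribed customer speech into a buffer and use
    the buffer as a key into the accounts mapping (a stand-in database)."""
    buffer = ""
    for utterance in customer_utterances:
        for token in utterance.lower().split():
            if token.isdigit():
                buffer += token
            elif token in WORD_DIGITS:
                buffer += WORD_DIGITS[token]
            elif buffer:
                # Non-numeric text after digits ends the capture early.
                return accounts.get(buffer)
            if len(buffer) >= ACCOUNT_LENGTH:
                return accounts.get(buffer[:ACCOUNT_LENGTH])
    return accounts.get(buffer) if buffer else None

# capture_account_number(["it is four two", "7 7", "1 2 3 4 thanks"], accounts)
# would look up "42771234" and hand the matching record to the CSR's display.
```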
Alternatively, the conversation as a whole can be monitored for key words or phrases, for instance to ensure that the CSR has not begun discussing irrelevant or unauthorized topics, or is not using unacceptable language. If the AMS determines that this is the case, it will raise a flag at, or send an alert to, the monitor's computer terminal, station, or communication device, indicating that special attention should be paid to the marked conversation. The monitor could then have the option of reading through the textual stream or log, or even listening in on the call in progress, or on an archived version if the call has been terminated.
In a scripted telemarketing scenario, the CSR's statements can be compared against a predefined or dynamically generated script in order to determine whether or not the CSR has followed the conversation guidelines. Additionally, analysis of the customer's responses can provide a clue as to the efficacy of the CSR's sales pitch and delivery. The analysis component can be linked to a terminal display component which prompts the CSR with the next statement in the script, either based on a predefined sequence or on one dynamically generated in response to the customer's responses in the textual stream, and can visually indicate the completion of various stages or steps in the script, based on information provided by the analysis component of the textual stream.
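A minimal sketch of this kind of conversation policing and script tracking is shown below; the alert callable stands in for whatever transport delivers notices to the monitor's terminal, which the patent leaves open, and the function name is illustrative.

```python
def police_conversation(csr_utterances, banned_words, script_steps, alert):
    """Scan the CSR textual stream for unacceptable language and report which
    script steps have been covered and which should come next."""
    flagged = False
    completed_steps = set()
    for utterance in csr_utterances:
        text = utterance.lower()
        for word in banned_words:
            if word.lower() in text:
                alert(f"unauthorized language detected: {word!r}")
                flagged = True
        for i, step in enumerate(script_steps):
            if step.lower() in text:
                completed_steps.add(i)
    # The display component can highlight the next uncompleted script step.
    next_step = next((step for i, step in enumerate(script_steps)
                      if i not in completed_steps), None)
    return flagged, completed_steps, next_step
```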
Many other additional functions can be indicated by the analyzed data and suggested to the CSR, such as playing back pre-recorded messages to the customer, at the CSR's discretion and command.
In addition to this analysis, raw storage of both the textual stream and the audio stream could occur, either locally or remotely on a network, for later recall. If this storage takes place on an electronic network, it can very easily be made available in real time to the monitor, who can use selection criteria defined to the AMS in order to help filter, categorize, or identify components or events of interest within the textual stream. The time-stamping of the stream is valuable here in that it allows the monitor to synchronize the conversation with an audio log of the same stream based on chronological cues, for instance to confirm the validity and accuracy of the algorithm's automatic transcription and analysis.
The separation of the audio streams also provides a key advantage in enabling the software to perform far more effective voice recognition and analysis on the CSR stream (the A stream). The A stream will contain only the voice of the customer service representative, so that the software can easily and accurately analyze the CSR's audio stream; voice recognition of an untrained, multiple-voice audio stream would prove extremely ineffective. Chronological cues derived from the analysis of the A stream can then be used to traverse the full conversation contained in the A/B stream, which is far more valuable for playback.
A common function of CSR-assisting systems is to allow the CSR to initiate the playback of selected pre-recorded audio over the communication medium directly to the customer, freeing the CSR from a repetitive task while at the same time providing an opportunity to enter notes, comments, or other customer information into a database or other program uninterrupted. In such a scenario, the AMS could be instructed to temporarily disregard incoming audio data, which would be arriving solely from the pre-recorded audio, in order both to conserve resources and to allow for a more accurate overall transcription, by instantly substituting the known textual values of the pre-recorded audio for any interpreted audio converted during its playback, which will almost always exhibit some degree of variance, distortion, or interference that limits the accuracy of the transcription.
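The pre-recorded-playback substitution described in the last paragraph could be sketched as follows, assuming each audio segment is tagged with the identifier of any recording being played back at that moment (an assumption made for illustration; the patent does not specify how playback periods are signalled).

```python
def transcribe_with_playback(segments, recognize, playback_transcripts):
    """segments: list of (timestamp, audio, playback_id_or_None).
    playback_transcripts: maps a recording id to its known transcript text."""
    textual_stream = []
    for timestamp, audio, playback_id in segments:
        if playback_id is not None:
            # Known recording: substitute its exact text and skip recognition,
            # conserving resources and avoiding re-transcription errors.
            textual_stream.append((timestamp, playback_transcripts[playback_id]))
        else:
            textual_stream.append((timestamp, recognize(audio)))
    return textual_stream
```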
Additionally, if the AMS captures and logs raw audio data or samplings in addition to timestamp or start-time/offset data on each audio segment, and if the textual data is also logged and also encompasses or is conceptually linked or referenced with a timestamp or start-time/offset system similar to that which is employed by the audio logging, and if these two logging sources are synchronized and have synchronized timestamp or start-time/offset mechanisms, then a user viewing the textual data would be able to easily call up the precise audio segment represented by the corresponding textual data by using the common timestamp as an automatically available reference point. This method could be useful for confirming customer information, checking the pronunciation of a difficult word, or otherwise accurately identifying any particular segment of the audio without being forced to manually examine lengthy audio files and segments thereof.
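A minimal sketch of such a synchronized text/audio log follows; the in-memory lists and the bisect-based lookup are illustrative choices, not the patent's storage design.

```python
import bisect

class SynchronizedLog:
    """Audio and text logs that share the same start-time reference, so the
    audio behind any text entry can be found directly from its timestamp."""

    def __init__(self):
        self.audio = []   # (start_time, audio_segment), appended in time order
        self.text = []    # (start_time, text), appended in time order

    def log(self, start_time, audio_segment, text):
        self.audio.append((start_time, audio_segment))
        self.text.append((start_time, text))

    def audio_for(self, text_index):
        """Return the audio segment whose start time matches a text entry."""
        start_time, _ = self.text[text_index]
        starts = [t for t, _ in self.audio]
        i = bisect.bisect_right(starts, start_time) - 1
        return self.audio[i][1] if i >= 0 else None
```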
Additionally, the system can provide a mechanism wherein a copy of the textual and/or audio data is forwarded or made available to a multiplicity of third-party systems over an electronic network, using distribution technologies such as peering or uploaded client-server streaming. In this fashion, additional CSR's, monitors, technical advisors, or supervisors can listen in on and track a call's progress, either when alerted by keywords or conversation patterns, or at the request of the initial CSR. By reading through an archived transcript of the ongoing conversation, the third party can be brought "up to date" with the current conversation in real time without requiring a break or hold period in the initial conversation while the third party confers and discusses the call with the initial CSR, thus facilitating a faster call turnover, without a long hold, should the CSR decide to pass the call on to the third party for any reason. All of this ultimately translates into a briefer, more efficient, and more enjoyable customer experience. Alternatively, the CSR could be advised by the third party in real time, without ever requiring direct call interference on the part of the third party in the initial conversation, for instance by means of sending messages or carrying on other communication through an electronic network.
Additionally, by analyzing the position of the CSR in a predefined, set-length process or item list, an approximation can be made as to when the individual CSR conversation and call will be completed, which can be forwarded over an electronic network and can prove valuable for monitors, administrators, customers, or other systems.
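Such an estimate can be as simple as multiplying the remaining list items by an average per-item duration taken from historical call data (the average figure is an assumption here, not something the patent specifies).

```python
def estimated_seconds_remaining(current_item, total_items, avg_seconds_per_item):
    """Approximate the time left in a call from the CSR's list position."""
    remaining_items = max(total_items - current_item, 0)
    return remaining_items * avg_seconds_per_item

# e.g. estimated_seconds_remaining(current_item=7, total_items=10,
#                                  avg_seconds_per_item=45) -> 135 seconds
```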
In a preferred embodiment of the outbound, or telemarketing, CSR scenario, as referred to in Figure 3, after a call has been initiated and voice data has been processed by a voice-monitoring device as described above, the textual stream data, currently consisting of two distinct textual streams, can be passed to a voice-analysis device which will interpret them as a series of events (1) including customer events (2) and CSR events (3).
The evaluation of the customer events would depend on whether the prompt flag has been set, indicating that the customer is currently at an "IVR-style", CSR-initiated prompt section in the conversation (5). If the customer is at such a section, any data received will be placed in a temporary buffer, and eventually dispatched to a database or data structure lookup component, a database or data structure field setting or defining component, or other such component, depending on the type of data expected and recorded (6). In the event that the prompt flag is set but the received data type no longer matches the expected data type, or a maximum length, time, or other such criterion has been met, the prompt flag can be unset and the expected data type reset to null (7). Any and all data received from the customer, whether the prompt flag is set or not, will also always be written to the output customer textual stream, which can be made available to the CSR, administrators or other third parties over the network as such (8).
The evaluation of the CSR events can take three forms: generic text (9), list item completion keyword or phrase (10), and "IVR-style" prompt initiating keyword (11), as described as follows:
a. If a prompt keyword or keywords is received, a keyword event is triggered which causes the prompt flag to be set (see above) and the expected data type variable to be set to the appropriate type or selection, depending on the precise predefined prompt type or parameters (12).
b. If an item keyword event is triggered, the next step of the evaluation is to determine whether the list item which the keyword has triggered is the correct anticipated list item (13), as determined by examining the predefined item list and the current list placeholder. If the current list placeholder is termed n, and the triggered list item is in list position n+1, the list item is marked as complete and the list placeholder is incremented by one (14). If the list position is unequal to n+1, however, the list item is still marked as complete, but an alert may be sent to the CSR or to a third party or administrator informing them of the list-order discrepancy (15). Manual corrections of the list placeholder or of the completion marking are still allowed, but are reported to the administrator (15).
c. If a generic text event is triggered, the prompt flag is automatically unset and the prompt type is reset to null (16).
In all three of these CSR event scenarios, any and all data received from the CSR is also always written to the output CSR textual stream, which can be made available to the CSR, administrators, or other third parties over the network (17).
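The three-way CSR event handling just described, including the n+1 placeholder check, might be sketched as follows; the keyword tables, state fields, and alert text are assumptions introduced for illustration only:

```python
from dataclasses import dataclass, field
from typing import List, Optional

PROMPT_KEYWORDS = {"account number": "digits"}        # assumed keyword -> expected data type
ITEM_KEYWORDS = {"confirm your address": 1,           # assumed keyword -> list position
                 "terms and conditions": 2}

@dataclass
class CallState:
    prompt_flag: bool = False
    expected_type: Optional[str] = None
    placeholder: int = 0                               # "n": last completed list position
    completed: List[int] = field(default_factory=list)
    alerts: List[str] = field(default_factory=list)

def handle_csr_text(state: CallState, text: str, csr_stream: List[str]) -> None:
    csr_stream.append(text)                            # (17) always log to the CSR stream
    lowered = text.lower()

    for keyword, expected in PROMPT_KEYWORDS.items():  # (a) prompt keyword
        if keyword in lowered:
            state.prompt_flag = True
            state.expected_type = expected
            return

    for keyword, position in ITEM_KEYWORDS.items():    # (b) list item keyword
        if keyword in lowered:
            state.completed.append(position)
            if position == state.placeholder + 1:      # the anticipated n+1 item
                state.placeholder += 1
            else:                                      # out-of-order item
                state.alerts.append(f"list-order discrepancy at item {position}")
            return

    state.prompt_flag = False                          # (c) generic text
    state.expected_type = None

# Usage sketch:
state, stream = CallState(), []
handle_csr_text(state, "Please confirm your address for me.", stream)  # placeholder becomes 1
```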
After these events have been processed, the textual stream can be scanned for a call termination signal, such as extended silence, standard call-termination tones, or a specialized call-termination code received from the voice-monitoring device or other such device. This termination event causes the AMS system to automatically tally and compile all alerts, completed list items, out-of-order list items, the current list placeholder, and any other relevant data, and to perform a statistical analysis and summary which can be forwarded to an administrator or third party over an electronic network (18).

In a preferred embodiment of the inbound, or customer service, CSR scenario, as referred to in Figure 4, after a call has been initiated and voice data has been processed by a voice-monitoring device as described above, the textual stream data, currently consisting of two distinct textual streams, can be passed to a voice-analysis device which will interpret them as a series of events (1), including customer events (2) and CSR events (3).
The evaluation of the customer events depends on whether the prompt flag has been set, indicating that the customer is currently at an "IVR-style", CSR-initiated prompt section in the conversation (5). If the customer is at such a section, any data received is placed in a temporary buffer and eventually dispatched to a database or data structure lookup component, a database or data structure field setting or defining component, or another such component, depending on the type of data expected and recorded (6). In the event that the prompt flag is set but the received data type no longer matches the expected data type, or a maximum length, time, or other such criterion has been met, the prompt flag can be unset and the expected data type reset to null (7). Any and all data received from the customer, whether the prompt flag is set or not, is also always written to the output customer textual stream, which can be made available to the CSR, administrators, or other third parties over the network (8).

The evaluation of the CSR events can take four forms:
a. Extended silence - send a visual, textual, audio, or other alert to the CSR and/or an administrator or third party over an electronic network (9).
b. Prompt keyword - set the prompt flag, and set the prompt type accordingly, as determined by the keyword (based on a lookup from a database or other data structure) (10).
c. List item keyword - mark the list item as completed, or send an alert (see above) if the list item is not the one anticipated (11).
d. Generic text - unset the prompt flag and reset the prompt type to null (12).
In all four of these CSR event scenarios, any and all data received from the CSR is also always written to the output CSR textual stream, which can be made available to the CSR, administrators, or other third parties over the network (13).
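The extended-silence alert in item (a) above could be sketched as a simple elapsed-time check against the most recent utterance from either stream; the threshold value and alert callback are assumptions:

```python
import time
from typing import Callable

class SilenceWatchdog:
    """Raises an alert when neither party has produced text for too long."""

    def __init__(self, alert: Callable[[str], None], threshold_seconds: float = 20.0):
        self._alert = alert
        self._threshold = threshold_seconds
        self._last_activity = time.monotonic()

    def note_activity(self) -> None:
        # Call whenever a new transcript line arrives from either stream.
        self._last_activity = time.monotonic()

    def check(self) -> None:
        # Call periodically (e.g. once per second from the analysis loop).
        if time.monotonic() - self._last_activity > self._threshold:
            self._alert("extended silence detected on this call")
            self._last_activity = time.monotonic()   # avoid repeating the same alert

# Usage sketch:
watchdog = SilenceWatchdog(alert=print, threshold_seconds=20.0)
watchdog.note_activity()
watchdog.check()
```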
After these events have been processed, the textual stream can be scanned for a call termination signal, such as extended silence, standard call-termination tones, or a specialized call-termination code received from the voice-monitoring device or other such device. This termination event causes the AMS system to automatically tally and compile all alerts, completed list items, out-of-order list items, the current list placeholder, and any other relevant data, and to perform a statistical analysis and summary which can be forwarded to an administrator or third party over an electronic network (14).

While fundamental novel features of the invention as applied to preferred embodiments thereof have been shown, it will be understood that various omissions, substitutions, and changes in the form and details of the devices illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements, methods, and/or steps which perform substantially the same functions in substantially the same way so as to achieve the same results are within the scope of this invention. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.
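As a final illustrative sketch covering the termination-and-summary step described in both the outbound and inbound flows above, the tally might be compiled as follows; the sentinel markers and summary fields are assumptions, chosen only to mirror the items named in the text:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CallRecord:
    alerts: List[str] = field(default_factory=list)
    completed_items: List[int] = field(default_factory=list)
    out_of_order_items: List[int] = field(default_factory=list)
    placeholder: int = 0

# Assumed sentinel strings inserted into the stream by the voice-monitoring device.
TERMINATION_MARKERS = ("<CALL_END>", "<EXTENDED_SILENCE>", "<TERMINATION_TONE>")

def is_termination(event_text: str) -> bool:
    return event_text in TERMINATION_MARKERS

def summarize(record: CallRecord) -> Dict[str, object]:
    """Tally and compile the call data for forwarding to an administrator."""
    return {
        "alerts": len(record.alerts),
        "items_completed": len(record.completed_items),
        "items_out_of_order": len(record.out_of_order_items),
        "final_placeholder": record.placeholder,
    }

# Usage: when a termination event arrives, compile and forward the summary.
record = CallRecord(alerts=["list-order discrepancy at item 3"],
                    completed_items=[1, 2, 3], out_of_order_items=[3], placeholder=2)
if is_termination("<CALL_END>"):
    summary = summarize(record)    # would then be sent over the network
```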

Claims

What is claimed is:
1. A method for automated monitoring, interpretation, analysis, and feedback of calls between customer service representatives and customers, comprising the steps of:
a. automatically monitoring all conversations between customer service representatives and customers;
b. utilizing a plurality of distinct, synchronized audio streams to facilitate the separation and attribution of the conversation into a plurality of distinct speakers, which can be interpreted using standard voice recognition systems, creating a real-time dialog transcription of the conversation;
c. analyzing the dialog transcription in order to provide a plurality of feedback, informational, or notification services to any of a monitor or, over an electronic network, a third party.
2. The method of claim 1, wherein the plurality of distinct audio streams are arrived at through the use of specialized hardware employing echo cancellation, adapted to create a set of audio streams wherein each stream is comprised of not more than one audio source component.
3. The method of claim 1, wherein the plurality of distinct audio streams are arrived at through the use of specialized hardware comprising a "dual interface"; the dual interface employing two or more synchronized telephonic and analog audio sources in alternate succession, wherein individual source components in the multiplexed stream, one of which is identical to the non-multiplexed stream source, can be isolated by time-dependent elimination techniques.
4. The method of claim 3, wherein a time-stamped textual stream derived from the analysis of the non-multiplexed, single-source audio stream is used to traverse the multiplexed audio stream, representing the combined conversation, with the aid of chronological cues.
5. The method of claim 1, wherein the analysis comprises using an established database of anticipated customer responses to specific customer service representative questions, whereby the database can then provide on-screen information specific to that customer, or suggest further topics for the conversation.
6. The method of claim 1, wherein the analysis comprises determining that all the required questions and statements are being employed in sequence.
7. The method of claim 1, wherein the analysis comprises determining the accuracy, efficiency, efficacy, success, and comprehensiveness of the conversation, and of the customer service representative in particular, relative to a pre-determined scripted text.
8. The method of claim 1, wherein the analysis comprises employing cues based on keywords, conversation length, conversation style, or conversation format.
9. The method of claim 1, wherein the analysis differs depending on which participant triggered the analysis.
10. The method of claim 1, wherein the analysis comprises determining if the customer service representative has begun discussing irrelevant or unauthorized topics, or is using unacceptable language.
11. The method of claim 1, wherein the analysis comprises using key words or phrases to place the system in and out of a state similar to an IVR prompt, in which it records specific customer information for future reference and use.
PCT/US2001/046646 2000-12-05 2001-12-05 Automated call center monitoring system WO2002046872A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002239523A AU2002239523A1 (en) 2000-12-05 2001-12-05 Automated call center monitoring system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US25136000P 2000-12-05 2000-12-05
US60/251,360 2000-12-05

Publications (2)

Publication Number Publication Date
WO2002046872A2 true WO2002046872A2 (en) 2002-06-13
WO2002046872A3 WO2002046872A3 (en) 2002-10-17

Family

ID=22951613

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/046646 WO2002046872A2 (en) 2000-12-05 2001-12-05 Automated call center monitoring system

Country Status (2)

Country Link
AU (1) AU2002239523A1 (en)
WO (1) WO2002046872A2 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5535256A (en) * 1993-09-22 1996-07-09 Teknekron Infoswitch Corporation Method and system for automatically monitoring the performance quality of call center service representatives
US6058163A (en) * 1993-09-22 2000-05-02 Teknekron Infoswitch Corporation Method and system for monitoring call center service representatives
US5854832A (en) * 1995-06-26 1998-12-29 Rockwell International Corp. Monitoring system and method used in automatic call distributor for timing incoming telephone calls
US6363145B1 (en) * 1998-08-17 2002-03-26 Siemens Information And Communication Networks, Inc. Apparatus and method for automated voice analysis in ACD silent call monitoring

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008053488A1 (en) * 2006-11-03 2008-05-08 E-Glue Software Technologies Ltd. Proactive system and method for monitoring and guidance of call center agent during the course of a call
US8150021B2 (en) 2006-11-03 2012-04-03 Nice-Systems Ltd. Proactive system and method for monitoring and guidance of call center agent
US20100161604A1 (en) * 2008-12-23 2010-06-24 Nice Systems Ltd Apparatus and method for multimedia content based manipulation
US8370155B2 (en) 2009-04-23 2013-02-05 International Business Machines Corporation System and method for real time support for agents in contact center environments
WO2014152297A1 (en) * 2013-03-15 2014-09-25 Ad Giants, Llc Automated consultative method and system
US9870353B2 (en) 2013-10-31 2018-01-16 Entit Software Llc Pre-populating a form
US11341505B1 (en) * 2014-05-05 2022-05-24 United Services Automobile Association Automating content and information delivery
US11798006B1 (en) 2014-05-05 2023-10-24 United Services Automobile Association Automating content and information delivery

Also Published As

Publication number Publication date
WO2002046872A3 (en) 2002-10-17
AU2002239523A1 (en) 2002-06-18

Similar Documents

Publication Publication Date Title
US10129402B1 (en) Customer satisfaction analysis of caller interaction event data system and methods
US9692894B2 (en) Customer satisfaction system and method based on behavioral assessment data
US6603854B1 (en) System and method for evaluating agents in call center
US8379819B2 (en) Indexing recordings of telephony sessions
US8484042B2 (en) Apparatus and method for processing service interactions
US8094803B2 (en) Method and system for analyzing separated voice data of a telephonic communication between a customer and a contact center by applying a psychological behavioral model thereto
US7076427B2 (en) Methods and apparatus for audio data monitoring and evaluation using speech recognition
US6587556B1 (en) Skills based routing method and system for call center
US7469047B2 (en) Integrated ACD and IVR scripting for call center tracking of calls
JP6470964B2 (en) Call center system and call monitoring method
US8150020B1 (en) System and method for prompt modification based on caller hang ups in IVRs
US20010043697A1 (en) Monitoring of and remote access to call center activity
CN101460995A (en) Monitoring device, evaluation data selection device, reception person evaluation device, and reception person evaluation system and program
US6577713B1 (en) Method of creating a telephone data capturing system
CN110392168A (en) Call processing method, device, server, storage medium and system
WO2002046872A2 (en) Automated call center monitoring system
US20060093103A1 (en) Technique for generating and accessing organized information through an information assistance service
AU2003282940B2 (en) Methods and apparatus for audio data monitoring and evaluation using speech recognition
US20230132143A1 (en) System, method, or apparatus for efficient operations of conversational interactions

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
NENP Non-entry into the national phase in:

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP