WO1999016229A1 - Dynamic distributions of applications and associated resource utilization - Google Patents


Info

Publication number
WO1999016229A1
Authority
WO
WIPO (PCT)
Prior art keywords
application
applications
remote
nodes
node
Prior art date
Application number
PCT/US1998/019835
Other languages
French (fr)
Inventor
Michael J. Polcyn
Original Assignee
Intervoice Limited Partnership
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intervoice Limited Partnership filed Critical Intervoice Limited Partnership
Priority to AU95022/98A priority Critical patent/AU9502298A/en
Publication of WO1999016229A1 publication Critical patent/WO1999016229A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H04M 3/42: Systems providing special services or facilities to subscribers
    • H04M 3/487: Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H04M 3/42: Systems providing special services or facilities to subscribers
    • H04M 3/487: Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M 3/493: Interactive information services, e.g. directory enquiries; arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 2201/00: Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/40: Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H04M 3/22: Arrangements for supervision, monitoring or testing
    • H04M 3/36: Statistical metering, e.g. recording occasions when traffic exceeds capacity of trunks

Definitions

  • Before describing the operation of the invention, the prior art systems shown in FIGURES 2, 3 and 4 will be discussed.
  • In the prior art system of FIGURE 2, storage device 201, typically embodied as a disk drive or some other bulk memory storage device, contains media associated with the IVR applications, such as digitized voice information.
  • Application media delivery device 202 facilitates moving media from storage device 201 to voice/telephony hardware 204.
  • Voice/telephony hardware 204 provides the interface between callers 206 and the IVR applications that are running on system 20. Callers 206 are connected to hardware 204 through switched public network 205.
  • IVR application 203 is connected to callers 206 and storage media 201 through voice/telephony hardware 204.
  • The IVR applications are somewhat static in that they are typically memory resident in anticipation of incoming phone calls. If a thousand applications had to run on this particular system, then a copy of each application and its associated media would have to be resident on each system node, even though statistically only a small percentage of those applications would be likely to run at any given time.
  • Thus, the nodes would be required to store IVR applications and media that are used infrequently or that, even if needed regularly, may be used by only a few callers.
  • A large IVR system of this type wastes much of its storage space holding copies of these rarely used applications.
  • System 20 also has problems due to the bandwidth of link 210 between storage device 201 and application media delivery device 202. Only a finite amount of application media can be moved across link 210 at one time. This media restriction limits the port count for system 20. If link 210 had an infinite bandwidth and storage device 201 had infinite storage capabilities, then system 20 could facilitate an unlimited number of ports. However, present capabilities typically restrict IVR systems to 96 or 120 ports maximum.
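The bandwidth ceiling described here can be illustrated with a rough back-of-envelope calculation. The function and figures below are illustrative assumptions, not taken from the patent; they assume each active port streams standard 64 kbit/s PCM voice:

```python
def max_ports(link_kbps: float, per_port_kbps: float = 64.0) -> int:
    """Upper bound on simultaneous voice ports a media link can feed.

    Assumes each active port streams 8 kHz, 8-bit PCM voice (64 kbit/s);
    a real system loses additional capacity to seeks and protocol overhead.
    """
    return int(link_kbps // per_port_kbps)

# A link sustaining about 7.7 Mbit/s of media feeds roughly 120 ports,
# consistent with the 96-120 port ceiling described in the text.
ports = max_ports(7_680)
```

Raising the port count therefore requires either a faster link and storage path or, as the invention proposes, moving media to the nodes ahead of demand.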
  • System 30, shown in FIGURE 3, illustrates another type of prior art IVR architecture.
  • System 30 has application 303 which is linked to application media delivery hardware 302 and voice/telephony hardware 304. Application 303 is connected to callers 306 through voice/telephony hardware 304 over switched public network 305.
  • Media from storage device 301 moves from application media delivery hardware 302 to application 303 across link 310. Like system 20, media distribution is restricted on system 30 due to the bandwidth limitations of link 310. Also, system 30 stores infrequently used IVR applications and media like system 20. As a result, both systems 20 and 30 waste memory space holding rarely used applications and media and both systems are limited in port count.
  • In FIGURE 4, another prior art IVR system is shown as system 40.
  • Storage device 401 holds all of the media for the IVR applications on the system.
  • Application media is distributed to nodes 402-1 to N across link 410 to application media delivery hardware 404 within each node 402.
  • Link 410 may be a LAN or some kind of data bus. The transferred application media is used by the application running in each node.
  • Voice/telephony hardware 405 links each node to switched public network 406 and to callers 407.
  • The applications, which typically require significantly less space than their associated media, are distributed to all of the nodes. This is a compromise system that attempts to better utilize the individual processor nodes 402.
  • In system 40, there is still a media delivery problem between storage device 401 and nodes 402 due to the bandwidth limitations of link 410. If a high performance disk system is used, system 40 may support about 1500 ports while serving voice media from a single disk drive. However, system 40 will not meet the requirements of a very large system having tens of thousands of ports. The most significant problem shared by the systems shown in FIGURES 2, 3 and 4 is the bandwidth limitation.
  • Ideally, an IVR system should dynamically anticipate resource requirements and move the IVR applications and associated media to the processor nodes during lull periods.
  • One embodiment of an Interactive Voice Response system incorporating the present invention is shown as system 10 in FIGURE 1.
  • Storage device 101 holds system 10's IVR applications and associated media.
  • The applications and application media stored on storage device 101 are master copies; they are the only permanently resident copies of the system's IVR applications.
  • Statistical demand engine 102 is linked to storage device 101 via link 110.
  • Demand engine 102 monitors the historical use of the IVR applications and from historical use data it anticipates future IVR application requirements. Based on the predicted requirements for the system, demand engine 102 proactively provides copies of the applications and their associated media to processor nodes 103-1 to N before they are needed by IVR system 10.
  • When an application that was not anticipated is needed, demand engine 102 downloads a copy of the required application and its associated media to an available node in real-time.
  • Thus, demand engine 102 performs both a predictive function to anticipate application demand levels and a real-time correction function to compensate if a demand was not properly anticipated.
  • Demand engine 102 also monitors when an IVR application is no longer needed by a node 103. Determining when an application is no longer needed can be accomplished by a variety of methods, including monitoring actual usage statistics or monitoring the elapsed time since the application was downloaded or last used. The system may also use the time of day to determine whether the application will be required again. As the utilization of the application decreases, demand engine 102 removes copies of the application from selected nodes in order to free memory space in local cache disk 104 and transient application 105.
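The removal criteria listed above can be sketched as a simple predicate. This is a hedged illustration rather than the patented implementation; the field names and thresholds are assumptions:

```python
def should_remove(app_state: dict, now: float,
                  idle_limit_s: float = 900.0,
                  in_off_hours=None) -> bool:
    """Decide whether a node's temporary application copy may be removed.

    Combines the criteria the text mentions: current usage, elapsed time
    since last use, and time of day. All names here are illustrative.
    """
    if app_state["active_calls"] > 0:
        return False                      # never evict an app serving calls
    idle = now - app_state["last_used"]
    if idle >= idle_limit_s:              # idle past the monitored window
        return True
    if in_off_hours is not None and in_off_hours(now):
        return True                       # time of day says demand is over
    return False
```

The 900-second default mirrors the 10 to 15 minute monitoring periods mentioned in the summary; an operator could tighten or relax it per application.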
  • Demand engine 102 can then load copies of other IVR applications and associated media into the available cache 104 and application 105 memory space in anticipation of future callers that will require different applications.
  • Each application can have different parameters for controlling how the application is handled by system 10. These parameters may be adjusted from time to time either by the system or by a user. For example, demand engine 102 can be set to keep only the lowest possible number of applications at the nodes in order to decrease memory retrieval time.
  • Specific application criteria can be stored in database 101 and provided to demand engine 102 over link 110. These criteria could be used to vary application parameters. Thus, a user or system administrator could use the criteria to vary the applications that are loaded on the various processor nodes without regard to the current system statistics. For example, a user could instruct demand engine 102 that a certain type of application will be needed during a specified time period. Accordingly, these criteria would cause demand engine 102 to load that type of application in a specified number of nodes.
  • The criteria can be input to system 10 by a keyboard, such as in situations where a user knows that a certain application demand will occur in a future time period. In other circumstances, system 10 may develop criteria on its own. For instance, when a certain application requires an operator, system 10 could set the criteria for that application so that it is limited to only those nodes which have operators available at a particular time.
  • Voice/telephony hardware 106 in each node 103 provides an interface between transient application 105 and switched public network 107. Callers 108 are connected to system 10 through network 107.
  • To monitor and predict demand, demand engine 102 uses a statistical database, such as data array 50 shown in FIGURE 5.
  • Data array 50 represents the system's IVR applications on the y-axis and time periods T 0 to T N on the x-axis. The time periods are of arbitrary duration, such as an hour, a day, a week, a month or a year, depending upon the type of system, the duration of incoming calls and the rate at which the current IVR applications are changed.
  • Initially, the user could populate the various time slots with an anticipated usage level for each IVR application.
  • The anticipated use could be derived from historical data or it could be an estimate.
  • This information will then be updated in real-time as the system is used and as the demand for applications varies over time.
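As a sketch of how such an array might be seeded with estimates and then blended with measured use, consider the following. The class, the blending weight, and the cell keys are illustrative assumptions, not the patent's actual data structure:

```python
from collections import defaultdict

class DemandArray:
    """Toy model of data array 50: cells keyed by (application, time slot)
    hold an expected caller count, seeded by a user and then blended with
    observed demand so that estimates converge on actual use."""

    def __init__(self, alpha: float = 0.3):
        self.cells = defaultdict(float)   # (app, slot) -> expected callers
        self.alpha = alpha                # weight given to new observations

    def seed(self, app: str, slot: int, estimate: float) -> None:
        self.cells[(app, slot)] = estimate

    def record(self, app: str, slot: int, observed: float) -> None:
        old = self.cells[(app, slot)]
        self.cells[(app, slot)] = (1 - self.alpha) * old + self.alpha * observed

    def expected(self, app: str, slot: int) -> float:
        return self.cells[(app, slot)]
```

With alpha = 0.3, a cell seeded at 100 callers that then observes only 50 moves to 85, so repeated observations gradually override a poor initial estimate.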
  • The data in array 50 may be inserted only by a coordinator or supervisor, so that the user can control which applications are readily available to future callers or which applications system 10 can use for outgoing messages.
  • Demand engine 102 would then be able to treat outgoing call applications differently from applications used for incoming calls.
  • In FIGURE 5, blocks 501 and 502 measure the historical use of the system applications and resources.
  • Block 501 represents a source of historical or real-time usage data for the ports and other resources in system 10. This usage data is provided to block 502, where statistics are gathered in order to measure the actual resource demand levels.
  • Block 503 uses the statistical data on an actual application demand to predict future application demand levels.
  • Block 504 represents an algorithm that is used to update data array 50 to reflect application use in each time period.
  • The data provided by, and the functions performed in, blocks 501-504 form a closed-loop adaptive system in which system 10 (FIGURE 1) can build an array of actual application use even though array 50 may have been initialized using estimated values.
  • The system can anticipate not only day-to-day changes but also weekly changes. For instance, in a seven-day cycle having five lull days and two busy days, the system would be able to anticipate the busy days proactively. Therefore, there could be one chart having several different versions of application demand for specific days of the week or times of the year. This capability depends upon the dimensions of the array and how much data the user desires to maintain on the system. For example, if the system is tracking use over the course of a year and the user wants to anticipate usage during a specific period, such as a holiday, many individual time slots or alternate tables would be required to maintain the year's worth of data.
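The weekly dimension described above might be organized as one demand table per day of the week, so that a busy Monday is predicted from past Mondays rather than from the preceding quiet Sunday. The table layout and the numbers below are illustrative, not from the patent:

```python
# One demand table per day of the week (0 = Monday .. 6 = Sunday); each
# table maps (application, time slot) to an average observed caller count.
weekly_tables = {day: {} for day in range(7)}

def predict(app: str, day_of_week: int, slot: int) -> float:
    """Expected callers for an application in a slot on a given weekday."""
    return weekly_tables[day_of_week].get((app, slot), 0.0)

weekly_tables[0][("call_director", 36)] = 240.0   # Mondays are busy
weekly_tables[5][("call_director", 36)] = 30.0    # Saturdays are quiet
```

Tracking longer cycles, such as holidays, would add further tables or time slots along the same lines.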
  • FIGURE 6 is a block diagram illustrating a system for assigning incoming calls to specific processor nodes 103.
  • Map 60 tracks processor and port availability. For each processor 103 there is a gauge of available resources 604 that indicates the level of resource use. As shown in map 60, gauge 604 is generally an empty/full measure of resource capacity.
  • Resource management and call assignment device 62 first looks at incoming call attributes, such as the originating number, ANI/DNIS or some particular message that is part of the incoming call signal which indicates that the call will require a specific IVR application. Resource management device 62 then selects an available processor node with the required application.
  • Block 61 illustrates a method of maintaining a pool of available processors, wherein there is a list 602 of available processors for each application 601. Device 62 issues a command back to the network or to a proprietary front end switch to direct the call to a particular processor node resource.
  • Alternatively, the incoming calls may be routed to general purpose processors wherein a menu is provided and, depending upon the program or application selected from the menu, resource manager 61 switches the call to a processor node loaded with the selected application. If the system is working properly, then histogram 50 will have served to load the required applications into a proper number of processors so that there is sufficient capacity to handle the call promptly using the proper applications. Otherwise, demand engine 102 will download the required application to an available node and the call will be connected to that node.
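A minimal sketch of the assignment step in blocks 61 and 62, assuming a per-application list of nodes that currently hold the application and a free-port gauge per node (all names and numbers here are illustrative):

```python
# Map 60's gauges: free ports per node; block 61's pool: nodes per app.
nodes = {"node1": {"free_ports": 0}, "node2": {"free_ports": 12}}
available = {"banking": ["node1", "node2"], "medical": ["node1"]}

def assign(app: str):
    """Pick a node holding `app` that has a free port and claim one port.

    Returns None when no listed node has capacity, which is the case in
    which the demand engine must load the application in real-time.
    """
    for name in available.get(app, []):
        if nodes[name]["free_ports"] > 0:
            nodes[name]["free_ports"] -= 1
            return name
    return None
```

Here a "banking" call skips node1 (full) and lands on node2, while a "medical" call finds every listed node at capacity and falls through to the real-time download path.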
  • FIGURE 7 illustrates a program for resource management and call assignment. A call arrives in step 701 and it is presented to the resource manager in step 702.
  • The presentation in step 702 may be an alerting signal, such as a ring signal along with some information about the call, or, in the case of an ISDN call, a call setup message or a Signaling System 7 (SS7) message.
  • In step 703, it is determined whether the call is mapped to a specific application based on the call's ANI/DNIS information.
  • Step 703 uses a map based on ANI to identify a specific application or an originating area. There are two ways of handling incoming calls. If there is a specific mapping, the call is branched to step 706, the application resource is selected from the available processor list, and the call is routed to the specific port or processor node via a redirect or switching function.
  • If the system does not know how to handle the call in step 703, an application can be run that presents a selection menu which interacts with the caller and allows the caller to select a specific application. The call is then redirected or switched to the selected application by steps 705 and 706. Finally, the call is completed in step 707 when it is routed to a specific port or processor node.
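The flow of steps 701-707 can be sketched as follows; `ani_map`, `run_menu`, and `pick_node` are hypothetical stand-ins for the ANI/DNIS map, the menu application, and the processor-selection step:

```python
def handle_call(call: dict, ani_map: dict, pick_node, run_menu) -> dict:
    """Route an incoming call per FIGURE 7's steps 703-707 (sketch)."""
    app = ani_map.get(call["ani"])        # step 703: specific mapping?
    if app is None:
        app = run_menu(call)              # steps 704-705: caller selects
    node = pick_node(app)                 # step 706: available processor
    return {"app": app, "node": node}     # step 707: call routed to node

routed = handle_call(
    {"ani": "8005551234"},
    ani_map={"8005551234": "banking"},
    pick_node=lambda app: "node2",
    run_menu=lambda call: "menu_choice",
)
```

A call whose ANI is in the map goes straight to a node holding its application; an unmapped call first passes through the menu application to determine which application it needs.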
  • The inventive concept can be used in any type of communication system, including voice systems, data systems such as the Internet, and wireless systems.
  • The inventive concept disclosed herein can also be used in the Advanced Intelligent Network (AIN) telecommunications environment (for example, between the service control point (SCP), intelligent peripheral (IP) or service node (SN)) as set out in Bellcore specification 1129, available from Bellcore, which specification is hereby incorporated by reference herein.
  • SCP service control point
  • IP intelligent peripheral
  • SN service node

Abstract

A system (10) and a method for distributing Interactive Voice Response applications are disclosed. Callers (108) access a telecommunications system (107) having a plurality of telecommunications applications (115). A statistical/predictive demand engine (102) monitors historical application demand levels to predict future application requirements. Telecommunications applications are provided to processor nodes (103-1 to 103-N) in anticipation of future incoming calls that will require the applications. Applications that are not provided to the processor nodes in advance may be provided in real-time when an incoming call requires an unanticipated application. The applications are removed from the processor nodes when they are no longer needed by incoming calls.

Description

DYNAMIC DISTRIBUTIONS OF APPLICATIONS AND ASSOCIATED RESOURCE UTILIZATION
TECHNICAL FIELD OF THE INVENTION
This invention relates to Interactive Voice Response (IVR) systems and more particularly to an efficient IVR system having a large number of ports.
BACKGROUND OF THE INVENTION
In the voice processing industry, Interactive Voice Response (IVR) units typically have been implemented on a single processor that is capable of supporting only a finite number of ports. The number of ports is primarily limited by the bandwidth of the processor and the associated storage media. In other words, if the user is playing a unique voice message or other information that is stored in memory, the processor can only support up to a finite number of communication ports before running out of bandwidth. In the past, this problem was solved by building large systems in which a number of individual processors were connected together. For example, many individual processors supporting 72 to 96 ports can be connected together to build very large systems with thousands of ports. However, there are problems with this type of system architecture.
IVR applications, such as recorded voice messages, must be readily accessible to the processors. Therefore, they are usually held by the storage media associated with each individual processor. For instance, in a system having 25 IVR applications and a number of processor nodes, there would need to be copies of each IVR application in every processor node to anticipate incoming calls. This is a very inefficient use of the processors and the storage media, especially in the case of very large service bureau applications which might have thousands of applications running at one time. There are other applications, such as voice recognition, which must be configured separately for each unit in order to operate properly. Accordingly, there is a need for a large port count system, such as an IVR system having thousands of ports, that can efficiently manage the IVR applications, media space and the associated signal processing resources and hardware. This will eliminate the requirement of having all of the IVR applications continuously resident in every system unit or node. Such a system would also preclude having the system resources tied to a particular processor unit at any given time.
The applications used in these large port count systems include IVR applications such as: customized long distance carrier services, 1-800 number call processing and routing, call directors, banking applications, medical applications and anything that is handled in large volumes at a network level or a large service bureau level where there are many customers calling the system. In addition, there may be associated application media that is used by the individual IVR applications in operation. This application media must also be stored in, or accessible to, each system unit or node.
Today, most processors support 96 to 120 ports. Therefore, to implement a thousand port IVR, a system comprised of 10 individual 96 to 120 port processors would be required. Each individual processor would need its own disk storage media, voice and telephony hardware and copies of the system applications and associated media. These individual units would also require resources such as voice recognition, text to speech and any other resources the application may need at a given time. This implementation of a thousand port IVR would be inefficient and cumbersome. Accordingly, there is a need for an IVR system that has a large number of telecommunication ports and that can efficiently manage the distribution and use of the IVR applications and the associated application media.
SUMMARY OF THE INVENTION
In the present invention, individual nodes are comprised of a processor, associated memory and voice and telecommunications hardware. The nodes are treated as a set of resources that are managed by a monitoring program which operates as a statistical/predictive demand engine. The demand engine monitors the system's use of the individual IVR applications and estimates the types of resources and applications that will be required to handle future callers. For instance, the demand engine may estimate the number of ports that will be required for peak capacity periods and the various voice recognition resources that will be needed by each port during those periods. The demand engine also selects which processor nodes should run the applications. A resource manager will then assign incoming callers to available processor nodes containing the required IVR application. Usually, the present invention anticipates the number of future callers and downloads the required IVR applications and associated media that are required to handle those calls. Occasionally, the system may fail to predict the needs of a caller. In those cases in which the needs of a caller have not been anticipated and, as a result, a specific IVR application has not been preloaded to a processor node, the present invention provides for real-time copying of the required application and associated media to an available node in order to process the call. In a preferred embodiment, the system monitors 10 to 15 minute periods to anticipate the demand level for future time periods and to determine the applications and associated voice media that will be required by the processor nodes during those periods. The system maintains a single master copy of each IVR application and the associated application media. When it is predicted that an application will be needed in a future time period, the system provides a temporary copy to a processing node in anticipation of the future call. 
The system will provide copies of the IVR applications to as many nodes as required to handle the predicted number of callers. The system also removes the temporary application copies when they are no longer needed by the processor nodes to handle calls. This has the effect of freeing processor node memory and processor capability so that the node can accept other IVR applications to handle additional future callers requiring different applications. Accordingly, the system provides a much more efficient utilization of the memory and processor capacity of the voice telephony nodes thereby allowing for faster processing of calls.
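Sizing the number of nodes that receive temporary copies reduces to a simple division. In this sketch, the 96-port figure from the background discussion is used as an assumed per-node capacity:

```python
import math

def nodes_needed(predicted_callers: int, ports_per_node: int = 96) -> int:
    """Nodes that must hold a temporary copy of an application to cover
    the predicted concurrent caller count (per-node capacity is assumed)."""
    return math.ceil(predicted_callers / ports_per_node)

# 1000 predicted concurrent callers at 96 ports per node requires 11 nodes.
count = nodes_needed(1000)
```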
Accordingly, it is one object of the present invention to provide an IVR system with a very large number of ports that can provide various IVR applications to callers as required.
It is another object of the present invention to provide a system in which master copies of the IVR applications are stored in a central location from which they can be temporarily provided to one or more nodes in anticipation of the requirements of future callers or to be used by the nodes to process outbound calls.
A feature of the present invention allows for unanticipated IVR applications to be provided to a processor node in response to the specific requirements of a caller. A copy of the unanticipated application is provided in real-time from the central storage location to an available node when no other node has a copy of the application or when all of the nodes having the application are at their maximum capacity.
Another feature of the present invention provides for removing unneeded applications from the nodes when the system predicts that the application will not be required to handle future calls. The memory and processor capability made available by removing unneeded applications are then used to run other applications which are predicted to be needed to handle future calls or which are demanded by future calls.
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and the specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
FIGURE 1 is a block diagram of an IVR system embodying the present invention;
FIGURE 2 is a block diagram of a prior art IVR system;
FIGURE 3 is a block diagram of an alternate embodiment of a prior art IVR system;
FIGURE 4 is a block diagram of a high port count prior art IVR system;
FIGURE 5 is a system for monitoring and predicting the use of the system resources of the present invention;
FIGURE 6 is a system for determining the availability of processors in the present invention; and
FIGURE 7 is an algorithm for handling calls in the system of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Before describing the operation of the invention, the prior art systems shown in FIGURES 2, 3 and 4 will be discussed.
Referring to FIGURE 2, a typical prior art IVR system is shown as system 20. Storage device 201, typically embodied as a disk drive or some other bulk memory storage device, contains media associated with the IVR applications, such as digitized voice information. Application media delivery device 202 facilitates moving media from storage device 201 to voice/telephony hardware 204. Voice/telephony hardware 204 provides the interface between callers 206 and the IVR applications that are running on system 20. Callers 206 are connected to hardware 204 through switched public network 205.
IVR application 203 is connected to callers 206 and storage media 201 through voice/telephony hardware 204. In this type of system, the IVR applications are somewhat static in that they are typically memory resident in anticipation of incoming phone calls. If a thousand applications had to run on this particular system, then a copy of each application and its associated media would have to be resident on each system node, even though statistically only a small percentage of those applications would be likely to run at any given time. As a result, in a large IVR system comprised of many system 20 type nodes, the nodes would be required to store IVR applications and media that are used infrequently or that, even when needed regularly, are used by only a few callers. A large IVR system of this type wastes much of its storage space holding copies of these rarely used applications.
System 20 also has problems due to the bandwidth of link 210 between storage device 201 and application media delivery device 202. Only a finite amount of application media can be moved across link 210 at one time. This media restriction limits the port count for system 20. If link 210 had an infinite bandwidth and storage device 201 had infinite storage capabilities, then system 20 could facilitate an unlimited number of ports. However, present capabilities typically restrict IVR systems to 96 or 120 ports maximum.
In FIGURE 3, system 30 illustrates another type of architecture for prior art IVR systems. System 30 has application 303 which is linked to application media delivery hardware 302 and voice/telephony hardware 304. Application 303 is connected to callers 306 through voice/telephony hardware 304 over switched public network 305. Media from storage device 301 moves from application media delivery hardware 302 to application 303 across link 310. Like system 20, media distribution is restricted on system 30 due to the bandwidth limitations of link 310. Also, system 30 stores infrequently used IVR applications and media like system 20. As a result, both systems 20 and 30 waste memory space holding rarely used applications and media and both systems are limited in port count.
Turning to FIGURE 4, another prior art IVR system is shown as system 40. Storage device 401 holds all of the media for the IVR applications on the system. Application media is distributed to nodes 402-1 to N across link 410 to application media delivery hardware 404 within each node 402. Link 410 may be a LAN or some kind of data bus. The transferred application media is used in each node by application 403. Voice/telephony hardware 405 links each node to switched public network 406 and to callers 407. In this arrangement, the applications, which typically require significantly less space than their associated media, are distributed to all of the nodes. This is a compromise system that attempts to better utilize the individual processor nodes 402. However, in system 40 there is still a media delivery problem between storage device 401 and nodes 402 due to the bandwidth limitations of link 410. If a high performance disk system is used, system 40 may support about 1500 ports while handling voice media from a single disk drive. However, system 40 will not meet the requirements of a very large system having tens of thousands of ports.
The most significant difference among the systems shown in FIGURES 2, 3 and 4 is the bandwidth limitation. Systems 20 and 30 reach their bandwidth limit quickly, at about 96 or 120 ports. In system 40, on the other hand, the bandwidth limit is extended to about 1500 ports, where it finally reaches the storage capacity limit of disk 401 and the delivery capacity limit of link 410. Ideally, in order to avoid bandwidth limitations during periods of peak system use, an IVR system should dynamically anticipate resource requirements and move the IVR applications and associated media to the processor nodes during lull periods.
One embodiment of an Interactive Voice Response system incorporating the present invention is shown as system 10 in FIGURE 1. Storage device 101 holds system 10's IVR applications and associated media. The applications and application media stored on storage device 101 are master copies and they are the only permanently resident copies of the system's IVR applications. Statistical demand engine 102 is linked to storage device 101 via link 110. Demand engine 102 monitors the historical use of the IVR applications and, from historical use data, it anticipates future IVR application requirements. Based on the predicted requirements for the system, demand engine 102 proactively provides copies of the applications and their associated media to processor nodes 103-1 to N before they are needed by IVR system 10. In cases where an incoming call requires an IVR application that has not been anticipated and, therefore, is not preloaded in one of processor nodes 103-1 to 103-N, demand engine 102 downloads a copy of the required application and its associated media to an available node in real-time.
Accordingly, demand engine 102 performs both a predictive function to anticipate application demand levels and a real-time correction function to compensate if a demand was not properly anticipated. Demand engine 102 also monitors when an IVR application is no longer needed by a node 103. Determining when an application is no longer needed can be accomplished by a variety of methods, including monitoring actual usage statistics or monitoring the elapsed time since the application was downloaded or last used. The system may also use the time of day to determine whether the application will be required again. As the utilization of the application decreases, demand engine 102 removes copies of the application from selected nodes in order to free memory space in local cache disk 104 and transient application 105. Demand engine 102 can then load copies of other IVR applications and associated media into the available cache 104 and application 105 memory space in anticipation of future callers that will require different applications. Each application can have different parameters for controlling how the application is handled by system 10. These parameters may be adjusted from time to time either by the system or by a user. For example, demand engine 102 can be set to keep only the lowest possible number of applications at the nodes in order to decrease memory retrieval time.
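One simple way to realize the removal behavior described above is an idle-time eviction policy; the function name, the timestamp bookkeeping, and the idle threshold are assumptions for illustration only:

```python
# Illustrative eviction policy: remove a cached application copy when it has
# been idle longer than a threshold, freeing node memory for applications
# predicted to be needed by future callers. Structures are hypothetical.

def evict_idle_applications(node_cache, last_used, now, idle_limit):
    """node_cache: set of resident app names; last_used: app -> timestamp."""
    for app in list(node_cache):
        if now - last_used.get(app, 0) > idle_limit:
            node_cache.discard(app)   # free space for other application copies
    return node_cache
```

A production policy could equally key on predicted demand, time of day, or per-application parameters, as the paragraph above notes; elapsed idle time is simply the easiest variant to show.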
If desired, specific application criteria can be stored in database 101 and would be provided to demand engine 102 over link 110. These criteria could be used to vary application parameters. Thus, a user or system administrator could use the criteria to vary the applications that are loaded on the various processor nodes without regard to the current system statistics. For example, a user could instruct demand engine 102 that a certain type of application will be needed during a specified time period. Accordingly, these criteria would cause demand engine 102 to load that type of application in a specified number of nodes. The criteria can be input to system 10 by a keyboard, such as in situations where a user knows that a certain application demand will occur in a future time period. In other circumstances, system 10 may develop criteria on its own. For instance, when a certain application requires an operator, system 10 could set the criteria for that application so that it is limited to only those nodes which have operators available at a particular time.
Voice/telephony hardware 106 in each node 103 provides an interface between transient application 105 and switched public network 107. Callers 108 are connected to system 10 through network 107.
In a preferred embodiment, demand engine 102 uses a statistical database, such as data array 50 as shown in FIGURE 5. Data array 50 represents the system's IVR applications on the y-axis and time periods T0 to TN on the x-axis. The time periods are of arbitrary duration, such as an hour, a day, a week, a month or a year, depending upon the type of system, the duration of incoming calls and the rate at which the current IVR applications are changed.
To initialize the system, the user could populate the various time periods with an anticipated usage level for each IVR application. The anticipated use could be derived from historical data or it could be an estimation. In one embodiment, this information will then be updated in real-time as the system is used and as the demand for applications varies over time. In other embodiments, the data in array 50 may only be inserted by a coordinator or supervisor so that the user can control which applications are readily available to future callers or which applications system 10 can use for outgoing messages. Demand engine 102 would then be able to treat outgoing call applications differently from applications used for incoming calls.
In an adaptive system or in a system with real-time automatic updating, blocks 501 and 502 measure the historical use of the system applications and resources. Block 501 represents a source of historical or real-time usage data for the ports and other resources in system 10. This historical usage is provided to block 502, where statistics are gathered in order to measure the actual resource demand levels. Block 503 then uses the statistical data on actual application demand to predict future application demand levels. Block 504 represents an algorithm that is used to update data array 50 to reflect application use in each time period. The data provided by and the functions performed in blocks 501-504 form a closed loop adaptive system in which system 10 (FIGURE 1) can build an array of actual application use even though array 50 may have been initialized using estimated values.
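The closed loop formed by blocks 501-504 might update data array 50 by blending each observed demand level into the stored estimate, for example with exponential smoothing; the smoothing factor and data layout here are assumptions, not details taken from the specification:

```python
# Hypothetical closed-loop update for data array 50: each (application,
# time-slot) cell holds an estimated demand that is nudged toward the demand
# actually observed, so an array initialized with guesses converges toward
# real usage over repeated cycles.

def update_demand_array(array, app, slot, observed, alpha=0.3):
    """array: dict mapping (app, slot) -> estimated demand level."""
    old = array.get((app, slot), 0.0)
    array[(app, slot)] = (1 - alpha) * old + alpha * observed
    return array[(app, slot)]
```

With repeated observations of the same demand level, the stored estimate approaches that level regardless of the initial seed value, which matches the adaptive behavior described for blocks 501-504.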
Depending upon the time frame that is chosen for periods T0 to TN, the system can anticipate not only day-to-day changes, but also weekly changes. For instance, in a seven day cycle having five lull days and two busy days, the system would be able to anticipate the busy days proactively. Therefore, there could be one chart having several different versions of application demand for specific days of the week or times of the year. This capability depends upon the dimensions of the array and how much data the user desires to maintain on the system. For example, if the system is tracking use over the course of a year and the user wants to anticipate usage during a specific period of time, such as during a holiday, that would require many individual time slots or alternate tables to maintain the year's worth of data. If the system is managed on a narrower level, such as the typical use during a 24-hour day, it could rely on a fast predictor mode to update the histogram in near real-time. The length of the update cycle and the amount of data used to monitor usage are dependent upon the time period that is analyzed, the size of the system and, if desired, the particular application.
FIGURE 6 is a block diagram illustrating a system for assigning incoming calls to specific processor nodes 103. Map 60 tracks processor and port availability. For each processor 103 there is a gauge of available resources 604 that indicates the level of resource use. As shown in map 60, gauge 604 is generally an empty/full measure of resource capacity. In the preferred embodiment, there would be some variable in the system software that reflects the actual usage of each processor resource in real-time. In the case of a real-time system, the goal is to anticipate application demand and to match applications with available processors.
Because the applications are distributed dynamically among the processors, a specific application will not always be on the same processor node; therefore, the system must track the particular applications that are assigned to each processor node. This is accomplished by resource management and call assignment device 62.
Resource management and call assignment device 62 first looks at incoming call attributes, such as the originating number, ANI/DNIS or some particular message that is part of the incoming call signal which indicates that the call will require a specific IVR application. Resource management device 62 then selects an available processor node with the required application. Block 61 illustrates a method of maintaining a pool of available processors, wherein there is a list 602 of available processors for each application 601. Device 62 issues a command back to the network or to a proprietary front end switch to direct the call to a particular processor node resource.
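The per-application list 602 of available processors maintained in block 61 could be kept in a structure such as the following; the class name and the first-available selection policy are illustrative assumptions:

```python
# Sketch of the availability pool of block 61: for each application, a list of
# processor nodes that hold a copy and still have spare capacity. Device 62
# would consult such a pool before redirecting an incoming call.

class AvailabilityPool:
    def __init__(self):
        self.pool = {}                       # app name -> list of node ids

    def mark_loaded(self, app, node):
        """Record that a node now holds a copy of the application."""
        self.pool.setdefault(app, []).append(node)

    def mark_full(self, app, node):
        """Hide a node that has reached capacity for this application."""
        nodes = self.pool.get(app, [])
        if node in nodes:
            nodes.remove(node)

    def pick(self, app):
        """Return an available node, or None to trigger a real-time download."""
        nodes = self.pool.get(app)
        return nodes[0] if nodes else None
```

Returning None from pick corresponds to the case described above in which no node holds the application (or all holding nodes are at capacity), prompting demand engine 102 to download a fresh copy.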
The incoming calls may be routed to general purpose processors wherein a menu is provided and, depending upon the program or application selected from the menu, resource manager 61 switches the call to a processor node loaded with the selected application. If the system is working properly, then histogram 50 will have served to load the required applications into a proper number of processors so that there is sufficient capacity to handle the call promptly using the proper applications. Otherwise, demand engine 102 will download the required application to an available node and the call will be connected to that node.
FIGURE 7 illustrates a program for resource management and call assignment. A call arrives in step 701 and is presented to the resource manager in step 702. The presentation in step 702 may be an alerting signal, such as a ring signal along with some information about the call, or, in the case of an ISDN call, a call setup message or a signalling system seven message. In step 703, it is determined whether the call is mapped to a specific application by the call's ANI/DNIS information. Step 703 uses a map based on ANI to identify a specific application or an originating area. There are two ways of handling incoming calls. If there is a specific mapping, the call is branched to step 706, where the application resource is selected from the available processor list, and the call is then routed to the specific port or processor node via a redirect or switching function. If the system does not know how to handle the call in step 703, an application can be run that presents a selection menu which interacts with the caller and allows the caller to select a specific application. The call is then redirected or switched to the selected application by steps 705 and 706. Finally, the call is completed in step 707 when it is routed to a specific port or processor node.
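The call-handling flow of steps 701 through 707 can be sketched as a single function; the helper callables (run_menu, download_app) and the data shapes are hypothetical stand-ins for the figure's steps:

```python
# Illustrative rendering of the FIGURE 7 flow. app_map stands in for the
# ANI/DNIS map of step 703; run_menu for the caller menu of steps 704-705;
# download_app for the real-time copy performed when no node is available.

def handle_call(call, app_map, available_nodes, download_app, run_menu):
    # Step 703: try to map the call to an application via its ANI/DNIS.
    app = app_map.get(call["ani"])
    if app is None:
        # No mapping: present a selection menu and let the caller choose.
        app = run_menu(call)
    # Step 706: pick an available node; download in real time if none has it.
    nodes = available_nodes.get(app)
    if not nodes:
        node = download_app(app)   # real-time copy to an available node
    else:
        node = nodes[0]
    # Step 707: route the call to the selected port or processor node.
    return {"app": app, "node": node}
```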
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. For example, while the embodiment described is an IVR system for inbound and outbound calling, the inventive concept can be used in any type of communication system, including voice systems, data systems, such as the internet, and wireless systems. The inventive concept disclosed herein can be used in the Advanced Intelligent Network (AIN) telecommunications environment (for example, between the service control point (SCP), intelligent peripheral (IP) or service node (SN)) as set out in Bell Core specification 1129 and available from Bell Core, which specification is hereby incorporated by reference herein. Also, incorporated by reference herein is U.S. Patent No. 5,469,500 to Satter et al., issued November 21, 1995, and owned by a common assignee.
It will also be understood that in some applications the statistical engine of the present invention could, in fact, reside at one or more of the nodes or remote locations.

Claims

WHAT IS CLAIMED IS:
1. A communication system comprising: at least one node for interfacing between callers and said communication system; a storage device for holding application media, wherein said application media is used by at least one node for controlling interactions to and from said communication system; means for predicting when a particular application will be required for interfacing with a caller; and means for copying predicted ones of said application media to said at least one node for a period of time.
2. The system of claim 1 wherein said copying means further includes means for copying to said at least one node a requested one of said application media that has not been predicted.
3. The system of claim 1 further including: means for removing selected copies of said application media copied to said at least one node.
4. The system of claim 1 wherein said predicting means includes a statistical demand engine.
5. The system of claim 4 wherein said predicting means includes a statistical demand engine for maintaining a record of the historical use of each of said applications over a period of time.
6. The system of claim 5 wherein said statistical demand engine uses said record of historical use to predict a future demand for each of said application media.
7. The system of claim 5 wherein said period of time is variable.
8. The system of claim 1 having a plurality of said nodes and wherein said copying means can copy predicted application media to one or more of said nodes.
9. The system of claim 3 wherein said removing means includes a statistical demand engine.
10. The system of claim 1 having a plurality of said nodes and wherein each of said plurality of nodes comprises: means for temporarily holding a copy of said copied application media; and a processor for executing selected ones of said temporarily held application media.
11. The system of claim 10 wherein said application media is selected in response to a call incoming to one of said nodes from said communication system.
12. The system of claim 10 wherein said application media is selected in response to a call originated via said node to said communication system.
13. The system of claim 10 further comprising: means for monitoring the availability of each of said nodes and for assigning incoming calls to an available node.
14. The system of claim 13 wherein said monitoring means includes means for monitoring which of said applications are assigned to each of said nodes.
15. The system of claim 10 wherein said predicting means includes a statistical demand engine for maintaining a record of the historical use of each of said applications over a period of time.
16. The system of claim 15 wherein said statistical demand engine uses said record of historical use to predict a future demand for each of said application media.
17. The system of claim 15 wherein said period of time is variable.
18. A method of operating a communication system comprising the steps of: monitoring a demand level for each of a plurality of applications used by said system; predicting a future requirement for each of said applications from said demand levels; and providing a copy of certain applications to one or more of a plurality of communication nodes in response to predicted future application requirements with respect to each said node.
19. The method of claim 18 further comprising the step of: providing a copy of a requested application to one of said nodes when said node requires an application that has not been predicted for that node.
20. The method of claim 18 further comprising the step of: removing selected application copies from one or more of said nodes when said selected application is no longer required by said node.
21. The method of claim 20 wherein said removing step is statistically controlled.
22. The method of claim 20 wherein said removing step occurs as a result of the passage of time.
23. The method of claim 20 wherein said removing step is under the control of selectable variables.
24. The method of claim 18 further comprising the steps of: monitoring the availability of each node; and assigning a communication connection to an available one of said nodes.
25. The method of claim 18 wherein said demand monitoring step further comprises: monitoring which of said applications are used by each of said nodes.
26. The method of claim 18 wherein each of said nodes comprises: a memory for holding a temporary copy of said copied application; and a processor coupled to said memory for executing selected ones of said copied applications.
27. The method of claim 18 wherein said monitoring step further comprises: maintaining a record of the historical use of each of said applications over a period of time.
28. The method of claim 27 wherein said record of historical use is used in said predicting step to predict a future demand for each of said applications.
29. The method of claim 27 wherein said period of time is variable.
30. A method of handling an incoming call in a communication system, wherein said system comprises a plurality of nodes for receiving incoming calls, said method comprising the steps of: identifying a specific application to handle a particular incoming call; if one of said nodes presently has a copy of said identified specific application, routing said incoming call to said node containing said specific application; and if no node presently has a copy of said specific application, providing a copy of said specific application to an available node and routing said incoming call to said available node.
31. The method of claim 30 wherein said identifying step further comprises: determining whether said particular incoming call is to be handled by a preselected specific application; and if no specific application has been preselected for said incoming call, providing a menu of available specific applications for selection by a caller.
32. The method of claim 31 wherein said determining step further comprises: matching said incoming call with said preselected specific application using automatic caller identification.
33. The method of claim 30 further comprising the step of: removing said copy of said specific application from said node after said incoming call is disconnected from said system.
34. A system for processing incoming calls comprising: means for storing communication applications and associated application media; means for processing said incoming calls using said communication applications; means for statistically predicting when certain of said applications will be required; means for providing a copy of said required application and associated application media to an available one of said processing means before the predicted time when said processing means will require said application; and means for routing said incoming calls to said available one of said processing means.
35. The system of claim 34 wherein said application providing means provides a copy of said required communication application to said processing means after said incoming call is connected to said system.
36. The system of claim 34 further comprising: means for removing selected ones of said applications from said processing means.
37. The system of claim 36 wherein said removing means is operable when said predicting means predicts that said application is no longer required at said processing means.
38. A method of predictably removing an application from a communication system processing node comprising the steps of: predicting a future demand for said application; and removing said application from said node based upon said predicted future demand.
39. The method of claim 38 wherein said processing node is remote from a central application storage location.
40. The method of claim 39 wherein said application is a copy of an application that is stored at said central location.
41. The method of claim 38 wherein said application is removed when it is predicted that there will be no future demand during a certain time period.
42. The method of claim 38 wherein said application is removed when said predicted future demand is below a certain level.
43. The method of claim 38 wherein said application is removed when said predicted future demand is less than a predicted future demand for a second application.
44. The method of claim 38 wherein said communication system is comprised of a plurality of processing nodes.
45. The method of claim 44 wherein one or more of said plurality of nodes each have a copy of said application.
46. The method of claim 45 wherein said copy of said application is removed from one or more of said plurality of nodes when said predicted future demand can be satisfied by other of said plurality of nodes having a copy of said application.
47. The method of claim 45 wherein said copies of said specific application are removed from each of said plurality of nodes when said predicted future demand is below a certain level.
48. A system for removing an application from a communication system processing node comprising: means for predicting a future demand for said application; and means for removing said application from said node based upon said predicted future demand.
49. The system of claim 48 wherein said node is remote from a central application storage location and wherein said application is a copy of an application stored at said central location.
50. The system of claim 48 wherein said removing means removes said application when said predicting means predicts that there will be no future demand for said application.
51. The system of claim 50 wherein said predicting means predicts that there will be no future demand during a certain time period.
52. The system of claim 51 wherein said application is removed when said predicting means predicts that said future demand will be below a certain level.
53. The system of claim 48 wherein said application is removed when said predicted future demand is less than a predicted future demand for a second application.
54. The system of claim 48 wherein said communication system is comprised of a plurality of processing nodes.
55. The system of claim 54 wherein one or more of said plurality of nodes each have a copy of said application.
56. The system of claim 55 wherein said copy of said application is removed from one or more of said plurality of nodes when said predicted future demand can be satisfied by other of said plurality of nodes having a copy of said application.
57. The system of claim 55 wherein said copies of said specific application are removed from each of said plurality of nodes when said predicted future demand is below a certain level.
58. A system for use in a telecommunication network, the network operating such that a central processor maintains control of applications that are required to be executed from time to time under control of one or more remote processors, each such remote processor operating, at least in part, from data obtained from duplicate applications residing temporarily at the physical location of the remote processor, said system comprising: means at each such remote location for temporarily storing thereat copies of certain of said applications that are under control of said central processor, said copies of said applications being stored in anticipation of being executed at said remote location; means for predicting which subset of said applications should be transferred for a period of time to said remote location storing means; and means for transferring applications from said central control to each remote storing means under control of said predicting means.
59. The system set forth in claim 58 wherein said transferring means is further operative in response to said predicting means for removing certain of said temporarily stored applications from said remote location storage means.
60. The system set forth in claim 58 wherein said predicting means is located at each of said remote locations.
61. The system set forth in claim 58 wherein said predicting means is located at said central processor.
62. The system set forth in claim 58 wherein said predicting means comprises: means for monitoring which applications are actually used at each remote location for specific time periods.
63. The system set forth in claim 58 further comprising: means for copying a particular application, or set of applications, from said central processor to a particular remote location upon the determined necessity to execute a particular application at said remote location when said particular application had not already been copied to said remote location in response to said predicting means.
64. The system set forth in claim 58 further including: means for transferring applications from said central control to a particular remote storing means without regard to said predicted periods of time, wherein said transferring means includes: means for transferring the same application to different remote locations.
65. The system set forth in claim 58 wherein said transferring means includes: means for transferring different applications to the same remote location.
66. The system set forth in claim 58 wherein said transferring means includes: means for transferring the same application to different remote locations.
67. The system set forth in claim 58 wherein the period of time that an application is stored at a particular location is dependent upon a predicted period of time for that particular application for that particular remote location.
68. A method of controlling applications for use in a telecommunication network, the network operating such that a central processor maintains control of the applications which are required to be executed from time to time under control of one or more remote processors, each such remote processor operating, at least in part, from data obtained from duplicate applications residing temporarily at the physical location of the remote processor, comprising the steps of: temporarily storing at the remote processor copies of certain of said applications that are under control of said central processor, said copies of said applications being stored in anticipation of being executed at said remote processor; predicting which subset of said applications should be transferred for a period of time to said remote processor; and transferring applications from said central control to each remote processor based upon a predicted future demand determined in said predicting step.
69. The method of claim 68 further comprising the step of: removing certain of said temporarily stored applications from said remote processor in response to said predicted future demand.
70. The method of claim 68 wherein each of said remote locations individually performs said predicting step.
71. The method of claim 68 wherein said central processor performs said predicting step.
72. The method of claim 68 wherein said predicting step further comprises: monitoring which applications are actually used at each remote processor for specific time periods.
73. The method of claim 68 further comprising the step of: copying a particular application, or set of applications, from said central processor to a particular remote processor upon the determined necessity to execute a particular application at said remote processor and said particular application had not already been copied to said remote location.
74. The method of claim 68 further including the step of: transferring applications from said central control to a particular remote processor without regard to said predicted periods of time, wherein said transferring step includes: transferring the same application to different remote locations.
75. The method of claim 68 wherein said transferring step further comprises: transferring different applications to the same remote location.
76. The method of claim 68 wherein said transferring step further comprises: transferring the same application to different remote locations.
77. The method of claim 68 wherein the period of time that an application is stored at a particular location is dependent upon a predicted period of time for that particular application for that particular remote location.

AMENDED CLAIMS
[received by the International Bureau on 10 March 1999 (10.03.99); original claims 32, 59 and 63 amended, remaining claims unchanged (3 pages)]

if one of said nodes presently has a copy of said identified specific application, routing said incoming call to said node containing said specific application; and if no node presently has a copy of said specific application, providing a copy of said specific application to an available node and routing said incoming call to said available node.
31. The method of claim 30 wherein said identifying step further comprises: determining whether said particular incoming call is to be handled by a preselected specific application; and if no specific application has been preselected for said incoming call, providing a menu of available specific applications for selection by a caller.
32. The method of claim 31 wherein said determining step further comprises: matching said incoming call with said preselected specific application using automatic caller identification.
33. The method of claim 30 further comprising the step of: removing said copy of said specific application from said node after said incoming call is disconnected from said system.
34. A system for processing incoming calls comprising: means for storing communication applications and associated application media; means for processing said incoming calls using said communication applications; means for statistically predicting when certain of said applications will be required;
57. The system of claim 55 wherein said copies of said specific application are removed from each of said plurality of nodes when said predicted future demand is below a certain level.
58. A system for use in a telecommunication network, the network operating such that a central processor maintains control of applications that are required to be executed from time to time under control of one or more remote processors, each such remote processor operating, at least in part, from data obtained from duplicate applications residing temporarily at the physical location of the remote processor, said system comprising: means at each such remote location for temporarily storing thereat copies of certain of said applications that are under control of said central processor, said copies of said applications being stored in anticipation of being executed at said remote location; means for predicting which subset of said applications should be transferred for a period of time to said remote location storing means; and means for transferring applications from said central control to each remote storing means under control of said predicting means.
59. The system set forth in claim 58 wherein said transferring means is further operative in response to said predicting means for removing certain of said temporarily stored applications from said remote location storage means.
60. The system set forth in claim 58 wherein said predicting means is located at each of said remote locations.
61. The system set forth in claim 58 wherein said predicting means is located at said central processor.
62. The system set forth in claim 58 wherein said predicting means comprises: means for monitoring which applications are actually used at each remote location for specific time periods.
63. The system set forth in claim 58 further comprising: means for copying a particular application, or set of applications, from said central processor to a particular remote location upon the determined necessity to execute a particular application at said remote location and said particular application had not already been copied to said remote location in response to said predicting means.
64. The system set forth in claim 58 further including: means for transferring applications from said central control to a particular remote storing means without regard to said predicted periods of time, wherein said transferring means includes: means for transferring the same application to different remote locations.
65. The system set forth in claim 58 wherein said transferring means includes: means for transferring different applications to the same remote location.
66. The system set forth in claim 58 wherein said transferring means includes: means for transferring the same application to different remote locations.
67. The system set forth in claim 58 wherein the period of time that an application is stored at a particular location is dependent upon a predicted period of time for that particular application for that particular remote location.
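Read together, amended claims 58, 59 and 63 describe a control loop: transfer the predicted subset of applications to each remote storing means, remove temporarily stored copies the predictor no longer calls for, and fall back to an on-demand copy when execution is requested for an application that was never prefetched. A minimal sketch of that loop follows (class and method names are hypothetical, not from the patent):

```python
from collections import defaultdict

class CentralController:
    """Central processor that keeps the master applications and
    manages temporary copies at each remote location."""

    def __init__(self, master_apps):
        self.master = dict(master_apps)   # app name -> application code/media
        self.cached = defaultdict(set)    # location -> names cached there

    def distribute(self, location, predicted_apps):
        """Transfer the predicted subset (claim 58) and remove
        temporarily stored applications outside it (claim 59)."""
        predicted = set(predicted_apps)
        self.cached[location] &= predicted   # removal of stale copies
        self.cached[location] |= predicted   # transfer of predicted copies

    def execute(self, location, app):
        """Run `app` at `location`, copying it on demand if the
        predictor never staged it there (claim 63)."""
        if app not in self.cached[location]:
            self.cached[location].add(app)   # on-demand copy from central
        return self.master[app]

central = CentralController({"banking_ivr": "<app-1>", "weather_line": "<app-2>"})
central.distribute("branch_a", ["banking_ivr"])
central.execute("branch_a", "weather_line")   # triggers the on-demand copy
print(sorted(central.cached["branch_a"]))
```

Note that the same `distribute` call realizes claims 65 and 66 as well: nothing prevents transferring several applications to one location, or the same application to several locations.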
PCT/US1998/019835 1997-09-23 1998-09-23 Dynamic distributions of applications and associated resource utilization WO1999016229A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU95022/98A AU9502298A (en) 1997-09-23 1998-09-23 Dynamic distributions of applications and associated resource utilization

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US93592897A 1997-09-23 1997-09-23
US08/935,928 1997-09-23

Publications (1)

Publication Number Publication Date
WO1999016229A1 true WO1999016229A1 (en) 1999-04-01

Family ID=25467902

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/019835 WO1999016229A1 (en) 1997-09-23 1998-09-23 Dynamic distributions of applications and associated resource utilization

Country Status (2)

Country Link
AU (1) AU9502298A (en)
WO (1) WO1999016229A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10035869A1 (en) * 2000-07-14 2002-01-24 Deutsche Telekom Ag Procedure for simplifying the dialog management in speech dialog systems
WO2002028077A2 (en) * 2000-09-25 2002-04-04 National Notification Center, Llc Call processing system with interactive voice response
WO2002028077A3 (en) * 2000-09-25 2003-07-03 Nat Notification Ct Llc Call processing system with interactive voice response
DE19939057C2 (en) * 1999-08-18 2002-07-04 Siemens Ag Method for updating subscriber-related data in a telecommunications network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355406A (en) * 1991-02-21 1994-10-11 Vmx, Incorporated Integrated application controlled call processing and messaging system
US5479487A (en) * 1993-02-11 1995-12-26 Intervoice Limited Partnership Calling center employing unified control system
US5533115A (en) * 1994-01-31 1996-07-02 Bell Communications Research, Inc. Network-based telephone system providing coordinated voice and data delivery
US5572581A (en) * 1993-11-12 1996-11-05 Intervoice Limited Partnership Method and apparatus for delivering calling services


Also Published As

Publication number Publication date
AU9502298A (en) 1999-04-12

Similar Documents

Publication Publication Date Title
US5703940A (en) Method and apparatus for delivering calling services
US5469500A (en) Method and apparatus for delivering calling services
JP3547142B2 (en) Method and system for determining and using multiple object states in an integrated computer-telephony system
US5633924A (en) Telecommunication network with integrated network-wide automatic call distribution
CA2283861C (en) System and method for managing feature interaction of telephone services
US6047046A (en) Method and apparatus for automating the management of a database
EP0690602B1 (en) System and method for predictive outdialing
JP3877523B2 (en) Method for mixing phone calls
US7366291B2 (en) Call transfer service using service control point and service node
EP0559979A2 (en) Subscriber call routing process system
US6665393B1 (en) Call routing control using call routing scripts
KR20000005796A (en) Dynamic call vectoring
KR20030060076A (en) Computer-telephony integration that uses features of an automatic call distribution system
KR20020082457A (en) Contact routing system and method
US5550911A (en) Adjunct call handling for accessing adjunct-based capabilities platform
CN1166228C Method and apparatus for providing calling service features within incompletely upgraded cellular telephone networks
US6847639B2 (en) Managing feature interaction among a plurality of independent feature servers in telecommunications servers
US5699412A (en) Systems and methods for statistical distribution of messages in a message recording system
US6041108A (en) Method and apparatus for intelligent network call handling in a telephone exchange
WO1999016229A1 (en) Dynamic distributions of applications and associated resource utilization
US5905775A (en) Statistical distribution of voice mail messages
US7076050B1 (en) Information correlation system
EP1107554B1 (en) Method and system for adaptively allocating call-related tasks
JP2003515994A (en) Realization of additional functions and service features of call distribution equipment
US7123711B1 (en) Call handling system and method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM HR HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
NENP Non-entry into the national phase

Ref country code: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase