GB2338144A - Predictive capacity management - Google Patents
Predictive capacity management
- Publication number
- GB2338144A (application GB9808349A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- traffic
- network
- data
- capacity
- predicted
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3081—ATM peripheral units, e.g. policing, insertion or extraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q11/00—Selecting arrangements for multiplex systems
- H04Q11/04—Selecting arrangements for multiplex systems for time-division multiplexing
- H04Q11/0428—Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
- H04Q11/0478—Provisions for broadband connections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q3/00—Selecting arrangements
- H04Q3/64—Distributing or queueing
- H04Q3/66—Traffic distributors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5614—User Network Interface
- H04L2012/5615—Network termination, e.g. NT1, NT2, PBX
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5619—Network Node Interface, e.g. tandem connections, transit switching
- H04L2012/562—Routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5619—Network Node Interface, e.g. tandem connections, transit switching
- H04L2012/5623—Network design, dimensioning, topology or optimisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5625—Operations, administration and maintenance [OAM]
- H04L2012/5626—Network management, e.g. Intelligent nets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5629—Admission control
- H04L2012/5631—Resource management and allocation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/54—Store-and-forward switching systems
- H04L12/56—Packet switching systems
- H04L12/5601—Transfer mode dependent, e.g. ATM
- H04L2012/5629—Admission control
- H04L2012/5631—Resource management and allocation
- H04L2012/5632—Bandwidth allocation
- H04L2012/5634—In-call negotiation
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Telephonic Communication Services (AREA)
Abstract
There is disclosed an apparatus and method for managing resources in a communications network in which a private corporate network links a plurality of customer premises networks at different corporate sites, the links being provided by a public backbone network in accordance with a traffic contract between customer equipment at the corporate sites and switches of the public backbone network. At the customer equipment, there is provided a management apparatus operating to predict required bitrate capacity over a plurality of virtual paths across the backbone network; a routing means operating to find an optimized route for end to end connections across said backbone network; and a resource management means operating to generate a capacity envelope limit data for each virtual path, and re-negotiate a traffic contract between the customer equipment and the public backbone network depending upon the generated capacity envelope limit data of each virtual path. Traffic contract negotiation is made according to predetermined negotiating criteria data, which takes into account a confidence level of actual bitrate capacity across a virtual path staying within the predicted capacity envelope limit data predicted for that virtual path.
Description
PREDICTIVE CAPACITY MANAGEMENT
Field of the Invention
The present invention relates to managing resources in a communications network, and particularly, although not exclusively, to a communications network wherein a traffic contract applying limitations on usage of network resources exists.
Background to the Invention

Large institutions, for example corporations, hospitals, and government institutions (hereinafter referred to as "corporate users") operating from a plurality of geographically separated premises at different site locations, require customized telecommunications services between their different sites, and within their premises. Such corporate users may purchase or lease customer premises networks (CPN) which can be installed at the corporate user's premises, eg in an office. Several CPNs may be installed in premises at geographically distant locations and connected together via an existing public communications network. Where the customer premises network supports asynchronous transfer mode (ATM), several customer premises networks may be connected together over narrow band voice connections across a backbone broadband public ATM network. Narrow band connections between several different corporate private networks are made by permanently assigning resources of the broadband public network to form a private corporate network connecting the different corporate premises networks. A person at a first site location in the corporate network making a connection between a telephone connected to one CPN and another telephone connected to another CPN at a second site location across the corporate network dials an internal extension number to make a call. The connection between the CPNs gives the impression that the two telephones are connected across a dedicated private network, whereas in fact the connection is made partly via the public network operated by a network operator. The private corporate network carries data for general communications purposes, such as connections between fax machines or computer terminals, so that data transfers can be made between corporate sites in addition to voice telephone conversations.
To set up the private corporate network, and in order to maintain a quality of service within the private corporate network, a corporate user operating the CPN's connected across the public communications network enters into a traffic contract with a network operator who runs the public communications network.
The traffic contract ensures that a specified level of bitrate capacity for the corporate user's connections is always available. The bitrate capacity specifies the number of megabits per second that can be transferred between the connected CPN's. Details of traffic contracts are stored as contract data in a memory in a central office switch which allocates an available bitrate capacity according to the contract data.
The traffic contract can specify whether an agreed maximum bandwidth limit is a 'hard' limit or a 'soft' limit. With a 'hard' limit, the central office switch does not allow a corporate user to exceed their agreed bitrate over the broadband public network. Calls across the public network are blocked by the central office switches once the agreed capacity is utilized. With a 'soft' limit, the agreed bitrate capacity limit is allowed to be exceeded so no disruption of the corporate network occurs, but with stringent financial penalties on the corporate user.
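The difference between the two limit types can be sketched in a few lines of code. This is an illustrative sketch only: the blocking behaviour, the per-Mb/s penalty rate and the function shape are assumptions, not terms of any actual traffic contract.

```python
# Illustrative sketch of 'hard' versus 'soft' traffic contract limits.
# The penalty rate and blocking behaviour are assumptions for illustration.

def admit_traffic(requested_mbps: float, in_use_mbps: float,
                  limit_mbps: float, hard_limit: bool,
                  penalty_per_mbps: float = 100.0):
    """Return (granted_mbps, penalty) for a capacity request."""
    available = limit_mbps - in_use_mbps
    if requested_mbps <= available:
        return requested_mbps, 0.0            # within contract: no penalty
    if hard_limit:
        # Hard limit: the central office switch blocks the excess traffic.
        return max(available, 0.0), 0.0
    # Soft limit: the excess is carried but financially penalized.
    excess = requested_mbps - max(available, 0.0)
    return requested_mbps, excess * penalty_per_mbps

print(admit_traffic(50, 180, 210, hard_limit=True))   # (30.0, 0.0) - blocked down
print(admit_traffic(50, 180, 210, hard_limit=False))  # (50.0, 2000.0) - penalized
```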
However, a problem associated with narrowband traffic is that it can be difficult for the corporate user to judge the bitrate capacity they require. Network operators charge for connections across the broadband backbone network proportional to the bitrate capacity stated in the contract, ie the higher the bitrate capacity a customer specifies, the higher the cost of the connections. If a corporate user chooses too low a bitrate capacity, then they risk losing quality of service across their private corporate network, which can lead to disrupted connections. However, choosing a bitrate capacity which is too high can lead to excessive communications costs and wastage of purchased capacity.
Summary of the Invention
An aim of the present invention is to provide an apparatus and method for enabling users of public backbone networks to negotiate traffic contracts having bitrate capacities close to the user's actual requirements.
In the preferred embodiments, traffic contract negotiation is performed substantially in real time over a period of hours or minutes. Traffic bitrate across a network is predicted in advance, and real time traffic negotiation is operated based on the predicted traffic bitrates. Negotiations of traffic contracts and prediction of required bitrate capacity are carried out at a customer premises equipment (CPE).
According to a first aspect of the present invention there is provided a network management apparatus for managing data traffic capacity resources available to a customer equipment switch, said apparatus comprising:
a traffic prediction means operating to predict data traffic capacity requirements of said customer equipment switch; and a resource management means operating to negotiate a traffic contract, depending on a result of said predicted data traffic capacity requirements.
The network management apparatus is configured for managing data traffic capacity resources between the customer equipment switch and a backbone communications network. The traffic prediction means may operate to predict data traffic capacity requirements of the customer equipment across the backbone network. The resource management means may operate to negotiate a traffic contract between the customer equipment and a switch of the backbone communications network.
Preferably the network management apparatus comprises a routing means operating to determine a corresponding route for each of a plurality of source to destination connections. Routing connections based on predicted traffic demands may enable improved network utilization and consequently either reduced communications costs for network users and/or improved profit for network operators.
Suitably said routing means inputs predicted traffic data produced by said prediction means.
Said routing means may comprise a processor operating in accordance with a route finding algorithm to assign a respective route to each of a plurality of said connections on a least cost basis. The routing means in one implementation may comprise a processor operating in accordance with a genetic algorithm.
Said routing means may comprise a processor and memory operating in accordance with a route finding algorithm to assign a respective route to each of a plurality of connections on a shortest path basis.
Preferably said resource management means produces a capacity envelope describing predicted bitrate requirements of said connections over a period of time.
Said resource management means may operate a negotiation procedure at preset intervals.
Said traffic prediction means may comprise a processor and memory operating a neural network algorithm.
The traffic prediction means preferably uses historical network traffic data, including date and time, to generate a traffic prediction data predicting bitrate capacity of one or more end-to-end connections.
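As a minimal sketch of how date and time might be presented to such a prediction means, the following encodes one historical traffic measurement as a feature vector. The cyclic sin/cos encoding is a common modelling choice assumed here for illustration, not a feature recited in the disclosure.

```python
import math
from datetime import datetime

def encode_sample(timestamp: datetime, bitrate_mbps: float):
    """Encode one historical traffic measurement as a feature vector.

    Cyclic (sin/cos) encoding lets a predictor treat 23:00 and 01:00 as
    close together; the feature choice is an illustrative assumption.
    """
    hour = timestamp.hour + timestamp.minute / 60.0
    day = timestamp.weekday()  # 0 = Monday ... 6 = Sunday
    return [
        math.sin(2 * math.pi * hour / 24), math.cos(2 * math.pi * hour / 24),
        math.sin(2 * math.pi * day / 7),   math.cos(2 * math.pi * day / 7),
        bitrate_mbps,
    ]

print(encode_sample(datetime(1998, 4, 20, 11, 0), 185.0))
```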
In an asynchronous transfer mode implementation, the connections may be implemented by means of virtual paths.
The invention includes a customer equipment switch incorporating a network management apparatus according to the first aspect.
According to a second aspect of the present invention there is provided a management apparatus for managing transmission resources in a private communications network, said apparatus comprising:
a traffic prediction means operating to predict end to end traffic across said private network; a route finding means operating to determine routes between a plurality of source and destination end points of said private communications network; and a resource management means operating to allocate transmission resources for carrying said end to end traffic.
According to a third aspect of the present invention there is provided a network management apparatus for managing transmission resources in a private communications network utilizing resources provided in accordance with a traffic contract by a public backbone network, said management apparatus comprising:
a traffic prediction means operating to predict end to end traffic requirements across said private network; a route finding means operating to determine a plurality of routes between a plurality of source and destination end points of said end to end traffic of said public backbone network; means for generating a predicted capacity envelope data describing a predicted upper limit of required data traffic of said plurality of connections; and negotiation means operating to produce negotiation signals for negotiating bitrate capacity limits of said connections with said public backbone network.
According to a fourth aspect of the present invention there is provided a method of managing data traffic capacity resources in a communications network wherein a traffic contract imposing limitations on said data traffic capacity resources used by a plurality of source-destination connections exists, said method comprising the steps of:
obtaining a prediction of resource requirements for future connections in said network; and negotiating a future traffic contract in response to said predicted resource requirements.
In a digital communications network, generally said traffic capacity requirement comprises a bitrate capacity requirement.
Suitably said step of obtaining a prediction of resource requirements and said step of negotiation occurs at preset intervals.
Suitably said step of obtaining a prediction of resource requirements of future connections comprises:
inputting network configuration data; inputting network traffic data; and operating a traffic prediction algorithm on said network traffic data and said network configuration data to produce a predicted traffic data.
Preferably said network traffic data comprises historical network traffic data.
Said prediction algorithm may operate to produce a said predicted traffic data having an associated confidence level data.
Preferably said step of obtaining a prediction of resource requirements comprises:
generating a predicted traffic data of bitrates over a plurality of connections; and generating a traffic capacity envelope data describing a predicted upper limit of required data traffic capacity of said plurality of connections, eg on a virtual path or virtual trunk route (VTR).
Said method may include the step of generating allocations of routes to predicted traffic by means of a route finder means.
Said step of obtaining a prediction of resource requirements may comprise the step of producing a traffic capacity envelope data describing predicted bitrate requirements of said virtual path connections over a period of time.
Said step of negotiating a future traffic contract comprises the steps of:
comparing a current bitrate requirement of a connection to a predicted bitrate requirement; and if said current connection's bitrate requirement exceeds said predicted bitrate requirement, renegotiating an increase in network resources available in said traffic contract.
Alternatively, the method may comprise the step of:
comparing a current connection's resource requirements with said predicted resource requirement; and if said current connection's resource requirement exceeds said predicted resource requirement, selecting a new route for said current connection.
According to a fifth aspect of the present invention there is provided, in a communications network comprising a private communications network supported by a public backbone network, a method of managing transmission resources of said private communications network, said method comprising the steps of:
collecting measured traffic data representing end to end communications traffic carried over said private communications network; generating predicted traffic data representing predicted future end to end communications traffic carried over said private communications network; and negotiating a traffic contract for providing transmission resource capable of supporting said predicted future end to end communications traffic.
Said method may further comprise the step of generating predicted traffic data representing predicted communications traffic over the public network.
Said step of generating predicted traffic data may comprise:
inputting traffic data into said prediction engine; training a neural network algorithm on said traffic data; and operating said neural network algorithm to produce a predicted traffic data.
The traffic data input into said neural network algorithm preferably comprises traffic demand data. Such traffic demand data typically may comprise historic amounts of originating traffic.
Said step of generating predicted traffic data may comprise making predictions at various points in time, generating a traffic envelope data representing an estimate of transmission resources required to support a predicted traffic demand.
Said method may comprise the steps of, for a virtual path, by means of a route finder component:
comparing a current traffic demand data representing current end to end communications traffic carried across said private network with said traffic envelope data; and if said current traffic demand data exceeds said current traffic envelope data, renegotiating an increase in transmission resource.
In an asynchronous transfer mode (ATM) implementation, said step of renegotiating an increase in transmission resource may comprise negotiating a new virtual path with a broadband ATM network.
Said method may comprise the steps of:
comparing a current traffic (demand) data representing a current end to end communications traffic carried across said private network with said traffic envelope data; and if said current traffic (demand) data exceeds said current traffic envelope data, selecting a new route.
Said method may comprise the step of:
determining a route between a source network element and a destination network element of each of a plurality of virtual paths.
According to a sixth aspect of the present invention there is provided in a communications network comprising a private network having a plurality of geographically separated customer premises equipment linked by a public backbone network, wherein a traffic contract exists between said private network and said backbone network, said traffic contract specifying limitations on available bitrate capacity linking said customer premises equipment, a method of managing said bitrate capacity over said backbone network comprising the steps of:
generating a traffic capacity prediction data describing a future bitrate capacity provision required by a said customer premises equipment for carrying a plurality of end to end connections; generating routing data describing allocations of traffic of said end to end connections over said backbone communications network; and determining a capacity envelope data from said traffic capacity prediction data for each route described in said routing data.
Preferably said step of generating routing data comprises operating a routing algorithm in accordance with an optimization criteria data for obtaining said routing data describing an optimized allocation of traffic over said backbone communications network.
Said step of generating a traffic capacity prediction data may comprise generating a said traffic capacity prediction data for each of a plurality of virtual paths across said backbone network.
Said method may comprise the step of determining a capacity envelope data for each of a plurality of virtual paths across said backbone network.
Said step of determining a capacity envelope data may comprise determining a confidence level data describing a confidence of said traffic capacity envelope limit being exceeded.
The method may further comprise the step of negotiating a capacity limit for each of a plurality of virtual paths across said backbone network according to a predetermined negotiation criteria data.
According to a seventh aspect of the present invention there is provided in a communications network comprising a private network having a plurality of geographically separated customer premises equipment linked by a public backbone network, wherein a traffic contract exists between said private network and said backbone network, said traffic contract specifying limitations on available bitrate capacity linking said customer premises equipment, a method of managing said bitrate capacity over said backbone network comprising the steps of:
generating a traffic capacity prediction data describing a future bitrate capacity provision required by a said customer premises equipment for carrying a plurality of end to end connections; generating routing data describing allocations of traffic of said end to end connections over said backbone communications network; and negotiating a capacity availability over said backbone communications network according to said traffic capacity prediction data and said routing data.
Brief Description of the Drawings
For a better understanding of the invention and to show how the same may be carried into effect, there will now be described by way of example only, specific embodiments, methods and processes according to the present invention with reference to the accompanying drawings in which:
Fig. 1 illustrates schematically a corporate private communications network comprising a plurality of customer premises networks each having an ATM switching device connected to a local exchange of a backbone broadband public ATM network;
Fig. 2 illustrates schematically a diagram of a virtual path between two customer premises networks connected over the backbone public network according to a traffic contract;
Fig. 3 illustrates schematically a graph representing bitrate capacity used over a virtual path between first and second customer premises networks as a function of time over a seven day period, including a bitrate capacity upper limit imposed by the traffic contract;
Fig. 4 illustrates schematically components of a network controller of a customer premises network, including a capacity manager module according to a specific implementation of the present invention;
Fig. 5 illustrates schematically components of the capacity manager module identified in Fig. 4, including a data processing component, a network traffic prediction component, a route finder component and a resource manager component;
Fig. 6 illustrates schematically data processing steps carried out by the capacity manager identified in Fig. 4, including processing a network model input data and network traffic profile input data;
Fig. 7 illustrates schematically an example representation of the network model data identified in Fig. 6 showing a set of virtual trunk routes between host ATM customer premises switches;
Fig. 8 illustrates schematically an example of a network traffic profile data identified in Fig. 6;
Fig. 9 illustrates schematically data transfer and information flow between components of the capacity manager identified in Fig. 4, and other network elements present in the network controller, one or more customer premises networks and a public backbone communications network;
Fig. 10 illustrates schematically steps executed by the network traffic prediction component identified in Fig. 5;
Fig. 11 herein illustrates schematically a screen display representing a communications network incorporating a private corporate network;
Fig. 12 illustrates schematically a screen display resulting from the network traffic prediction component identified in Fig. 5;
Fig. 13 illustrates schematically steps executed by the route finder component identified in Fig. 5;
Fig. 14 illustrates schematically a node and link representation of network data describing network elements of a backbone broadband public network, which is used by the route finder component of the capacity manager;
Fig. 15 illustrates schematically components of the resource management component identified in Fig. 5, including a capacity envelope generator and a traffic management component;
Fig. 16 illustrates schematically steps executed by the capacity envelope generator identified in Fig. 15;
Fig. 17 illustrates schematically a screen display resulting from the steps executed in Fig. 16;
Fig. 18 illustrates schematically steps executed by the traffic management component identified in Fig. 15; and
Fig. 19 illustrates schematically a graph representing data describing a predicted bitrate requirement and a capacity envelope representing bitrate capacity limits for a virtual path between first and second customer premises equipment over the backbone network.
Detailed Description of the Best Mode for Carrying Out the Invention

There will now be described by way of example the best mode contemplated by the inventors for carrying out the invention. In the following description numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent however, to one skilled in the art, that the present invention may be practiced without using these specific details. In other instances, well known methods and structures have not been described in detail so as not to unnecessarily obscure the present invention. Whilst a specific implementation of the present invention is described herein relating to an ATM network, the invention described herein is not to be taken as restricted to an ATM implementation, but is limited only by the features recited in the claims herein.
Fig. 1 of the accompanying drawings illustrates an example of a private virtual corporate network comprising a plurality of customer premises networks (CPN) 101, 106, 115, the customer premises networks linked over a broadband public ATM backbone network 105. A first customer premises network 101 at a first corporate site comprises a first customer premises equipment (CPE), eg an ATM switching device 102, such as a Vector or Passport switch manufactured by Northern Telecom Limited. Connected to the first ATM switch 102 are a plurality of terminal equipment items 103. The terminal equipment items may comprise, for example, fax machines, computer terminals or telephone handsets.
The first customer premises network 101 accesses the public ATM network 105 via a first local exchange switch 104. A second CPN 106 at a second corporate site comprises a second ATM switch 108 and is connected to a second plurality of terminal equipment items 109. Second ATM switch 108 is connected to the broadband public ATM backbone network 105 via a second local exchange 107. Similarly for a third corporate site there is provided a third customer premises network 115, having a third customer premises equipment 116, eg an ATM switch.
Between the first plurality of terminal equipment and the second plurality of terminal equipment exists a plurality of virtual channels over which data and/or voice transmissions are carried. A plurality of virtual channels between first and second customer premises equipment 102, 108 are grouped together into one or more virtual paths 113 between first and second local exchanges 104, 107. Similarly, one or more virtual paths exist between the first and third corporate sites and the second and third corporate sites. In general a virtual path exists between each corporate site customer premises network and each other corporate site customer premises network. In order for terminal equipment items connected to first ATM switch 102 to connect with terminal equipment items connected to second ATM switch 108, a traffic contract 111 exists between the customer premises networks and the public backbone network. The traffic contract exists as a programmed configuration of the local exchange switches 104, 107 in the form of stored control and data signals comprising a contract data.
The traffic contract 111 imposes a preset limit on end to end bitrate capacity between first customer premises equipment 102 and second customer premises equipment 108 by placing a restriction on the bitrate of the virtual paths 113 across the public network. The traffic contract is negotiated beforehand between the corporate user and a network operator who operates the public ATM network.
Fig. 2 of the accompanying drawings illustrates a schematic diagram of a virtual path between first and second customer premises equipment 102, 108 at respective first and second corporate sites. The traffic contract established provides, for example, a bitrate capacity of 210 megabits per second (Mb/s) for a virtual path 113 between the two ATM switches 102 and 108. The actual bitrate available is specified in the contract. According to the traffic contract, this bitrate is guaranteed for a permanent virtual path. Thus, the network operator must ensure that enough capacity is available over the ATM backbone network to satisfy a permanent virtual path having this bitrate at all times.
Fig. 3 of the accompanying drawings illustrates a graph with vertical axis representing bitrate capacity in Megabits per second (Mb/s) and horizontal axis representing a period of 7 days D1 - D7 for a single virtual path. Horizontal line 301 illustrates a contract traffic bitrate capacity limit of 210 Mb/s. Graph line 302 illustrates an example of actual amount of bitrate capacity required by connections comprising the virtual path between two corporate premises networks 101, 106 over the 7 day period. In the example of Fig. 3, communications between first and second corporate sites experience a morning "busy hour", where there is peak demand for capacity between the two corporate sites at around 11.00 am, shown as capacity peak 306, followed by an afternoon "busy hour" peak 307. A general level of capacity utilization falls off on the sixth and seventh days, this being a weekend. An underlying capacity utilization of the virtual trunk group of the order of 50 - 60 Mb/s is present, due to data communications across the network, eg transfers of computer file data between corporate sites.
A corporate customer selects a value of bitrate capacity limit 301 which allows uninterrupted communications between corporate sites throughout the week. However, traffic demand over a virtual path is not constant, and to ensure adequate communications over a weekly period, a corporate user must ensure a maximum bitrate capacity 301 which will not be exceeded. From the graph in Fig. 3 it is apparent that at two periods denoted by arrows 302 and 303 the actual bitrate capacity usage across the virtual path exceeds the traffic contract capacity bitrate limit 301 of 210 Mb/s. At these two times 302 and 303, depending on the terms of the contract, the customer would either be penalized financially for exceeding the traffic contract bitrate capacity or the customer would experience traffic loss at the two time periods 302 and 303 as the local exchange switch would overwrite ATM cells unable to be carried over the virtual path. As shown in Fig. 3, the actual bitrate capacity usage by the customer is much lower than the traffic contract bitrate capacity limit 301 of 210 Mb/s for a considerable amount of time, eg at times indicated by arrows 304 and 305. This difference between actual bitrate usage and the bitrate capacity paid for by the customer means that the customer is paying for an unnecessarily high amount of bitrate capacity for most of the time, but is still being penalized for exceeding the traffic contract limit 301 some of the time.
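The mismatch described above can be quantified with a short sketch; the hourly sampling interval and the sample data below are invented for illustration.

```python
# Sketch: measure how a fixed contract limit fits a week of usage samples.
# The sample data and hourly sampling interval are illustrative assumptions.

def contract_fit(usage_mbps: list[float], limit_mbps: float):
    """Return hours over the limit and mean unused (paid-for) capacity."""
    hours_over = sum(1 for u in usage_mbps if u > limit_mbps)
    unused = [limit_mbps - u for u in usage_mbps if u <= limit_mbps]
    mean_unused = sum(unused) / len(unused) if unused else 0.0
    return hours_over, mean_unused

week = [60, 80, 150, 215, 190, 220, 170, 90, 55, 50]  # hourly samples (Mb/s)
over, wasted = contract_fit(week, limit_mbps=210)
print(f"{over} h over the 210 Mb/s limit; {wasted:.0f} Mb/s unused on average")
```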
Traffic demand is influenced by a number of changing and unpredictable factors, which cause short and long term variations in traffic demand usage patterns, for example a natural disaster may cause the loss of communications on a particular network due to a fault, whilst causing a deluge of traffic on another network over a short term (hours to days) timescale. Over a longer term, evolving changes in telecommunications services and introduction of new technology, eg video phones, may alter usage patterns over a virtual path. Characteristics of narrow band traffic include the following:
- the on-demand nature of narrow band traffic can cause potentially major and damaging load fluctuations
- quality of service and grade of service requirements for narrow band traffic are strict
- some variations in traffic follow trends, eg busy hour trends, daily trends, working week trends
- aggregate loading of virtual paths may be expected to vary significantly, eg for a narrow band network, due to demand from more than one local exchange onto a virtual path
- network topology and routing algorithms may influence how traffic from a number of sources is aggregated onto fewer resources

Specific implementations according to the present invention aim to provide a process and apparatus for accurately predicting required bitrate capacity between sources and destinations over a backbone network substantially in real time, over a look-ahead period of the order of hours or minutes. The specific implementation described herein aims to negotiate bitrate capacity provision with a network operator by raising or lowering a bitrate capacity limit according to a traffic contract, substantially in real time over a look-ahead period of the order of hours and minutes, with the object that a user, eg a corporate user, may have increased control over an amount of bitrate capacity purchased from a network operator, and may be able to raise or lower a contracted bit rate provision to suit the corporate user's own predicted immediate requirement for communications capability across a public backbone network operated by a network operator.
From the corporate user's point of view, an advantage of the specific implementation presented herein may be that unnecessary capacity which would otherwise remain un-utilized, is not purchased from the network operator, and bitrate provision by the network operator can be negotiated in advance to account for a corporate user's peak utilization of public network resources, thereby avoiding traffic contract penalties incurred in the prior art case, where a fixed limit bitrate provision is exceeded. From a network operator's point of view, bitrate capacity which in the prior art case of a fixed limit traffic contract would remain allocated to a corporate user but would be un-utilized, in the specific implementation herein may be able to be released back into a general capacity resource of the network, and may be resold to other customers. By operating substantially real time traffic contract negotiation with a plurality of customers, a general level of utilization of the whole traffic capacity resources of the network may be increased, resulting in a combination of either lower costs for customers, and/or higher profits for network operators.
Specific implementations of the present invention may achieve the aims by:
- predicting future end to end traffic demands based on current and historical narrow band traffic demand
- ascertaining an optimal virtual path traffic assignment based on predetermined routing strategies
- efficient management and realistic negotiation of usage of an underlying ATM broad band network resource based on the predicted virtual path traffic usage profiles

Fig. 4 of the accompanying drawings illustrates a schematic diagram of an example of a network controller 110 of the narrowband customer premises network, the network controller configured to predict and manage traffic capacity required by a customer premises network 201 from a broadband public ATM backbone network 105, via a customer premises equipment. Network controller 110 comprises a general purpose computer operating on a UNIX platform, such as a Hewlett Packard 9000 series workstation, comprising a memory 404; a processor 405; an operating system 403; one or more communications ports 406 through which data is transferred to and from customer premises equipment 102, 108, 116; a management information base (MIB) 408 holding data describing physical resources of a corporate network, eg route, node equipment, link bitrate; a capacity manager application 402 operating to predict bitrate and negotiate traffic contracts based upon predicted bitrate requirements; and a graphical user interface 401 which can be used by personnel to input and output data and configure the customer premises network 101.
Fig. 5 of the accompanying drawings illustrates a block diagram of components of the capacity manager 402. The capacity manager comprises a pre-processor component 501, a network traffic prediction component 502; a route finder component 503; and a resource manager component 504. The components are each constructed as separate modules which communicate with each other by passage of data signals to each other. Practically, each component is implemented as one or more processors, and an area of memory which stores programmed control signals for operating the processor to perform functionality as described hereinafter. The components interface with each other using the conventional interfaces provided in the HP workstation in the specific implementation herein. The components are capable of interfacing with the external local exchange switches 104, 107, 117 by conventional network management protocols, eg SNMP/CMIP or via a third party network management system such as HP OpenView, via comms ports 406, and the customer equipment switches 102, 108, 116.
Pre-processor component 501 receives as input from the management information base 408 raw data describing operation of the customer premises equipment and its interactions with the local exchange switch of the backbone network. Such data may comprise:
- quality of service data describing allocated calls, cell loss of buffers of the customer equipment which support various connections across the backbone network
- call demand and utilization data, describing utilization of connections across the backbone network
- data describing sources and destinations of connections between customer premises equipment across the network
- configuration data describing interconnections of customer premises equipment, customer premises networks and connections to local exchanges of a backbone network.
From the above data, the pre-processor generates current and historical traffic profile data describing current and historical utilization and call demand of connections across the private network.
Also fed into the pre-processor is a network model data describing a topology of a private network and its connections to the local exchanges of the backbone network.
The network traffic profile data comprises real time and historical data describing bitrate capacity usage by connections between a plurality of corporate sites over a period of time. Pre-processor component 501 processes the input call demand data to generate real time and historical network traffic profile data corresponding to network elements and topology described in the network model data. The historical traffic profile data describes bitrate capacity used by narrowband end to end connections across the broadband network over a time period.
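A minimal sketch of this kind of aggregation follows, assuming raw call records carry a source, destination, hour-of-week and bitrate; the record fields are invented for illustration.

```python
from collections import defaultdict

# Sketch of the kind of aggregation a pre-processor might perform:
# raw per-call records are binned into an hourly bitrate profile per
# source-destination pair. Record fields are illustrative assumptions.

def build_traffic_profile(call_records):
    """call_records: iterable of (src, dst, hour_of_week, bitrate_mbps)."""
    profile = defaultdict(lambda: [0.0] * 168)  # 168 hours in a week
    for src, dst, hour, mbps in call_records:
        profile[(src, dst)][hour] += mbps       # aggregate concurrent calls
    return dict(profile)

records = [("siteA", "siteB", 11, 2.0), ("siteA", "siteB", 11, 64.0)]
print(build_traffic_profile(records)[("siteA", "siteB")][11])  # 66.0
```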
Sequences of such current and historical traffic profile data are used as inputs to network traffic prediction component 502. The network traffic prediction component 502 generates a predicted traffic profile data representing a prediction of how much bitrate capacity will be required by connections between sources and destinations at predetermined future times based on an assessment of the historical network traffic profile data of the connections. In an ATM network a collection of these connections are known as Virtual Paths (VP). However, the network model data represents a plurality of connections between customer premises equipment as virtual trunk groups (VTG) connecting the customer premises equipments. The virtual trunk groups hide the complexity of the virtual paths over the broadband network. An output generated by traffic prediction component 502 comprises data describing a list of virtual trunk groups with their corresponding respective predicted traffic profiles over a period of time for each end to end virtual trunk group. The predicted traffic profile data for a virtual trunk group is independent of any route across the network which is to be taken by the virtual trunk group. In the best mode herein, the traffic prediction component 502 operates a neural network algorithm to predict future traffic profile data.
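A minimal stand-in for such a predictor is sketched below. The disclosure names a neural network algorithm; for brevity this sketch fits a single linear layer by gradient descent to a cyclic hour-of-day encoding, and the synthetic one-week history, model size and learning rate are all assumptions.

```python
import numpy as np

# Minimal stand-in for the neural-network predictor described above: a
# single linear layer fitted by gradient descent to (hour -> bitrate)
# samples. The one-week synthetic history, feature choice and learning
# rate are illustrative assumptions.

hours = np.arange(168, dtype=float)                        # one week, hourly
history = 60 + 80 * np.exp(-((hours % 24 - 11) ** 2) / 8)  # synthetic busy hour

# Cyclic encoding of hour-of-day plus a bias term.
X = np.column_stack([np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24),
                     np.ones_like(hours)])
w = np.zeros(3)
for _ in range(5000):                                      # gradient descent
    grad = X.T @ (X @ w - history) / len(history)
    w -= 0.1 * grad

print(f"predicted bitrate at 11:00: {X[11] @ w:.0f} Mb/s")
```

A real implementation would train on many weeks of measured profiles per virtual trunk group rather than one synthetic week.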
The predicted traffic profile data output of traffic predictor component 502 is input to route finder component 503. Route finder component 503 assigns a route for each end to end connection. The allocated routes may comprise links between network links, network nodes and local exchanges in the broadband public network. Each route also has an associated bitrate capacity requirement for the connection which uses the route. The route finder component finds optimal or near optimal permanent virtual path routes for service requests in the ATM broadband network, taking into account current and predicted traffic profiles on the network, user defined parameters, eg quality of service requirements, and predetermined routing constraints. The route finder component allocates traffic according to a routing algorithm and user specified optimization criteria, eg balanced loading on a specified topology/route configuration using the predicted traffic demand data. The route finder component 503 can deal with a plurality of connection requests simultaneously, and optimizes a choice of routes chosen for each connection request based on a current routing criteria and taking into account the other connection requests. In contrast, known prior art routing modules route one connection at a time rather than optimizing a load across a network as a whole. In the best mode herein, the route finder 503 is implemented as a genetic algorithm configured to optimally find a set of routes across a network. An example of such a genetic algorithm based route-finder is disclosed in GB 97 27163.9 filed 24 December 1997, a copy of which is filed with this disclosure.
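In the spirit of the genetic-algorithm route finder described above, the toy sketch below evolves an assignment of two connection requests to candidate routes, with a fitness that rewards balanced link loading. The topology, demands and GA parameters are invented for illustration; the cited GB 97 27163.9 route finder is considerably more elaborate.

```python
import random

# Toy genetic-algorithm route selection: each gene picks one candidate
# route for one connection request; fitness favours balanced loading.
# Topology, demands and GA parameters are illustrative assumptions.

random.seed(1)
LINK_CAPACITY = {"A-B": 155.0, "A-C": 155.0, "C-B": 155.0}
DEMANDS = [("A", "B", 90.0), ("A", "B", 80.0)]            # (src, dst, Mb/s)
CANDIDATES = [["A-B"], ["A-C", "C-B"]]                    # two routes A->B

def fitness(genome):
    load = {link: 0.0 for link in LINK_CAPACITY}
    for (src, dst, mbps), gene in zip(DEMANDS, genome):
        for link in CANDIDATES[gene]:
            load[link] += mbps
    # Balanced loading: penalize the most heavily utilized link.
    return -max(load[l] / LINK_CAPACITY[l] for l in load)

population = [[random.randrange(2) for _ in DEMANDS] for _ in range(20)]
for _ in range(30):                                       # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # crossover
        if random.random() < 0.2:                             # mutation
            i = random.randrange(len(child))
            child[i] = 1 - child[i]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("best assignment:", best, "max utilization:", -fitness(best))
```

Here the GA settles on splitting the two demands across the direct and two-hop routes, which keeps the worst-case link utilization near 58% instead of 110%.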
Routed connections generated by route finder component 503 are input to resource manager component 504. Resource manager component 504 generates a capacity envelope data for each virtual trunk group which it receives as an input. The resource manager 504 uses these predicted capacity envelope data to negotiate a traffic contract with network manager 112 of the broadband public network 105 to reserve capacity on one or more virtual paths. Resource manager 504 can also feed back data describing connections across the backbone network to pre-processor component 501 at any time in order that the current and historical traffic profile data are updated, giving a more accurate prediction of bitrate capacity requirements. The traffic contract negotiation may also lead to some connections requiring re-routing, in which case data describing the re-routing is fed back to the route finder component 503 by the resource manager component 504. The routing module operates to re-route connections after traffic contract re-negotiation based on the network routing algorithm. The resource manager component 504 may also operate to re-sell unused capacity available within the traffic contract to other network users, or back to a network operator of the broadband backbone network.
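One plausible way to derive a capacity envelope from a predicted profile is to add headroom based on historical prediction error, so that actual traffic stays within the envelope with a chosen confidence. The two-sigma rule and the numbers below are assumptions, not values from the disclosure.

```python
import statistics

# Sketch of capacity-envelope generation: predicted per-interval bitrates
# are inflated by a headroom derived from historical prediction error.
# The 2-sigma rule and the sample numbers are illustrative assumptions.

def capacity_envelope(predicted_mbps, past_errors_mbps, sigmas=2.0):
    """Upper-limit envelope: prediction plus a confidence margin."""
    margin = sigmas * statistics.pstdev(past_errors_mbps)
    return [p + margin for p in predicted_mbps]

predicted = [120.0, 180.0, 205.0, 150.0]      # next four intervals (Mb/s)
errors = [-8.0, 5.0, 12.0, -3.0, 7.0]         # past prediction errors (Mb/s)
print([round(v) for v in capacity_envelope(predicted, errors)])
```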
Fig. 6 of the accompanying drawings illustrates schematically in a general overview, data processing operations carried out at the network controller 110 at a customer site. Traffic predictor 502 receives as an input the network model data 602 and network traffic profile data 603. The network traffic profile data comprises historical bitrate data over a repeating period, eg a day or week for each of a plurality of voice or data connections between network terminal equipment at a first site and network terminal equipment at a second site.
Historical data over a plurality of repeating time periods is examined for each connection, and the traffic predictor component 502 uses this input data to generate predicted required bitrate usage patterns for aggregates of connections over virtual trunk groups supported by the customer premises equipment over a future look-ahead period (ie generates predicted traffic profile data). The route finder allocates predicted traffic to virtual paths. The resource manager 504 generates a predicted capacity envelope data for virtual paths between a plurality of customer premises equipment over specified periods of time. The capacity envelopes generated are used as the basis for negotiating a traffic contract with a network manager of the broadband public backbone network. Capacity manager 402 may also compare actual bitrate capacity usage of a virtual path with predicted capacity bitrate requirements which it has predicted for the virtual paths, and update network traffic profile data 603 and the traffic contract if necessary. The traffic contract data representing the details of the traffic contract between the corporate user and the broadband network operator is stored in the MIB 408 and/or on the local exchange switch and is accessible for read and write operations by the capacity manager 402.
Fig. 7 of the accompanying drawings illustrates schematically a two dimensional representation of a network model data 602 stored in MIB 408, describing physical resources and connectivity of a narrowband private virtual corporate network using broadband ATM public backbone network resources. Fig. 7 schematically represents a configuration of a corporate network in data. The model comprises a number of node and link data, describing real physical resources of the private network eg nodes representing local exchanges 701, customer premises equipment, eg ATM switches 702 or routers 703, the nodes connected by a plurality of links 704, representing physical or virtual links between the nodes. Each of a plurality of customer premises equipment, eg ATM customer premises switches such as the Vector or Passport switches of Northern Telecom Limited are represented in the network model data stored in the management information base 408, together with data describing a plurality of adaptive grooming routers 703 which physically connect the customer premises equipment 702 to a plurality of local exchange equipment 701 of the public ATM network. The adaptive grooming routers 703 provide a narrowband to broadband switching capability and may be replaced by any other equipment performing the same function. Functionality and features offered by the adaptive grooming routers 703 are represented in the data model. Such functionality may be represented by data describing quality of service, connectivity, cell loss, cell discard rate, maximum route capacities. Data describing bit rate capacity of physical links between the customer premises equipment 702 and the adaptive grooming routers 703 and between the adaptive grooming routers 703 and the local exchanges 701 is included in the network model data. Data describing real features are shown outside the dotted line in Fig. 7. The network model data also includes data describing virtual features of the private network, such as a plurality of virtual trunk group cross connects 705, 706 and a plurality of virtual paths (or virtual trunk routes), represented by lines 707 in Fig. 7, the virtual paths or virtual trunk groups connecting customer premises equipments 702 at different corporate sites across the corporate network. Data describing characteristics of each of the virtual paths and each of the virtual trunk group cross connects is contained in the MIB 408. A virtual trunk path or virtual trunk route (VTR) is used as a virtual equivalent to an ATM transmission path (TP), and a virtual trunk route cross connect (VTG-X) is used as a virtual equivalent to an ATM cross connect function. Traffic physically carried between source and destination AGRs 703 across the broadband network is represented in the model by the virtual trunks 707 between customer premises equipments 702. A virtual trunk route (VTR) 707 between customer premises equipments in the narrow band model is equivalent to a virtual path across the broadband ATM network. The virtual trunk routes and virtual trunk route cross connects are implemented by the customer premises equipment 702, the adaptive grooming routers 703, and local exchanges 701 comprising the public broadband ATM network, shown outside dotted line 708 in Fig. 7. Virtual features shown within the dotted line 708 in Fig. 7 represent functionality provided by the physical resources.
Fig. 8 of the accompanying drawings illustrates a graphical representation of an example of a predicted network traffic profile data 603 as output from traffic prediction component 502. It will be understood that the representation of Fig. 8 is a graphical representation of a predicted traffic profile data of one virtual path between source and destination adaptive grooming routers 703 presented herein for ease of explanation. In the example shown, the predicted network traffic profile data is stored as a data table of bitrate values corresponding to time values over a seven day period, in the MIB 408. A vertical axis represents bitrate capacity in megabits per second and a horizontal axis represents a period of seven days divided into hourly or minute intervals. Graph line 801 represents predicted fluctuations in bitrate capacity usage between two corporate sites over one virtual path over a period of time. As can be seen from the graph in Fig. 8, the bitrate capacity usage varies considerably over the period of time, with some periods requiring a high bitrate capacity usage. The virtual trunk groups between customer premises equipment may comprise a concatenated plurality of such virtual paths, each having a separate predicted network traffic profile data similarly as illustrated schematically in Fig. 8 herein.
Referring to Fig. 9 herein, there is illustrated a logical information flow overview of components of capacity manager 402 relative to other network entities, the other network entities including a network system manager 900 resident on network controller 110, and the ATM broadband backbone network itself 105.
A predicted traffic demand profile 901 for each virtual trunk group is generated by traffic predictor component 502, from input current and historical network traffic profile data comprising network data 902, which is received from the management and information base of system manager 900 in real time. Resource manager component 504 also receives data concerning routing rules and policies from the system manager 900. Resource manager component 504 may also receive data and instructions 903 from a graphical user interface operated by human operator. Such user interaction data may allow for special case scenarios, for example special events on the network to be entered into resource manager component 504, and to manually override any routing data/policies or other network data 501. A human user can interact with the resource manager component 504 by inputting user interaction data 903 via graphical user interface 401. Predicted traffic demand profiles 901 are input into route finder component 503 in step 904 and into resource manager 504 in step 905.
Route finder component 503 views route selection as an optimization problem, where each set of routes has an associated value calculated by an evaluation function. The evaluation function is parameterized. The route finder component 503 may be controlled for selection or optimization of its performance, using information selected from the following set (a minimal sketch of such an evaluation function follows the list):
- a number of shortest paths to consider when routing traffic
- constraints on routing objectives
- a weighting co-efficient associated with route costs
- link utilization
- deviation from mean link utilization
- link utilization threshold violations
- route expected cell losses
- route delays
- delay threshold violations
- expected cell loss threshold violations

Using a balanced loading optimization criteria, the predicted traffic demand is allocated to virtual paths 707 which are routed between a source host ATM equipment and a destination host ATM equipment. A source - destination route is equivalent to an ATM virtual circuit over the broadband network. A predicted capacity envelope data is calculated for each virtual trunk route (VTR) on which a number of different virtual trunk groups may be aggregated. A virtual trunk route is equivalent to a virtual path on an ATM broadband backbone network.
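A parameterized evaluation function over such a set of routes might look like the following sketch; the weights, thresholds and route-record fields are illustrative assumptions.

```python
# Sketch of the parameterized evaluation function suggested by the list
# above: a weighted sum over utilization balance, threshold violations,
# expected cell loss and delay. Weights and thresholds are assumptions.

def evaluate_route_set(routes, weights=(1.0, 10.0, 5.0, 1.0),
                       util_threshold=0.8, delay_threshold_ms=50.0):
    """Lower is better. Each route: dict of utilizations, cell_loss, delay_ms."""
    w_util, w_viol, w_loss, w_delay = weights
    utils = [u for r in routes for u in r["utilizations"]]
    mean_util = sum(utils) / len(utils)
    balance = sum((u - mean_util) ** 2 for u in utils)  # deviation from mean
    violations = sum(u > util_threshold for u in utils)
    violations += sum(r["delay_ms"] > delay_threshold_ms for r in routes)
    loss = sum(r["cell_loss"] for r in routes)
    delay = sum(r["delay_ms"] for r in routes)
    return w_util * balance + w_viol * violations + w_loss * loss + w_delay * delay

route_set = [{"utilizations": [0.55, 0.70], "cell_loss": 1e-6, "delay_ms": 12.0},
             {"utilizations": [0.85], "cell_loss": 1e-5, "delay_ms": 30.0}]
print(f"cost = {evaluate_route_set(route_set):.3f}")
```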
Route finder component 503 produces sets of routing table data which are input into system manager 900 in real time. This routing table data may in turn be fed back to the route finder component 503 as part of network data 501.
Route finder component 503 also outputs virtual circuit and virtual path route provisioning data to the adaptive grooming routers 703 which provision bitrate capacity across the broadband ATM network 105, the adaptive grooming routers outputting adaptive grooming router network data 906. Resource manager 504, receiving predicted traffic demand profiles in step 904, merges the predicted traffic demand profiles for a set of virtual paths and generates a predicted capacity envelope data 907 for each virtual path, comparing the predicted capacity envelope for each virtual path with a current capacity envelope data. If current demand as indicated by the predicted demand profiles 904 is higher than the present negotiated capacity, resource manager 504 may issue re-negotiation requests 908 for negotiating more capacity with ATM network 105. Actual negotiation is implemented through communications between the network controller of the customer premises network and local exchange switch 104 of the broadband ATM network.
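The re-negotiation decision itself can be sketched as a comparison between the predicted envelope and the currently contracted limit; the 10% release threshold and the request format below are assumptions made for illustration.

```python
# Sketch of the re-negotiation decision: if the predicted envelope for a
# virtual path exceeds what the current contract provides, request an
# increase; if it leaves a large margin, request a decrease. The 10%
# release threshold and the message format are illustrative assumptions.

def negotiate(vp_id: str, predicted_envelope_mbps: float,
              contracted_mbps: float, release_fraction: float = 0.1):
    if predicted_envelope_mbps > contracted_mbps:
        return {"vp": vp_id, "action": "increase",
                "new_limit_mbps": predicted_envelope_mbps}
    headroom = contracted_mbps - predicted_envelope_mbps
    if headroom > release_fraction * contracted_mbps:
        # Unused capacity can be released back to the network operator.
        return {"vp": vp_id, "action": "decrease",
                "new_limit_mbps": predicted_envelope_mbps}
    return {"vp": vp_id, "action": "keep", "new_limit_mbps": contracted_mbps}

print(negotiate("VP-113", predicted_envelope_mbps=230.0, contracted_mbps=210.0))
print(negotiate("VP-113", predicted_envelope_mbps=150.0, contracted_mbps=210.0))
```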
Resource manager 504 provides prediction envelope data 907 and re-negotiation request data 908 to the broadband ATM network 105 for requesting a bitrate capacity envelope, provisioning virtual paths across the broadband network, re-negotiating capacity on existing virtual paths, and re-negotiation of new virtual paths. Resource manager 504 receives answer data 909 from the local exchange switch 104 of the broadband ATM network 105. Transmission of answer data, prediction capacity envelope data and re-negotiation request data may be made by conventional means between network controller 110 and local exchange switch 104, eg by a CORBA interface or an HP OpenView protocol.
Similarly, re-negotiation request data 908 may specify re-negotiation of a lower bandwidth, or fewer number of virtual paths across the ATM network 105. The resource manager 504 may also send routing request data to component 503 for requesting one or more new virtual trunk groups and may send route re-selection request data 911 to the MIB of system manager 900 requesting re-selection of routes. Resource manager 504 is kept up to date with current configurations of the network in real time through two way transmission of control and current data 912 between resource manager 504 and the private network system manager 900.
Fig. 10 of the accompanying drawings illustrates examples of steps which may be executed by network traffic prediction component 502. At step 1001 traffic prediction component 502 receives historical network traffic data generated by pre-processor component 501 as input. At step 1002 the historical network traffic data is used as input to a traffic prediction algorithm. The algorithm may be implemented by means of a known neural network algorithm or other known prediction algorithm means. In the case of a neural network, the neural network algorithm is "trained" on successive historical traffic profile data for an end to end connection, and outputs a predicted traffic demand for each connection. The algorithm produces a prediction of network bitrate capacities for all end to end connections between source and destination nodes at different corporate sites included in the input historical traffic profile data for a future time t. At step 1003 further historical traffic profile data may be input. In general, the more historical traffic profile data available to the prediction algorithm, the more accurate its predicted bitrate capacity requirements will be. If more historical traffic profile data is available then control is passed back to step 1001 in order for more historical traffic profile data to be input for use by the prediction algorithm. Otherwise, control is passed on to step 1004, where the predicted network bitrate capacity requirement for future time t is output to route finder component 503. At step 1005, if another prediction is required for network bitrate capacity requirements at another future time t, then control is passed back to step 1001. Otherwise, network traffic prediction terminates at step 1006.
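The patent leaves the prediction algorithm open (a known neural network algorithm or other known prediction algorithm means). As an illustration of the train-then-predict loop of Fig. 10, the sketch below substitutes a simple autoregressive least-squares predictor trained on a sliding window of historical bitrate samples; the window length and traffic figures are invented.

```python
# Sketch of a stand-in prediction algorithm: fit y_t from the previous
# `window` samples by least squares, then predict the next interval.
import numpy as np

def train_predictor(history, window=4):
    X = np.array([history[i:i + window] for i in range(len(history) - window)])
    y = np.array(history[window:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares weights
    return w

def predict_next(history, w, window=4):
    return float(np.dot(history[-window:], w))

# Hourly bitrate samples (Mbps) for one end to end connection.
traffic = [10, 12, 15, 30, 55, 60, 58, 40, 20, 12, 11, 14, 32, 57, 62, 59]
weights = train_predictor(traffic)
print(f"predicted demand for next interval: {predict_next(traffic, weights):.1f} Mbps")
```

As with the loop of Fig. 10, feeding further historical samples into `train_predictor` would generally improve the fitted weights before a prediction is output.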
Fig. 11 herein illustrates a network manager screen display available at graphical user interface 401, showing network elements of a communications network including a corporate network. The network manager display comprises a plurality of icons representing customer premises equipment 1100; a plurality of icons representing routers 1101; a plurality of icons representing local exchanges 1102, these being entry ports to the broadband public ATM network; a plurality of icons representing links 1103 between the customer premises equipment and the routers and local exchanges; a plurality of icons representing virtual trunk groups 1104 between customer premises equipment of the corporate user's network; and a plurality of icons representing virtual cross connects 1105 for connecting the corporate user's separate customer premises equipment. A corporate user may select routes between individual customer premises networks of the private corporate network using a pointing device, eg a mouse or trackball, which moves an electronic cursor icon across the screen. Individual network elements may be selected by pointing the electronic cursor icon at icons representing those network elements and pressing a switch on the pointing device.
Fig. 12 herein illustrates a screen display produced at graphical user interface 401 of network controller 110 summarizing predicted network bitrate capacity requirements. The display comprises a graph with a horizontal axis representing time divided into 1 hour intervals. The vertical axis of the graph is divided into a plurality of lines, each of the lines, for example the line denoted by reference numeral 1201, representing bitrate capacity requirements of a virtual trunk route between two corporate sites over the time period represented on the horizontal axis. The predicted bitrate capacity requirements may be color coded for clarity. For example, a relatively high bitrate capacity requirement may be displayed in black or red whilst a relatively low bitrate capacity requirement may be displayed in yellow. A vertical scroll bar icon 1202 may be used to display predicted bitrate capacity requirements for other connections between other pairs of network elements.
By visual inspection of the display of predicted network bitrate capacity requirements shown in Fig. 12 herein, a corporate user may see at a glance which virtual paths in the corporate network are predicted to approach or exceed their current contracted capacity limits, and therefore require contract re-negotiation before the predicted communications are made.
Fig. 13 herein illustrates steps executed by route finder component 503. At step 1301 predicted network bitrate requirement data between pairs of network elements generated by component 502 are input. At step 1302 the route finder component determines which routing criteria it will use to allocate network resources such as bitrate capacity for routing connections between nodes. Examples of routing criteria are shortest possible path, or balanced loading, wherein the shortest possible path is not always taken so that network traffic can be more evenly distributed across network resources. At step 1303 a data describing a set of connections and their predicted traffic profile data is selected. At step 1304, a routing algorithm is applied to determine a number of possible route options for a plurality of connection requests, taking into account the selected routing criteria and the input predicted traffic profile data. In one implementation the route finder module may operate a known genetic algorithm. In other implementations, the routing algorithm used can be a conventional heuristic one such as the known Yen-Lawler routing algorithm. At step 1305, if there are no more connections to route, then route selection is optimized for each connection based on the selected routing criteria. Optimization of the routing takes into account a plurality of routes at the same time.
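The patent names a genetic algorithm as one possible routing algorithm but gives no implementation. The sketch below is a minimal genetic-style search over precomputed route options for a set of connections; the fitness function (penalizing link overload), the population parameters and the topology are all assumptions for illustration.

```python
# Minimal genetic-style route selection: each individual assigns one
# candidate route to every connection; fitness penalizes link overload.
import random

random.seed(0)

def fitness(assignment, route_options, demands, capacity):
    load = {}
    for conn, choice in enumerate(assignment):
        for link in route_options[conn][choice]:
            load[link] = load.get(link, 0.0) + demands[conn]
    return -sum(max(0.0, l - capacity) for l in load.values())

def evolve(route_options, demands, capacity, pop=20, gens=50):
    n = len(route_options)

    def score(a):
        return fitness(a, route_options, demands, capacity)

    population = [[random.randrange(len(route_options[c])) for c in range(n)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=score, reverse=True)
        parents = population[:pop // 2]          # keep the fitter half
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < 0.2:            # point mutation
                i = random.randrange(n)
                child[i] = random.randrange(len(route_options[i]))
            children.append(child)
        population = parents + children
    return max(population, key=score)

# Two connections, each offered two candidate routes given as link lists.
route_options = [
    [[("S", "T1"), ("T1", "D1")], [("S", "T3"), ("T3", "D1")]],
    [[("S", "T1"), ("T1", "D2")], [("S", "T3"), ("T3", "D2")]],
]
print(evolve(route_options, demands=[60.0, 60.0], capacity=100.0))
```

With both demands sharing the ("S", "T1") link and exceeding the assumed 100 Mbps capacity, the search converges on an assignment that splits the two connections over different first hops, illustrating the balanced loading objective.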
Fig. 14 of the accompanying drawings shows a graphical representation illustrating schematically a process carried out as a data processing operation of the routing algorithm of route finder component 503. The route finder component may select routes on the basis of lowest cost criteria or on the basis of shortest path criteria. If, for example, a lowest cost criterion is applied, and a connection is to be routed between a source network element at a source node S 1401 and a destination network element at a destination node D2 1402, the connection may be achieved via network links 1406 - 1408 between the source and destination nodes, via intermediate nodes 1403, 1404. Each link has an associated cost 1405 which the routing algorithm may take into consideration. The associated costs may represent, for example, physical distance between links or available bitrate capacity. If the routing algorithm is set to select a "lowest cost" criterion then the connection between source and destination network elements S (1401) and D2 (1402) would be routed via a lowest cost route through links 1406 - 1408.
A shortest path route would force a route between source 1401 and destination 1402 through nodes T1 (1403), T2 (1404). Similarly, a shortest path route between source node S and destination node D1 (1408) may also pass through nodes T1, T2. However, low cost links attract traffic and can quickly become congested. Using a shortest path criterion for route selection cannot avoid placing traffic on shortest path links, and risks congesting those links. Thus, the route finder component 503 considers a plurality of number k shortest paths, taking into account congestion, and may alternatively route traffic between source S and destination D1 through nodes T3, T4. The route finder component 503 is not restricted to routing virtual paths over the shortest links.
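The patent mentions the known Yen-Lawler algorithm for enumerating k shortest paths; the brute-force sketch below stands in for such a routine on a small topology loosely modeled on Fig. 14 (node names and cost values are invented), showing how a second-best path through T3, T4 becomes available when the cheapest path is congested.

```python
# Enumerate all loop-free paths, rank them by total link cost, and keep
# the k cheapest; a production system would use Yen-Lawler instead.
def simple_paths(graph, src, dst, path=None):
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph[src]:
        if nxt not in path:
            yield from simple_paths(graph, nxt, dst, path)

def k_shortest(graph, costs, src, dst, k):
    ranked = sorted(simple_paths(graph, src, dst),
                    key=lambda p: sum(costs[(a, b)] for a, b in zip(p, p[1:])))
    return ranked[:k]

graph = {"S": ["T1", "T3"], "T1": ["T2"], "T2": ["D1"],
         "T3": ["T4"], "T4": ["D1"], "D1": []}
costs = {("S", "T1"): 1, ("T1", "T2"): 1, ("T2", "D1"): 1,
         ("S", "T3"): 2, ("T3", "T4"): 2, ("T4", "D1"): 2}
for candidate in k_shortest(graph, costs, "S", "D1", k=2):
    print(candidate)  # the second candidate avoids the congested T1, T2 path
```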
The route finder component 503 may route data corresponding to different traffic types. A link cost can be assigned to a link on a per traffic type basis.
By assigning each link a plurality of different cost data, depending on the type of traffic to be carried over the link, traffic of different types between a single source and single destination may be routed over different routes. For example, voice traffic data may be routed over a different route between source S and destination D1 from bursty computer generated data between the same source and destination, and video traffic between the same source and destination may be routed by a third route. Allocation of different cost data to each link allows real networks to be modeled with improved accuracy by the route finder component 503. Further, traffic between source S and destination D1 can be distributed across a plurality of number m paths. Using the routing algorithm:
- several routes can be used to carry the service request
- traffic distribution can be user-defined or optimized by route finder component 503
- by default, a single route may carry all service traffic

Further, both point to point (single source to single destination) traffic and point to multipoint (single source to multiple destinations) traffic can be accommodated by the route finder component 503. Point to multipoint traffic is handled using the lowest cost criterion, since a shortest path route is unsuitable for providing a solution for point to multipoint routing. Further, the lowest cost routing criterion is preferred for dealing with mixed network traffic data type service requests.
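An illustrative sketch of per traffic type link costs follows; the cost figures and route candidates are invented, and show how voice, bursty data and video between the same source and destination can resolve to different lowest cost routes.

```python
# Each link carries a distinct cost per traffic type, so the lowest cost
# route can differ by traffic type even between the same pair of endpoints.
link_costs = {
    ("S", "T1"): {"voice": 1.0, "data": 5.0, "video": 3.0},
    ("T1", "D1"): {"voice": 1.0, "data": 5.0, "video": 3.0},
    ("S", "T3"): {"voice": 4.0, "data": 1.0, "video": 2.0},
    ("T3", "D1"): {"voice": 4.0, "data": 1.0, "video": 2.0},
}

def route_cost(path, traffic_type):
    return sum(link_costs[(a, b)][traffic_type] for a, b in zip(path, path[1:]))

candidates = [["S", "T1", "D1"], ["S", "T3", "D1"]]
for ttype in ("voice", "data", "video"):
    best = min(candidates, key=lambda p: route_cost(p, ttype))
    print(ttype, "->", " - ".join(best))  # voice via T1; data and video via T3
```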
Fig. 15 of the accompanying drawings details an architecture of resource management component 504. Routed connection data generated by route finder component 503 is input to a capacity envelope generator component 1501 which generates a capacity envelope data representing a capacity limit which closely follows a predicted bitrate requirement over a route specified in the predicted traffic profile data of that route. A capacity envelope data generated by component 1501 is used as an input to a traffic management component 1502. Traffic management component 1502 uses the capacity envelope data to negotiate traffic contracts with network manager 112 of the broadband public ATM backbone network. The traffic management component may re-negotiate the traffic contract at preset intervals, usually upon receipt of an updated predicted capacity envelope, in order that the bitrate capacity specified in the traffic contract and the capacity envelope's bitrate capacity for a virtual path differ by as little as possible. If, for a virtual path, the bitrate capacity provided by the traffic contract is greater than the bitrate capacity requirements of the private network at certain times, the resource management component 504 may "sell" the excess bitrate capacity to other network users. The traffic management component may also feed back data describing bitrate capacity usage on the network at a particular time, indicated by arrow 1503, to pre-processor component 901 in order to provide updated network traffic profiles.
Fig. 16 of the accompanying diagrams illustrates steps executed by capacity envelope generation component 1501. At step 1600 the routed predicted traffic profile data output by route finder component 503 are received as inputs by capacity envelope generator component 1501. At step 1601 the capacity envelope generator determines the time intervals at which the capacity envelope will be calculated, for example every 15 minutes. The intervals may be a user-defined variable. At step 1602 the capacity envelope generator determines a confidence level upon which capacity envelope calculations will be based. Prediction algorithms operated in the prediction component 502 will have varying degrees of accuracy, and may not precisely predict the actual bitrate capacity utilized by future connections. The confidence level may be a user defined variable which reflects the perceived accuracy of the prediction algorithm; for example, a 90% confidence level may indicate that the actual bitrate has a 90% confidence of falling within the capacity envelope limit. In setting the confidence level, there is a trade-off between setting a capacity limit envelope which is unnecessarily high, but which stands a low chance of being exceeded by actual communications usage, thereby ensuring a high confidence of avoiding penalties for exceeding the traffic contract bitrate limit, and setting a lower capacity limit envelope which purchases only the minimum capacity required for communications, but with a higher risk that the actual communications usage will exceed the contracted capacity limit envelope. At step 1603 a first route from the input data is selected. At step 1604 a bitrate capacity limit for the selected route is calculated over the determined interval, taking into account the confidence level. At step 1605, if the interval calculated represents the end of the period for which the capacity envelope is to be calculated then control is passed to step 1606. Otherwise, step 1604 is repeated for a next interval. Step 1606 generates the capacity envelope for the selected route by joining the bitrate capacity limits over all intervals calculated at step 1604. At step 1607 a logic question is asked whether there are more routes for which capacity envelopes are to be calculated. If the question asked at step 1607 is answered in the negative then the capacity envelope generator terminates at step 1608. If the question asked at step 1607 is answered in the affirmative then a next route for which a capacity envelope is to be generated is selected from the input data at step 1609 and control is passed back to step 1604.
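The patent does not specify how the confidence level enters the per-interval calculation at step 1604. One plausible reading, sketched below with invented numbers, takes the envelope limit for each interval as the corresponding quantile of the predicted bitrate samples, so that a 90% confidence level yields a limit the actual bitrate is expected to fall within 90% of the time.

```python
# Assumed quantile formulation of the capacity envelope: per interval,
# pick the sample at the `confidence` quantile of the predicted bitrates.
def capacity_envelope(interval_samples, confidence=0.9):
    envelope = []
    for samples in interval_samples:
        ordered = sorted(samples)
        idx = min(len(ordered) - 1, int(confidence * len(ordered)))
        envelope.append(ordered[idx])
    return envelope

# Predicted Mbps samples for three successive 15 minute intervals.
predicted = [[8, 10, 12, 9], [30, 42, 38, 35], [55, 60, 48, 52]]
print(capacity_envelope(predicted, confidence=0.9))  # high, rarely exceeded
print(capacity_envelope(predicted, confidence=0.5))  # lower, riskier limit
```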
Fig. 17 of the accompanying drawings represents a screen display produced at graphical user interface 401 of the network controller 110 showing a capacity envelope for a predicted trunk route. The display comprises a graph with a horizontal axis representing time divided into time intervals, usually the same time interval as determined at step 1601 of Fig. 16. The vertical axis of the graph represents bitrate capacity in megabits per second. The capacity envelope data generated is represented by a solid graph line 1701. The predicted bitrate capacity requirement of the selected route is indicated by a broken graph line 1702. The display also comprises a horizontal scroll bar 1706 which may be used to display further time intervals or other route connections. The display also contains three icons 1703, 1704 and 1705, each corresponding to a different confidence level which may be selected by a user by manipulation of a pointing device, eg a mouse or trackball.
Fig. 18 of the accompanying drawings illustrates steps executed by a traffic management process 1502. The steps shown are executed for all connections implemented by the private corporate network at any time. At step 1801 a logic question is asked whether the current bitrate capacity required by the connection is greater than the capacity envelope limit predicted for that connection. If the question asked at step 1801 is answered in the negative then no action is taken for the particular connection under consideration and the traffic management terminates at step 1807. If the question asked at step 1801 is answered in the affirmative then control is passed on to step 1802, wherein a logic question is asked whether the network controller's traffic management component 1502 is to negotiate for a new bitrate capacity with the network manager. If the result of step 1802 is affirmative then network controller 110 or an operator at CPN 101 negotiates with network manager 112 for a new traffic contract with sufficient bitrate capacity to implement the connection under consideration. If the result of step 1802 is negative, or if it is not possible to re-negotiate a new traffic contract, then control is passed on to step 1804. At step 1804 a question is asked whether the current traffic contract allows the bitrate capacity to be exceeded. If the question asked at step 1804 is answered in the affirmative then the network controller accepts a financial penalty or a risk of loss of quality of service at step 1805 and the traffic management process terminates for the connection currently under consideration at step 1807. If the question asked at step 1804 is answered in the negative then control is passed on to step 1806, wherein the connection may be rerouted before the traffic management process terminates at step 1807.
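The decision flow of Fig. 18 reduces to a small amount of branching logic. The sketch below restates it in Python; the function signature and the boolean representation of the traffic contract terms are assumptions for illustration.

```python
# Restatement of the Fig. 18 traffic management flow for one connection.
def manage_connection(current_mbps, envelope_mbps,
                      can_renegotiate, contract_allows_excess):
    if current_mbps <= envelope_mbps:
        return "no action"                        # step 1801 negative -> 1807
    if can_renegotiate:
        return "negotiate new traffic contract"   # step 1802 affirmative
    if contract_allows_excess:
        return "accept penalty or QoS risk"       # step 1804 affirmative -> 1805
    return "reroute connection"                   # step 1804 negative -> 1806

print(manage_connection(50.0, 40.0, can_renegotiate=False,
                        contract_allows_excess=False))  # -> reroute connection
```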
Fig. 19 herein illustrates schematically a predicted capacity demand envelope data 907 produced by resource manager 504. The predicted capacity envelope data 1902 is illustrated herein in the form of a graph with vertical axis representing bitrate capacity in megabits per second and horizontal axis representing time over a seven day period for use of a virtual path. Graph line 1901 illustrates predicted bitrate traffic demand requirement between two corporate sites. Graph line 1902 represents a required traffic contract bitrate capacity limit envelope which varies at 15 minute intervals over the seven day period shown on the graph for connections between first and second corporate sites.
As can be seen from the graph in Fig. 19, the difference between the predicted capacity demand envelope 1902, ie the traffic contract bitrate capacity the customer wishes to pay for, and the predicted bitrate requirement 1901 upon which the capacity envelope is based is, overall, much less than the difference between the utilized capacity 1901 and the fixed capacity limit 1903. The predicted capacity limit envelope 1902 is greater than the predicted bitrate capacity 1901 at all times over the seven day period. Thus, if the predicted bitrate capacity requirement 1901 is an accurate indication of the customer's actual bitrate usage over the time period represented in the graph, and the capacity limit envelope of the traffic contract can be negotiated with the network operator in advance, then the customer pays for the bitrate capacity limit in the traffic contract which they actually require, rather than paying for bitrate capacity that they do not need. By predicting peak usage over a virtual path in advance, the customer can negotiate an increased capacity limit in the traffic contract for a period covering the peak bitrate usage of the virtual path, thereby avoiding a financial penalty for exceeding the capacity limit specified in the traffic contract.
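A worked example with invented figures illustrates the saving suggested by Fig. 19: paying for a time-varying envelope rather than a flat contract pitched at the peak rate.

```python
# Hypothetical five-interval comparison of capacity paid for under a
# per-interval envelope versus a fixed peak-rate contract limit.
predicted_mbps = [10, 20, 60, 25, 15]   # predicted demand (cf. line 1901)
envelope_mbps = [12, 24, 66, 30, 18]    # negotiated envelope (cf. line 1902)
fixed_limit_mbps = 66                   # flat contract at peak (cf. line 1903)

envelope_total = sum(envelope_mbps)                   # 150 Mbps-intervals
fixed_total = fixed_limit_mbps * len(envelope_mbps)   # 330 Mbps-intervals
print(f"paid for under envelope contract: {envelope_total}")
print(f"paid for under fixed peak contract: {fixed_total}")
```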
Claims (39)
1. A network management apparatus for managing data traffic capacity resources available to a customer equipment switch, said apparatus comprising:
a traffic prediction means operating to predict data traffic capacity requirements of said customer equipment switch; and a resource management means operating to negotiate a traffic contract, depending on a result of said predicted data traffic capacity requirements.
2. A network management apparatus as claimed in claim 1 comprising a routing means operating to determine a corresponding route for each of a plurality of source to destination connections.
3. A network management apparatus according to claim 2, wherein said routing means inputs data produced by said prediction means.
4. A network management apparatus according to claim 2 or 3, wherein said routing means comprises a processor operating in accordance with a route finding algorithm to assign a respective route to each of a plurality of said connections on a least cost basis.
5. A network management apparatus according to any one of claims 2 to 4, wherein said routing means comprises a processor and memory operating in accordance with a route finding algorithm to assign a respective route to each of a plurality of connections on a shortest path basis.
6. A network management apparatus according to any one of claims 2 to 5, wherein said routing means comprises a processor operating a genetic algorithm.
7. A network management apparatus according to any one of the above claims, wherein said resource management means produces a capacity envelope describing predicted bitrate requirements of said connections over a period of time.
8. A network management apparatus according to any one of the above claims, wherein said resource management means operates a negotiation procedure at preset intervals.
9. A network management apparatus according to any one of the above claims, wherein said traffic prediction means comprises a processor and memory operating a neural network algorithm.
10. A network management apparatus according to any one of the above claims, wherein said traffic prediction means uses historical network traffic data.
11. A network management apparatus according to any one of the above claims, wherein said connections are implemented by means of virtual paths in an ATM network.
12. A customer equipment switch incorporating a network management apparatus as claimed in any one of the above claims.
13. A management apparatus for managing transmission resources in a private communications network, said apparatus comprising:
a traffic prediction means operating to predict end to end traffic across said private network; a route finding means operating to determine routes between a plurality of source and destination end points of said private communications network; and a resource management means operating to allocate transmission resources for carrying said end to end traffic.
14. A network management apparatus for managing transmission resources in a private communications network utilizing resources provided in accordance with a traffic contract by a public backbone network, said management apparatus comprising:
a traffic prediction means operating to predict end to end traffic requirements across said private network; a route finding means operating to determine a plurality of routes between a plurality of source and destination end points of said end to end traffic of said public backbone network; means for generating a predicted capacity envelope data describing a predicted upper limit of required data traffic of said plurality of connections; and negotiation means operating to produce negotiation signals for negotiating bitrate capacity limits of said connections with said public backbone network.
15. A method of managing data traffic capacity resources in a communications network wherein a traffic contract imposing limitations on said data traffic capacity resources used by a plurality of source-destination connections exists, said method comprising the steps of:
obtaining a prediction of resource requirements for future connections in said network; and negotiating a future traffic contract in response to said predicted resource requirements.
16. A method according to claim 15, wherein said traffic capacity requirement comprises a bitrate capacity requirement.
17. A method according to claim 15 or 16, wherein said step of obtaining a prediction of resource requirements and said step of negotiation occur at preset intervals.
18. A method according to any one of claims 15 to 17, wherein said step of obtaining a prediction of resource requirements of future connections comprises:
inputting network configuration data; inputting network traffic data; and operating a traffic prediction algorithm on said network traffic data to produce a predicted traffic data.
19. A method according to claim 18, wherein said network traffic data comprises historical network traffic data.
20. A method according to claim 18 or 19, wherein said prediction algorithm operates to produce a said predicted traffic data having an associated confidence level.
21. A method as claimed in any one of claims 15 to 20, wherein said step of obtaining a prediction of resource requirements comprises:
generating a predicted traffic data of bitrates over a plurality of connections; and generating a traffic capacity envelope data describing a predicted upper limit of required data traffic capacity of said plurality of connections.
22. A method according to claim 21, wherein said step of obtaining a prediction of resource requirements comprises the step of:
producing a traffic capacity envelope data describing predicted bitrate requirements of said connections over a period of time.
23. A method according to any one of claims 15 to 22, wherein said step of negotiating a future traffic contract comprises the steps of:
comparing a current bitrate requirement of a connection to a predicted bitrate requirement; and if said current connection's bitrate requirement exceeds said predicted bitrate requirement, renegotiating an increase in network resources available in said traffic contract.
24. A method according to any one of claims 15 to 23, comprising the steps of:
comparing a current connection's resource requirements with said predicted resource requirement; and if said current connection's resource requirement exceeds said predicted resource requirement, selecting a new route for said current connection.
25. In a communications network comprising a private communications network supported by a public backbone network, a method of managing transmission resources of said private communications network, said method comprising the steps of:
collecting measured traffic data representing end to end communications traffic carried over said private communications network; generating predicted traffic data representing predicted future end to end communications traffic carried over said private communications network; and negotiating a traffic contract for providing transmission resource capable of supporting said predicted future end to end communications traffic.
26. A method according to claim 25, wherein said step of generating predicted traffic data comprises:
inputting traffic data into said prediction engine; training a neural network algorithm on said traffic data; and operating said neural network algorithm to produce a predicted traffic data.
27. A method according to claim 26, wherein said traffic data input into said neural network algorithm comprises traffic demand data.
28. A method according to any one of claims 25 to 27, wherein said step of generating predicted traffic data comprises the step of generating a traffic envelope data representing an estimate of transmission resources required to support a predicted traffic demand.
29. A method according to any one of claims 25 to 28, comprising the steps of:
comparing a current traffic demand data representing current end to end communications traffic carried across said private network with said traffic envelope data; and if said current traffic demand data exceeds said current traffic envelope data, renegotiating an increase in transmission resource.
30. A method according to claim 29, wherein said step of renegotiating an increase in transmission resource comprises negotiating a new virtual path with a broadband ATM network.
31. A method according to any one of claims 25 to 30, comprising the step of:
comparing a current traffic (demand) data representing a current end to end communications traffic carried across said private network with said traffic envelope data; and if said current traffic (demand) data exceeds said current traffic envelope data, selecting a new route.
32. A method according to claim 31, comprising the step of:
determining a route between a source network element and a destination network element of each of a plurality of virtual paths.
33. In a communications network comprising a private network having a plurality of geographically separated customer premises equipment linked by a public backbone network, wherein a traffic contract exists between said private network and said backbone network, said traffic contract specifying limitations on available bitrate capacity linking said customer premises equipment, a method of managing said bitrate capacity over said backbone network comprising the steps of:
generating a traffic capacity prediction data describing a future bitrate capacity provision required by a said customer premises equipment for carrying a plurality of end to end connections; generating routing data describing allocations of traffic of said end to end connections over said backbone communications network; and determining a capacity envelope data from said traffic capacity prediction data for each route described in said routing data.
34. The method according to claim 33, wherein said step of generating routing data comprises operating a routing algorithm in accordance with an optimization criteria data for obtaining said routing data describing an optimized allocation of traffic over said backbone communications network.
35. The method according to claim 33 or 34, wherein said step of generating a traffic capacity prediction data comprises generating a said traffic capacity prediction data for each of a plurality of virtual paths across said backbone network.
36. The method according to any one of claims 33 to 35, comprising the step of determining a capacity envelope data for each of a plurality of virtual paths across said backbone network.
37. The method according to claim 36, wherein said step of determining a capacity envelope data comprises determining a confidence level data describing a confidence of said traffic capacity envelope limit being exceeded.
38. The method according to any one of claims 33 to 37, further comprising the step of negotiating a capacity limit for each of a plurality of virtual paths across said backbone network according to a predetermined negotiation criteria data.
39. In a communications network comprising a private network having a plurality of geographically separated customer premises equipment linked by a public backbone network, wherein a traffic contract exists between said private network and said backbone network, said traffic contract specifying limitations on available bitrate capacity linking said customer premises equipment, a method of managing said bitrate capacity over said backbone network comprising the steps of:
generating a traffic capacity prediction data describing a future bitrate capacity provision required by a said customer premises equipment for carrying a plurality of end to end connections; generating routing data describing allocations of traffic of said end to end connections over said backbone communications network; and negotiating a capacity availability over said backbone communications network according to said traffic capacity prediction data and said routing data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9808349A GB2338144A (en) | 1998-04-22 | 1998-04-22 | Predictive capacity management |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9808349A GB2338144A (en) | 1998-04-22 | 1998-04-22 | Predictive capacity management |
Publications (2)
Publication Number | Publication Date |
---|---|
GB9808349D0 GB9808349D0 (en) | 1998-06-17 |
GB2338144A true GB2338144A (en) | 1999-12-08 |
Family
ID=10830626
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB9808349A Withdrawn GB2338144A (en) | 1998-04-22 | 1998-04-22 | Predictive capacity management |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2338144A (en) |
- 1998-04-22 GB GB9808349A patent/GB2338144A/en not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1995024802A1 (en) * | 1994-03-09 | 1995-09-14 | British Telecommunications Public Limited Company | Bandwidth management in a switched telecommunications network |
GB2311439A (en) * | 1996-03-21 | 1997-09-24 | Northern Telecom Ltd | Data communication network |
EP0798942A2 (en) * | 1996-03-29 | 1997-10-01 | Gpt Limited | Routing and bandwith allocation |
GB2311689A (en) * | 1996-03-29 | 1997-10-01 | Plessey Telecomm | Bidding for bandwidth in a telecommunications system |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2395869A (en) * | 2001-06-15 | 2004-06-02 | Datasquirt Ltd | Intelligent wireless messaging system |
GB2395869B (en) * | 2001-06-15 | 2005-02-16 | Datasquirt Ltd | Intelligent wireless messaging system |
AU2002314649B2 (en) * | 2001-06-15 | 2006-11-16 | Datasquirt Limited | Intelligent wireless messaging system |
WO2003084252A1 (en) * | 2002-06-14 | 2003-10-09 | Datasquirt Limited | Intelligent wireless messaging system |
US10255607B2 (en) | 2006-11-15 | 2019-04-09 | Disney Enterprises, Inc. | Collecting consumer information |
Also Published As
Publication number | Publication date |
---|---|
GB9808349D0 (en) | 1998-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0765552B1 (en) | Enhancement of network operation and performance | |
US6069894A (en) | Enhancement of network operation and performance | |
US5970064A (en) | Real time control architecture for admission control in communications network | |
KR100235689B1 (en) | The improved dynamic bandwidth predicting and adapting apparatus and method in high speed packet switch | |
CA2299785C (en) | Packet scheduling in a communication network with statistical multiplexing of service classes | |
Murphy et al. | Distributed pricing for embedded ATM networks | |
EP0931408B1 (en) | Multi-protocol telecommunications routing optimization | |
CA2187242C (en) | A method of admission control and routing of virtual circuits | |
US5978387A (en) | Dynamic allocation of data transmission resources | |
JP2000286896A (en) | Packet routing device, packet routing method and packet router | |
WO2002009494A2 (en) | End-to-end qos in a softwitch-based network | |
EP0765582B1 (en) | A resource model and architecture for a connection handling system | |
JP2003534678A (en) | Dynamic optimization of high quality services in data transfer networks | |
US6842780B1 (en) | Method of management in a circuit-switched communication network and device which can be used as a node in a circuit-switched communication network | |
EP0748142A2 (en) | Broadband resources interface management | |
GB2338144A (en) | Predictive capacity management | |
Arvidsson | High level B-ISDN/ATM traffic management in real time | |
US20090268756A1 (en) | Method for Reserving Bandwidth in a Network Resource of a Communications Network | |
JP3856837B2 (en) | Method of management in circuit switched communication network and apparatus usable as node in circuit switched communication network | |
Resende | Combinatorial optimization in telecommunications | |
CA2348577A1 (en) | Management of terminations in a communications network | |
AU780487B2 (en) | Network linkage type private branch exchange system and control method thereof | |
Baglietto et al. | A proposal of new price-based call admission control rules for guaranteed performance services multiplexed with best effort traffic | |
YAMAMOTO et al. | Dynamic routing schemes for advanced network management | |
Youn et al. | A Shared Buffer‐Constrained Topology Reconfiguration Scheme in Wavelength Routed Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |