WO2007134339A2 - Engine near cache for reducing latency in a telecommunications environment - Google Patents


Info

Publication number
WO2007134339A2
WO2007134339A2 (PCT/US2007/069023)
Authority
WO
WIPO (PCT)
Prior art keywords
state
tier
engine
message
cache
Prior art date
Application number
PCT/US2007/069023
Other languages
French (fr)
Other versions
WO2007134339A3 (en)
Inventor
Anno R. Langen
Rao Nasir Khan
John D. Beatty
Ioannis Cosmadopoulos
Original Assignee
Bea Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bea Systems, Inc.
Publication of WO2007134339A2
Publication of WO2007134339A3


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/101 Server selection for load balancing based on network conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 Session management
    • H04L65/1101 Session protocols
    • H04L65/1104 Session initiation protocol [SIP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80 Responding to QoS
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1029 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching

Definitions

  • the current invention relates generally to managing telecommunications and more particularly to providing a near cache for reducing latency in a cluster network environment.
  • These services can be limited by existing IT and network infrastructures that are closed, proprietary and too rigid to support these next generation services.
  • PSTN Public Switched Telephone Networks
  • VoIP Voice Over Internet Protocol
  • VoIP technologies enable voice communication over "vanilla" IP networks, such as the public Internet.
  • a steady decline in voice revenues has resulted in heightened competitive pressures as carriers vie to grow data/service revenues and reduce churn through the delivery of these more sophisticated data services.
  • Increased federal regulation, security and privacy issues, as well as newly emerging standards can further compound the pressure.
  • FIG. 1A is an exemplary illustration of functional system layers in various embodiments.
  • FIG. 1C is an exemplary illustration of a SIP server deployed in a production environment, in accordance with various embodiments.
  • FIG. 2 is an exemplary illustration of the SIP server cluster architecture in accordance with various embodiments of the invention.
  • FIG. 3 is an exemplary illustration of a near cache in the SIP server cluster architecture in accordance with various embodiments of the invention.
  • FIG. 4A is an exemplary flow diagram of the near cache functionality, in accordance with various embodiments.
  • FIG. 4B is an exemplary flow diagram of the engine tier message processing, in accordance with various embodiments.
  • FIG. 4C is an exemplary flow diagram of timing the performance of the near engine cache, in accordance with various embodiments.
  • FIG. 5 is an exemplary illustration of a call flow in a typical SIP communication session, in accordance with various embodiments.
  • While a diagram may depict components as logically separate, such depiction is merely for illustrative purposes. It can be apparent to those skilled in the art that the components portrayed can be combined or divided into separate software, firmware and/or hardware components.
  • a network accessible device/appliance such as a router.
  • an engine-near cache in a session initiation protocol (SIP) server architecture for improving latency and reducing various time costs in processing messages.
  • the SIP server can be comprised of an engine tier and a state tier distributed on a cluster network environment.
  • the engine tier can send, receive and process various messages.
  • the state tier can maintain in-memory state data associated with various SIP sessions.
  • a near cache can reside on the engine tier in order to maintain a local copy of a portion of the state data contained in the state tier.
  • Various engines in the engine tier can determine whether the near cache contains a current version of the state needed to process a message before retrieving the state data from the state tier. Accessing the state from the near cache can save on various latency costs such as serialization, transport and deserialization of state to and from the state tier.
  • the near cache can be tuned to further improve performance of the SIP server.
  • FIGURE 1A is an exemplary illustration of functional system layers in accordance with various embodiments.
  • this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
  • a Session Initiation Protocol (SIP) Server 102 and a Network Gatekeeper 104 can comprise a portfolio of products that collectively make up the Communications Platform 100.
  • the SIP Server 102 provides the Communications Platform 100 with a subsystem in which application components that interact with SIP-based networks may be deployed.
  • the Network Gatekeeper 104 provides a policy-driven telecommunications Web services gateway that allows granular control over access to network resources from un-trusted domains.
  • a variety of shared and re-usable software and service infrastructure components comprise the Communications Platform 100.
  • an Application Server such as the WebLogic™ Application Server by BEA Systems, Inc. of San Jose, California.
  • This Application Server may be augmented and adapted for deployment in telecommunications networks, while providing many features and functionality of the WebLogic Server counterpart widely deployed in enterprise computing environments.
  • Application Server embodiments for use in the telecommunications applications can provide a variety of additional features and functionality, such as without limitation: Optimized for Peak Throughput
  • communications platform embodiments can provide a variety of additional features and functionality, such as without limitation: Highly Deterministic Runtime Environment, Clustering for High-Availability (HA) and Scalability
  • FIGURE IB is another exemplary illustration of functional system layers in a communications platform embodiment.
  • this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
  • Communications platform 100 comprises a SIP Server (WLSS) 102 and a Network Gatekeeper (WLNG) 104.
  • Tools for interacting with Web Services, such as a Web Service - Universal Description Discovery Interface (WS/UDDI) 110 and a Web Service - Business Process Execution Language (WS/BPEL) 112, may be coupled to the SIP Server 102 and the Network Gatekeeper 104 in embodiments.
  • a log/trace and database 114 can assist with troubleshooting.
  • the Communications Platform 100 can interface with an OSS/BSS system 120 via resource adapters 122. Such interfaces can provide access to billing applications 124, Operation, Administration, and Maintenance (OAM) applications 126 and others.
  • OAM Operation, Administration, and Maintenance
  • a policy engine 128 can control the activities of the above-described components which can be implemented in a scalable cluster environment (SCE) 130.
  • SCE scalable cluster environment
  • A Communications Platform embodiment can provide an open, high-performance, software-based fault-tolerant platform that allows operators to maximize revenue potential by shortening time to market and significantly reducing per-service implementation and integration cost and complexity.
  • the Communications Platform is suitable for use by Network Infrastructure Vendors, Network Operators and Communications Service Providers in multiple deployment scenarios ranging from fully IMS-oriented network architectures to hybrid and highly heterogeneous network architectures. It is not restricted to use only in carrier networks, however, and may be deployed in enterprise communications networks without restriction or extensive customization.
  • the Communications Platform can serve in the role of an IMS SIP Application Server and offers Communications Service Providers an execution environment in which to host applications (such as the WebLogic Network Gatekeeper), components and standard service enablers.
  • FIGURE 1C is an exemplary illustration of a SIP server deployed in a production environment, in accordance with various embodiments.
  • this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
  • the SIP server 102 can be used as a back-to-back user agent (B2BUA) 150 in a typical telecommunications environment. A B2BUA can take the place of an intermediary between communications between user agents 160, 162, including various cellular phones, wireless devices, laptops, computers, applications, and other components capable of communicating with one another electronically.
  • the B2BUA 150 can provide multiple advantages, including controlling the flow of communication between user agents, enabling different user agents to communicate with one another (e.g. a web application can communicate with a cellular phone), as well as various security advantages. As an illustration, the user agents can transmit to the SIP server instead of communicating directly to each other, and thus malicious users can be prevented from sending spam and viruses, hacking into other user agent devices, and otherwise compromising security.
  • the SIP server 102 can be implemented as a Java Enterprise Edition application server that has been extended with support for the session initiation protocol (SIP) as well as other operational enhancements that allow it to meet the demanding requirements of the next generation protocol -based communication networks.
  • the SIP server 102 can include an Enterprise Java Beans (EJB) container 144, a Hyper Text Transfer Protocol (HTTP) servlet container 142, a SIP servlet container 140, various Java 2 Enterprise Edition (J2EE) services 146, and SIP 150 and HTTP 148 components.
  • EJB Enterprise Java Beans
  • HTTP Hyper Text Transfer Protocol
  • SIP servlet container 140
  • J2EE Java 2 Enterprise Edition
  • SIP 150 and HTTP 148 components. The SIP stack of the server can be fully integrated into the SIP servlet container 140 and can offer much greater ease of use than a traditional protocol stack.
  • a SIP servlet Application Programming Interface (API) can be provided in order to expose the full capabilities of the SIP protocol in the Java programming language.
  • the SIP servlet API can define a higher layer of abstraction than simple protocol stacks provide and can thereby free up the developer from concern about the mechanics of the SIP protocol itself. For example, the developer can be shielded from syntactic validation of received requests, handling of transaction layer timers, generation of non-application-related responses, generation of fully-formed SIP requests from request objects (which can involve correct preparation of system headers and generation of syntactically correct SIP messages) and handling of lower-layer transport protocols such as TCP, UDP or SCTP.
  • TCP Transmission Control Protocol
  • UDP User Datagram Protocol
  • the container is server software that hosts applications (i.e. contains them).
  • in the case of a SIP container, it hosts SIP applications.
  • the container can perform a number of SIP functions as specified by the protocol, thereby taking the burden off the applications.
  • the SIP container can expose the application to SIP protocol messages (via the SIP Servlet API) on which applications can perform various actions. Different applications can thus be coded and deployed to the container that provides various telecommunication and multimedia services.
  • FIGURE 2 is an exemplary illustration of the SIP server cluster architecture in accordance with various embodiments of the invention.
  • this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
  • While FIGURE 2 shows Host A implementing both an engine node and a data node, this should not be construed as limiting the invention. In many cases, it can be preferable to distribute the engine node and data node onto separate host machines. Similarly, while FIGURE 2 illustrates two host machines, it is possible and even advantageous to implement many more such hosts in order to take advantage of the distribution, load balancing and failover that such an arrangement provides.
  • the load balancer can be a standard load balancing appliance hardware device, and it is not necessary that it be SIP-aware; there is no requirement that the load balancer support affinity between the engines 216.
  • the load balancer can be implemented as software that distributes the messages to the various engines.
  • the primary goal of the load balancer 202 can be to provide a single public address that distributes incoming SIP requests to available servers in the SIP server engine tier 210. Such distribution of requests can ensure that the SIP server engines are fully utilized.
  • the load balancer 202 can also be used for performing maintenance activities such as upgrading individual servers or applications without disrupting existing SIP clients.
  • the SIP server can provide a two-tier cluster architecture model to handle the incoming messages.
  • In this model, a stateless engine tier 210 can process all signaling traffic and can also replicate transaction and session state to the state tier 212 and its partitions 222. Each partition 222 can consist of any number of nodes (replicas) 218, 214 distributed across any number of hosts, such as host 1 220 and host 2 204, which can be implemented as computers linked in a cluster-type network environment.
  • the state tier 212 can be a peer-replicated Random Access Memory (RAM) store that maintains various data objects which can be accessed by the engine nodes.
  • RAM peer-replicated Random Access Memory
  • the state tier can also function as a lock manager where call state access follows a simple library book model (i.e. a call state can be checked out by one SIP engine at a time).
  • the engine tier 210 can be implemented as a cluster of SIP server instances that hosts the SIP servlets which provide various features to SIP clients.
  • the engine tier 210 is stateless, meaning that most SIP session state information is not persisted in the engine tier, but is obtained by querying the state tier 212, which can in turn provide replication and failover services for SIP session data.
  • the engine tier can have state maintained in a local near cache for improving latency.
  • the primary goal of the engine tier 210 can be to provide maximum throughput combined with low response time to SIP clients. As the number of calls or their duration increases, more server instances can be added to the engine tier to manage the additional load. It should be noted, however, that although the engine tier may include many such server instances, it can be managed as a single, logical entity. For example, the SIP servlets can be deployed uniformly to all server instances by targeting the cluster itself, and the load balancer need not maintain affinity between SIP clients and individual servers in the engine tier.
  • the state tier 212 can be implemented as a cluster of SIP server instances that provides a high-performance, highly-available, in-memory store for maintaining and retrieving session state data for SIP servlets.
  • This session data may be required by SIP applications in the SIP server engine tier 210 in order to process incoming messages.
  • session data can be managed in one or more partitions 222.
  • each partition manages a fixed portion of the concurrent call state.
  • For example, with two partitions, the first partition could manage one half of the concurrent call state (e.g. A-M) and the second partition can manage the other half (e.g. N-Z).
  • With three partitions, each can manage a third of the call state, and so on. Additional partitions can be added as needed to manage a large number of concurrent calls.
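The fixed split described above can also be realized by hashing rather than the alphabetic key ranges of the example. The following is a minimal, illustrative sketch (not the patented implementation; the class and method names are assumptions) of deterministically mapping a call ID to one of a fixed set of partitions:

```java
import java.util.List;

// Hypothetical sketch: each partition manages a fixed share of the
// concurrent call state, selected here by hashing the call ID.
public class PartitionSelector {
    private final List<String> partitions;

    public PartitionSelector(List<String> partitions) {
        this.partitions = partitions;
    }

    // Math.floorMod keeps the index non-negative even when
    // hashCode() returns a negative value.
    public String partitionFor(String callId) {
        int index = Math.floorMod(callId.hashCode(), partitions.size());
        return partitions.get(index);
    }

    public static void main(String[] args) {
        PartitionSelector selector =
            new PartitionSelector(List.of("partition-0", "partition-1"));
        // The same call ID always maps to the same partition.
        String first = selector.partitionFor("call-abc@example.com");
        String again = selector.partitionFor("call-abc@example.com");
        System.out.println(first.equals(again)); // prints "true"
    }
}
```

Because the mapping is a pure function of the call ID, any engine node can compute which partition holds a given call state without coordination.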
  • Within each partition 222, multiple servers can be added to provide redundancy and failover should the other servers in the partition fail.
  • those servers can be referred to as replicas because each server maintains a duplicate copy of the partition's call state.
  • nodes 218 and 214 of the partition 222 can be implemented as replicas.
  • the data can be split evenly across a set of partitions, as previously discussed
  • the number of replicas in the partition can be called the replication factor, since it determines the level of redundancy and strength of failover that it provides. For example, if one node goes down or becomes disconnected from the network, any available replica can automatically provide call state data to the engine tier.
  • Replicas 214, 218 can join and leave the partition 222, and each replica can serve in exactly one partition at a time.
  • the total available call state storage capacity of the cluster is a summation of the capacities of each partition 222.
  • each partition 222 can be peer-replicated, meaning that clients perform all operations (reads/writes) to all replicas 218, 214 in the partition (wherein the current set of replicas in the partition is called the partition view).
  • This can provide improved latency advantages over the more traditional synchronous "primary-secondary" architecture, wherein one store acts as a primary and the other nodes serve as secondaries. Latency is reduced because there is no wait for the second hop of primary-secondary systems.
  • the peer-replicated scheme can provide better failover characteristics as well, since there does not need to be a change propagation delay.
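The peer-replicated write path described above can be sketched as a client writing to every replica directly, so there is no second "propagation" hop from a primary to its secondaries. This is an illustrative sketch under assumed names, not the patented implementation:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of peer replication: one write call updates
// every replica in the partition, and a read can be served by any
// replica (e.g. after another replica fails).
public class PeerReplicatedWriter {
    private final List<Map<String, String>> replicas;

    public PeerReplicatedWriter(int replicationFactor) {
        replicas = new java.util.ArrayList<>();
        for (int i = 0; i < replicationFactor; i++) {
            replicas.add(new ConcurrentHashMap<>());
        }
    }

    // The client performs the write against all replicas itself.
    public void write(String callId, String state) {
        for (Map<String, String> replica : replicas) {
            replica.put(callId, state);
        }
    }

    // Any replica can answer a read, which is what gives failover.
    public String readFromAny(int replicaIndex, String callId) {
        return replicas.get(replicaIndex).get(callId);
    }
}
```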
  • the engine nodes 208, 216 can be responsible for executing the call processing. Each call can have a call state associated with it. This call state can contain various information associated with the call, such as the IDs of the caller and callee, where the caller is, what application is running on the callee, and any timer objects that may be associated with the call.
  • a typical message processing flow can involve locking/getting the call state, processing the message and then putting/unlocking the call state.
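That flow can be sketched with a simple lock manager following the library book model mentioned earlier, where a call state is checked out by one engine at a time. This is an illustrative sketch, not the server's actual API; all names are assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory state store with checkout semantics:
// lockAndGet "checks out" a call state, putAndUnlock "returns" it.
public class StateStore {
    private final Map<String, String> states = new ConcurrentHashMap<>();
    private final Map<String, Boolean> locks = new ConcurrentHashMap<>();

    // Succeeds only if no other engine currently holds the lock.
    public String lockAndGet(String callId) {
        if (locks.putIfAbsent(callId, Boolean.TRUE) != null) {
            throw new IllegalStateException("call state already checked out");
        }
        return states.get(callId);
    }

    // Write back the updated state and release the lock.
    public void putAndUnlock(String callId, String newState) {
        states.put(callId, newState);
        locks.remove(callId);
    }
}
```

A message would then be processed as: `String state = store.lockAndGet(callId);` then process the message against the state, then `store.putAndUnlock(callId, updatedState);`.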
  • the operations supported by the replicas for normal operations can include locking/getting and putting/unlocking call state.
  • the engine tier can maintain mainly short lived objects and any long lived objects which may be needed for message processing can be stored on the state tier. This can provide improvements in latency during garbage collection.
  • the Java Virtual Machine (JVM) garbage collector can safely and quickly remove the short lived objects from memory without interfering with the execution of various other threads which may be in the process of executing.
  • the longer lived objects are not as easily removed by the garbage collector (since they may be referenced and depended on by various entities) and thus in some cases, the JVM garbage collector may need to stop processing all threads in order to safely perform its garbage collection.
  • Short lived objects typically exist in a different (more localized) memory scope than the long lived objects, which may be referenced by more entities. Thus, it can be more difficult for garbage collectors to ensure that every executing entity has finished using the long lived objects and various threads are usually stopped in order to perform their regular garbage collection. This can introduce latency.
  • the engine tier can maintain mostly short lived objects. In cases where longer lived objects are needed by the engine tier, they can be retrieved from the state tier, used as short lived objects in the engine tier, and subsequently pushed back to the state tier. This can be advantageous in that garbage collection can cause less interference with thread execution in the engine tier.
  • the state tier 212 can maintain call state in various data objects residing in the random access memory (RAM) of a computer. This can provide significant access speed advantages to the engine tier 210 over the use of a database.
  • call state can be maintained in a database or some other form of persistent store, which can be accessed (albeit slower) by the engine tier.
  • State of various applications running on the SIP server can also be maintained on the state tier. Developers can be provided an API to allow their applications to access the state tier and to store various data thereon for later access by various applications. Alternatively, application state may be stored in a database.
  • FIGURE 3 is an exemplary illustration of the near cache implemented in the SIP server architecture, in accordance with various embodiments of the invention.
  • this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
  • the engine tier 300 can be comprised of multiple engine nodes, such as engine node A 310 and engine node B 316, that have SIP applications 314, 318 running thereon which provide services to various SIP clients 308.
  • a separate state tier 302 cluster can manage state data and the engine nodes can fetch and write state in the state tier as necessary.
  • the state tier can include a number of partitions (such as partition A 306) which can have state replicas 322, 326 for maintaining duplicate state 324, 328 thereon.
  • the engines can write call state data to multiple replicas in each partition in order to provide automatic fail over should a state tier replica go offline.
  • the engine nodes are not entirely stateless, but implement a RAM-based near cache 312, 320 that maintains a portion of the call state 324, 328 locally, as well as in the state tier.
  • an engine tier server can first check its local cache for existing call state data when processing various messages. In one embodiment, if the cache contains the data needed by the engine server, and if the local copy is up to date (when compared to the state tier copy), the engine node can lock the call state in the state tier but read directly from its cache. This can improve response time performance for the request because the engine does not have to retrieve the call state data from a data tier server.
  • Retrieving call state from the state tier can involve various costs.
  • One such cost is the time duration of the communication and transporting the state data between the engine node and a state replica.
  • Another such cost is the time for serialization and de-serialization of the call state.
  • serialization is used to transmit an object or data over the network as a series of bytes. De-serialization involves using these bytes on the receiving end in order to re-construct the object (or a copy thereof).
  • the Java programming language provides automatic serialization and may require that the object be marked by implementing the java.io.Serializable interface. Java can then handle the serialization internally.
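As a concrete illustration of the round trip whose cost the near cache avoids, the following sketch serializes a `Serializable` call-state object to bytes and reconstructs it. The `CallState` class here is a hypothetical stand-in, not the server's actual state class:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // Illustrative call-state object; Serializable lets Java
    // handle the byte-level encoding internally.
    static class CallState implements Serializable {
        private static final long serialVersionUID = 1L;
        final String callId;
        final String status;
        CallState(String callId, String status) {
            this.callId = callId;
            this.status = status;
        }
    }

    // Flatten the object to bytes, as done before network transport.
    static byte[] serialize(CallState state) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(state);
        }
        return bytes.toByteArray();
    }

    // Reconstruct a copy of the object on the receiving end.
    static CallState deserialize(byte[] data)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (CallState) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] wire = serialize(new CallState("call-1", "RINGING"));
        CallState copy = deserialize(wire);
        System.out.println(copy.callId + " " + copy.status); // prints "call-1 RINGING"
    }
}
```

Both steps run on every state-tier round trip; a near-cache hit skips them entirely.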
  • serialization and de-serialization can introduce latency which in certain cases may be undesirable.
  • the SIP server can receive a flurry of initial messages from several SIP clients. It may be advantageous to maintain a local copy of the state on the engine server while handling this flurry of messages, instead of repeatedly accessing the state tier upon every message. Maintaining such a local copy can prevent the need to serialize and de-serialize the state data each time, since it does not need to be transported across the network.
  • the local cache can be further beneficial when a SIP-aware load balancer 304 is used to manage SIP client requests to the engine tier cluster.
  • With a SIP-aware load balancer, all of the requests for one call leg can be directed to the same engine tier server, which can improve the effectiveness of the cache. For example, if the load balancer is not SIP-aware, subsequent messages/requests for the same call could be distributed to different engine tier servers which may have different cache contents, and thus the performance benefit of the near cache can be inhibited. Even in such embodiments, however, some performance improvements can be realized, as there should be at least some cache hits.
  • When messages for the same call leg are distributed to the same engine node, it is more likely that the engine node has the state needed for the message stored locally in the near cache. In this manner, latency can be further improved.
  • Because objects in the near cache can be complex or long lived objects, it may be more difficult for the garbage collector to remove them in order to clean up the memory. This can introduce latency, as previously discussed.
  • On the other hand, using the near cache can reduce the time costs of communicating, transporting, serializing and deserializing data. Therefore, it may be preferable to tune various factors, such as the size of the cache, the JVM and its garbage collection, to a proper balance in order to achieve maximum performance output. As an illustration, latency can be monitored as the maximum size of the near cache is adjusted. This can be used to determine the optimal size of the cache for a particular network.
  • a cache hits counter can be maintained and incremented whenever a 'lockAndGetCallState' call returns a non-null value from the cache.
  • Further alternatives include experimenting with different sizes of the cache and expiration characteristics (such as the least- recently-used scheme) to help determine the recommended settings for different call rates/flows and different deployments, including engine/partition ratio and load balancer features.
  • Another option may be to store the call state as a byte array (the form in which it is received from the state tier) and deserialize it on demand. This may cause slower individual access, but may decrease garbage collection pauses. In various embodiments, a proper balance can be determined by a system administrator or other technical person in order to maximize the performance of the near cache and the SIP server.
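That byte-array option can be sketched as a wrapper that keeps the wire form received from the state tier and deserializes only on first access; the names and structure here are illustrative assumptions:

```java
// Hypothetical sketch: hold the call state as the raw bytes received
// from the state tier and pay the deserialization cost lazily.
public class LazyCallState {
    private final byte[] raw;      // bytes as received from the state tier
    private Object deserialized;   // filled in on first access only

    public LazyCallState(byte[] raw) {
        this.raw = raw;
    }

    // Deserialize on demand; subsequent calls reuse the cached object.
    public synchronized Object get() throws Exception {
        if (deserialized == null) {
            try (java.io.ObjectInputStream in = new java.io.ObjectInputStream(
                    new java.io.ByteArrayInputStream(raw))) {
                deserialized = in.readObject();
            }
        }
        return deserialized;
    }
}
```

Until `get()` is called, the cache holds only a plain byte array, which is cheap for the garbage collector to track.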
  • the cache can be an object cache residing on each of the engine nodes in the engine tier and it can contain a portion of the same information that is contained in the state tier.
  • the near cache can be implemented as a bounded map of call states indexed by call ID.
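One plausible way to realize such a bounded map in Java is a `LinkedHashMap` in access order with an eviction bound, matching the least-recently-used expiration scheme mentioned above. This is a sketch under assumed names and capacity, not the patented implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical bounded LRU map of call states keyed by call ID.
public class NearCache<V> extends LinkedHashMap<String, V> {
    private final int maxEntries;

    public NearCache(int maxEntries) {
        super(16, 0.75f, true); // access-order traversal gives LRU behavior
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
        // Evict the least-recently-used call state once the bound is hit.
        return size() > maxEntries;
    }
}
```

Usage would be along the lines of `NearCache<Object> cache = new NearCache<>(1000); cache.put(callId, callState);`, with the capacity tuned as described above by monitoring latency and cache hits.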
  • call states both in the near cache and in the state tier can be associated with a version. This may be useful in processing synchronous message interaction between several SIP clients when the call state cannot be updated simultaneously.
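A bounded, versioned map of this kind could be sketched as follows. The class and field names are illustrative only, and `LinkedHashMap`'s access-order mode stands in for whatever eviction scheme the implementation actually uses:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: a bounded, least-recently-used map of
// versioned call states indexed by call ID.
public class BoundedCallStateCache
        extends LinkedHashMap<String, BoundedCallStateCache.VersionedState> {
    public static class VersionedState {
        public final long version;
        public final byte[] state;
        public VersionedState(long version, byte[] state) {
            this.version = version;
            this.state = state;
        }
    }

    private final int maxEntries;

    public BoundedCallStateCache(int maxEntries) {
        super(16, 0.75f, true);          // access-order => LRU eviction
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, VersionedState> eldest) {
        return size() > maxEntries;      // keep the map bounded
    }
}
```

The version field is what allows the engine to ask the state tier "is my cached copy still current?" rather than always transporting the state.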
  • the SIP protocol (and thus call state) can be sensitive to the particular order of the messages arriving to/from the SIP server. For example, during a conference call SIP session, two users may pick up at the same time. In some embodiments, those messages may need to be processed synchronously (one at a time) in order to ensure the integrity and accuracy of the call state. In those embodiments, locking and versioning the call state can enable the near cache to ensure correctness of the state.
  • the near cache can be used in conjunction with fetching as well as writing to the state tier. For example, during a "get and lock" call state, before fetching from a state replica, the engine can first perform a search in the near cache. Versioning information about the cached version can be passed to the state replica(s) and the replica can respond by returning versioning information about the call state. If the version in the cache is up to date, the engine can then read the call state from the near cache while still locking that call state in the state tier. Thus, while locking and versioning information are passed between the engine and the state tiers, the engine may not need to transport the call state itself from the state tier and may save on serializing and de-serializing the data.
  • the engine can pass the version to the state tier when it executes a lock and get. Then, the lock and get can return the call state from the state tier if the version is out of date; otherwise, it can be readily available from the cache.
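The lock-and-get exchange described above can be sketched as follows. The `StateTier` interface here is a hypothetical stand-in for the RPC the engine performs against a state replica; the real wire protocol is not shown in this document:

```java
// Hedged sketch of the "lock and get" version exchange.
public class LockAndGetSketch {
    public interface StateTier {
        // Locks the call state; returns the serialized state only when
        // cachedVersion is out of date, otherwise signals "use your copy".
        Result lockAndGet(String callId, long cachedVersion);
    }

    public static class Result {
        public final long currentVersion;
        public final byte[] stateOrNull;   // null => cached copy is current
        public Result(long v, byte[] s) { currentVersion = v; stateOrNull = s; }
    }

    // Engine-side logic: prefer the near cache copy when its version matches.
    public static byte[] getState(StateTier tier, String callId,
                                  long cachedVersion, byte[] cachedState) {
        Result r = tier.lockAndGet(callId, cachedVersion);
        if (r.stateOrNull == null) {
            return cachedState;            // version up to date: no transport cost
        }
        return r.stateOrNull;              // stale: use bytes from the state tier
    }
}
```

Note that the state stays locked in the state tier either way; only the payload transfer is skipped on a version match.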
  • the engine server can save call state and versioning information in the near cache before writing the state to the replicas.
  • the state tier can transmit the call state bytes, but the state can be retrieved from the cache (assuming a proper version), saving on the de-serialization costs.
  • the near cache can be integrated with handling of the timer objects as discussed in further detail below. For example, when timers fire and the engine tier may need call state in order to process the message specified by the timer, that state can be readily available in the near cache. In this manner, the engine can also save on the data transport costs during the execution of various timer objects.
  • FIGURE 4A is an exemplary flow diagram of the near cache functionality, in accordance with various embodiments. Although this figure depicts functional steps in a particular sequence for purposes of illustration, the process is not necessarily limited to this particular order or steps. One skilled in the art will appreciate that the various steps portrayed in this figure can be changed, omitted, rearranged, performed in parallel or adapted in various ways.
  • a cluster network of computers can maintain an engine tier and a state tier distributed thereon.
  • the engine tier can handle the processing of various messages and store mainly short-lived objects to be employed thereby.
  • the state tier can store the state associated with a SIP message, including long-lived objects which may be used in processing the message.
  • a near cache can be maintained in the engine tier, in order to store a portion of the state data that is stored on the state tier. This portion of the state can be used when processing messages that frequently use the state. For example, during an initial call setup, the SIP server may receive a high period of message activity for one call, where each message can use the state data from the state tier. Rather than accessing it from the state tier upon each message, it may be helpful to maintain a local copy on the engine tier in the near cache.
  • a SIP communication message can be received by the load balancer in the cluster network.
  • the transmission of the message can come from various devices or software, such as a cellular phone, a wireless device, a laptop computer, an application, or can be specified by various timers.
  • the load balancer can then distribute the SIP message to an appropriate engine server node in the engine tier.
  • the load balancer can be a hardware device whose primary goal is to provide a single IP address to the message clients and to distribute the incoming traffic to the engine tier.
  • the engine server can determine whether the state needed to process the message is available in the near cache. If the state is available, the engine node can then check if the version currently in the near cache is up to date, as illustrated in step 410. This may be useful for keeping the state data consistent across the state tier and the near cache.
  • the engine server can lock the state data in the state tier. This can be useful for synchronously processing incoming messages and in order to ensure the accuracy of the state, as previously discussed.
  • the version in the near cache can then be accessed and employed by the engine tier in processing the message, as illustrated in step 418.
  • the engine tier may then decide to retrieve the state from the state tier. The state data can be locked first, as illustrated in step 412, and the data can then be retrieved from the state tier and transported to the engine tier to be used there, as illustrated in step 414.
  • Such retrieval and transporting of data can be costly, as previously discussed.
  • the near engine cache can improve latency by reducing the time taken for serializing, transporting and deserializing the state by having a local version on the engine tier.
  • the steps illustrated herein can be rearranged, omitted, combined, or new steps can be added as well.
  • the engine tier can send a lock and get message to the state tier along with the version of the state in the near cache. The state tier can then respond by sending the state if the version is expired; otherwise the engine tier can use the version in the near cache.
  • Other such implementations are also possible and well within the scope of the invention
  • FIGURE 4B is an exemplary flow diagram of the engine tier message processing, in accordance with various embodiments.
  • this figure depicts functional steps in a particular sequence for purposes of illustration; the process is not necessarily limited to this particular order or steps.
  • One skilled in the art will appreciate that the various steps portrayed in this figure can be changed, omitted, rearranged, performed in parallel or adapted in various ways
  • the engine tier can be responsible for processing various messages.
  • an engine node can receive an incoming message from the load balancer or can receive directions to send a message from the state tier.
  • the engine node can then gain access to the state needed to process the message, as previously discussed
  • the engine node can save the state that was used in the near engine cache, as illustrated in step 422. This may include updating the version in the cache as well as updating the state data itself. The state data can then be written to a state replica in the appropriate partition of the state tier, as illustrated in step 424. If failover is desired, that state can also be duplicated across other state replicas in the partition, as illustrated in step 426. At this point, as illustrated in step 428, the piece of call state can be unlocked within the state tier so that other engine nodes processing other messages that may need that same state can use it accordingly. This can help to ensure synchronous call and message processing as described in further detail below.
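The write path in steps 422-428 might look roughly like this; the `Replica` interface and the method names are assumptions for illustration, and the unlock step is a placeholder for the real RPC back to the state tier:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Simplified sketch of the write path: update the near cache, write to a
// state replica, duplicate for failover, then unlock the call state.
public class WritePathSketch {
    public interface Replica { void write(String callId, byte[] state); }

    public static List<String> steps = new ArrayList<>();

    public static void saveAndUnlock(String callId, byte[] state,
                                     Map<String, byte[]> nearCache,
                                     Replica primary, List<Replica> backups) {
        nearCache.put(callId, state);          // step 422: refresh near cache
        primary.write(callId, state);          // step 424: write to a replica
        for (Replica b : backups) {            // step 426: duplicate for failover
            b.write(callId, state);
        }
        unlock(callId);                        // step 428: let other engines in
    }

    private static void unlock(String callId) {
        steps.add("unlocked " + callId);       // placeholder for the real unlock RPC
    }
}
```

The ordering matters: the near cache is refreshed before the unlock, so a subsequent message for the same call leg on this engine sees a current version.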
  • FIGURE 4C is an exemplary flow diagram of tuning the performance of the near engine cache, in accordance with various embodiments.
  • this figure depicts functional steps in a particular sequence for purposes of illustration, the process is not necessarily limited to this particular order or steps
  • One skilled in the art will appreciate that the various steps portrayed in this figure can be changed, omitted, rearranged, performed in parallel or adapted in various ways
  • the performance of the near engine cache within the SIP server can be continually monitored. Similarly, as illustrated in step 432, the latency caused by various garbage collection algorithms can also be monitored. For example, monitoring can be performed by running varying amounts of call flow traffic and applications on the SIP server and measuring the time taken to process that traffic.
  • a system administrator may implement an assortment of tools in order to monitor performance and latency, such as a counter of hits to the near cache, a proportion of those hits that return a current version, time intervals during which execution of various threads is halted by the garbage collector, average time taken to process a message, as well as various other tools.
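One concrete way to observe garbage-collection overhead from inside the JVM, as a rough stand-in for the latency monitoring described above, is the standard `java.lang.management` API. This is ordinary JDK functionality, not a SIP-server-specific tool:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Reports cumulative GC activity since JVM start, across all collectors.
public class GcMonitor {
    // Total time (ms) all collectors have spent in garbage collection.
    public static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();   // -1 if undefined for this collector
            if (t > 0) total += t;
        }
        return total;
    }

    // Total number of collections performed by all collectors.
    public static long totalGcCount() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) total += c;
        }
        return total;
    }
}
```

Sampling these totals before and after a traffic run gives the administrator a direct measure of how much wall-clock time was lost to collection.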
  • an administrator can tune the performance of the SIP server and the near engine cache. For example, in step 434, the size of the near cache can be adjusted to suit the particular network and call flow. Similarly, the expiration of objects in the near cache can be adjusted to be longer or shorter lived.
  • the size of the Java Virtual Machine (JVM) heap can be adjusted so as to reduce garbage collection latency
  • the JVM heap is typically where the objects of a Java application reside. The JVM heap is a repository for live objects, dead objects and free memory.
  • the JVM heap size can determine how long or how often the JVM will perform garbage collection. In one embodiment, if you set a large heap size, garbage collection may occur less frequently but can take longer to finish. Similarly, smaller heap sizes can speed up the garbage collection but may cause it to occur more frequently. Adjusting the size of the JVM heap can help to set the most favorable performance of the SIP server.
  • in step 438, the JVM ratio of when objects should move from the new generation heap (nursery) to the older generation heap can be adjusted.
  • the JVM heap can store short-lived objects in the new generation heap and the long-lived objects in the old generation heap.
  • the size of these heaps can be similarly adjusted, as illustrated in step 440, in order to maximize performance
  • Further adjustments can also include changing the storage of objects in the near cache to an array of bytes which can be deserialized on demand, as illustrated in step 442.
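The byte-array option in step 442 could be sketched with standard Java serialization. `LazyState` is an illustrative name, not the product's class; the point is that the cache then holds only a flat `byte[]`, which is cheap for the garbage collector to trace, at the price of a deserialization on each access:

```java
import java.io.*;

// Keep the cached call state serialized; deserialize only on demand.
public class LazyState {
    private final byte[] bytes;            // form received from the state tier

    public LazyState(byte[] bytes) { this.bytes = bytes; }

    public static LazyState fromObject(Serializable o) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return new LazyState(bos.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Slower per access, but the cache entry itself is GC-friendly.
    public Object get() {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```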
  • the adjusting of various factors discussed above can be repeated, arranged, interrupted or omitted as performance of the SIP server is monitored. As an illustration, a system administrator can adjust one of the parameters discussed above, monitor performance, adjust another parameter, monitor any change in performance, and so on. In various embodiments, this can enable an administrator to determine the optimal or near-optimal performance of the near cache and the SIP server.
  • These performance settings may differ across the various organizations that implement the SIP server, due to factors such as call flow volume, size of the cluster network, amount of data processed, as well as a multitude of other factors.
  • the methodology illustrated in FIGURE 4C can help the organization improve its efficiency by adjusting the various factors influencing the SIP server.
  • FIGURE 5 is an exemplary illustration of a simplified call flow in a typical SIP communication session, in accordance with various embodiments.
  • this figure depicts functional steps in a particular sequence for purposes of illustration, the process is not necessarily limited to this particular order or steps.
  • One skilled in the art will appreciate that the various steps portrayed in this figure can be changed, omitted, rearranged, performed in parallel or adapted in various ways,
  • a back to back user agent (B2BUA) 500 having a running SIP server thereon can take the place of being an intermediary between the communications sent between various users. This can be done for purposes of controlling the call and message flow between user agent 1 502 and user agent 2 504 and in order to prevent any unwanted behavior and messages (e.g. spamming, hacking, viruses, etc.) from being sent to the user agent device. It should be noted that although user agent 1 502 and user agent 2 504 are illustrated as telephones in FIGURE 5, the SIP messages can come from various other sources as well.
  • the user agent can also be a cell phone, a wireless device, a laptop, an application or any other component that can initiate a SIP type of communication.
  • FIGURE 5 illustrates communications between two user agents (502, 504), there can be more such user agents taking part of a single communication session. For example, during a conference call, there may be 20 or 30 user agents for all attendees of the conference, each of which could send SIP messages to the B2BUA 500 and receive transmissions back therefrom.
  • a telephone call can be set up between user agent 1 502 and user agent 2 504 via the use of the SIP server.
  • the first message sent from user agent 1 502 to the SIP server on the B2BUA 500 can be an invite message, requesting to set up a telephone call with user agent 2 504.
  • the invite message can be received by the load balancer 202 of the SIP server and it can be directed to an engine in the engine tier 210 for processing.
  • the engine tier (e.g. an application executing thereon) can then perform logic for determining various factors associated with the call, such as determining whether user agent 1 502 is allowed to make the type of call attempted to be initiated, determining whether the callee that will be contacted is properly identified, as well as any other logic that the server may need to calculate before attempting to set up a telephone call.
  • the engine can then generate state around the fact that a call is being set up, including generating the proper long-lived and short-lived objects associated with the messages, as previously discussed.
  • the engine can also determine how to find the target of the call (i.e. user agent 2 504) and the right path to route the message to the callee.
  • the SIP server can send a "100 trying" message back to user agent 1 502, indicating that it has received the invite message and that it is in the process of handling it.
  • the "100 trying" .message is part of the SIP protocol definition and can be used by a server In order to stop the user agent from re-transmitting the invite request.
  • the user agent may have interference which might cause an interruption or loss of various messages. Therefore, the SIP protocol defines various re-transmission schemes in order to handle such mobility and interruptions. Messages such as "100 trying," "180 ringing," and "200 OK" are just some of the examples of messages defined in SIP for handling communication.
  • the SIP server can then send an invite message to the user agent 2 504 and can receive back a "180 ringing" message, indicating that user agent 2 504 has received the invitation and is now waiting for a user to answer.
  • the SIP server engine tier can then transmit the "180 ringing" message back to user agent 1 502.
  • user agent 2 504 can then send a "200 OK" message to the SIP server, and the server can transmit that message to user agent 1 502.
  • the user agent 1 502 can send an acknowledgement ("Ack" message) to the SIP server which can be transmitted along to user agent 2 504, and at this point a sound transfer conversation can be set up between the two user agents.
  • This sound transfer can be implemented via real-time transport protocol (RTP) on a media server.
  • either user agent can choose to terminate the call by sending a "Bye” message.
  • user agent 1 502 terminates the call by sending a "Bye” message to the SIP server which sends it off to user agent 2 504.
  • the SIP server can transmit that message to user agent 1 and the conversation can be truly ended.
  • the vertical lines such as those extending downward from the user agents 502, 504 and the B2BUA 500 can each illustrate and be referred to as a single call leg.
  • the call flow for each call leg may be time sensitive as some messages should be received or sent before others can be initiated.
  • the user agent A 502 may continue to re-transmit the initial invite message until it receives a "100 trying" message from the B2BUA 500. As such, in some cases certain messages may need to be processed synchronously while others may be allowed to process in parallel. It should be noted that this illustration of a call may be overly simplified for purposes of clarity. For example, there can be various other message transmissions (not illustrated), such as authentication messages for caller/callee.
  • sequences of messages exchanged between the SIP server and the user agents for controlling the flow of the call can be controlled by various timer objects residing on the SIP server.
  • the SIP server will typically forward that invite to another user agent and wait for a response. If no response is received within a period of time (e.g. a number of milliseconds), then the invite message may need to be retransmitted to the second user agent because it may be assumed that the user agent did not receive the first message.
  • This type of re-transmission can be controlled by the protocol timer objects which may be residing in the state tier.
  • an initial T1 timer value of 500 milliseconds can control the retransmission interval for the invite request and responses and can also set the value of various other timers.
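For illustration, the doubling retransmission schedule that the T1 timer seeds can be computed as below. The retransmit count and the absence of any cap here are simplifications of the actual SIP timer rules:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a T1-seeded retransmission schedule: the interval starts at
// T1 (500 ms by default) and doubles after each unanswered send.
public class RetransmitSchedule {
    public static List<Long> intervals(long t1Millis, int retransmits) {
        List<Long> out = new ArrayList<>();
        long interval = t1Millis;
        for (int i = 0; i < retransmits; i++) {
            out.add(interval);
            interval *= 2;       // double after each unanswered send
        }
        return out;
    }
}
```

With T1 = 500 ms this yields waits of 500, 1000, 2000, ... milliseconds between resends.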
  • there can also be timer objects which can be executing on the level of the entire call. For example, if after a specified period of time nothing is heard back from either user agent, the entire call may be purged from the system. This specified period of time can also be controlled by firing a timer object.
  • state tier instances queue and maintain a complete list of SIP protocol timers and application timers associated with each call
  • Engine tier servers can periodically poll the partitions of the state tier to determine which timers have expired given the current time. In order to avoid contention on the timer tables, multiple engine tier polls to the state tier can be staggered.
  • the engine tier can then process the expired timers using threads in the sip.timer.Default execute queue.
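The staggered polling described above might be sketched with a scheduled executor, with each engine's first poll offset by a fraction of the poll period. The poll body is a placeholder for the real expired-timer query against the state tier:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Each engine polls on the same period but with a distinct initial
// offset, so the polls do not all hit the timer tables at once.
public class StaggeredPoller {
    // Per-engine initial delay, spreading engineCount polls evenly
    // across one poll period.
    public static long offsetFor(int engineIndex, int engineCount, long periodMillis) {
        return (periodMillis * engineIndex) / engineCount;
    }

    public static ScheduledExecutorService start(int engineIndex, int engineCount,
                                                 long periodMillis, Runnable poll) {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleAtFixedRate(poll,
                                 offsetFor(engineIndex, engineCount, periodMillis),
                                 periodMillis, TimeUnit.MILLISECONDS);
        return exec;
    }
}
```

For four engines polling every 100 ms, the offsets come out as 0, 25, 50 and 75 ms.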
  • the processing of the timer objects can be executed by the engine server as determined by the state tier server.
  • the state tier can tell engine A to execute the first half of all due timer objects (e.g. 1-100) and tell engine B to execute the other half (e.g. 101-200).
  • the state tier can also simultaneously push the state onto the engine, since the state may need to be employed in executing the timer objects.
  • the engines can then process the timer objects (e.g. by sending appropriate messages, ending appropriate calls) and can later again poll the state tier for which timers have become due.
  • When used with the near cache, the state data may not need to be pushed onto the engine server since that data may already be available in the cache. Thus, when processing timers, the timers can be fetched from the state tier; however, upon the timer firing, the engine can fetch the call state using the cache. Further performance optimization can be obtained by changing the selection of timers to give affinity to the engine holding the cache for a particular call. Thus, the timers which are going to be executed can be sent to the appropriate engines which have the proper call state in the cache thereon.
  • it may be preferable to synchronize system server clocks to a common time source (e.g. within a few milliseconds) in order to achieve maximum performance.
  • an engine tier server with a system clock that is significantly faster than other servers may process more expired timers than the other engine tier servers. In some situations this may cause retransmits to begin before their allotted time, and thus care may need to be taken to ensure against it.
  • the SIP Servlet API can provide a timer service to be used by applications. There can be a TimerService interface which can be retrieved as a servlet context attribute.
  • the TimerService can define a "createTimer(SipApplicationSession appSession, long delay, boolean isPersistent, java.io.Serializable info)" method to start an application level timer.
  • SipApplicationSession can be implicitly associated with the timer.
  • when the timer fires, an application-defined TimerListener is invoked and a ServletTimer object is passed up, through which the SipApplicationSession can be retrieved, which provides the right context of the timer expiry.
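The timer-expiry pattern can be illustrated with simplified stand-ins for the SIP Servlet API types. The real `TimerService`, `ServletTimer` and `TimerListener` live in `javax.servlet.sip` and are supplied by the container; the minimal interfaces below exist only to make the sketch self-contained:

```java
import java.io.Serializable;

// Self-contained sketch of the application-timer pattern: the listener
// recovers its context from the fired timer's application session.
public class AppTimerSketch {
    interface SipApplicationSession { }
    interface ServletTimer {
        SipApplicationSession getApplicationSession();
        Serializable getInfo();
    }
    interface TimerListener { void timeout(ServletTimer timer); }

    static class ReinviteListener implements TimerListener {
        String lastInfo;
        public void timeout(ServletTimer timer) {
            SipApplicationSession session = timer.getApplicationSession();
            // In a real application, the session provides the call context.
            lastInfo = String.valueOf(timer.getInfo()) + "@" + (session != null);
        }
    }

    // Simulate the container firing a timer that was created with
    // createTimer(appSession, delay, isPersistent, info).
    public static String fire(Serializable info) {
        SipApplicationSession session = new SipApplicationSession() { };
        ServletTimer timer = new ServletTimer() {
            public SipApplicationSession getApplicationSession() { return session; }
            public Serializable getInfo() { return info; }
        };
        ReinviteListener l = new ReinviteListener();
        l.timeout(timer);
        return l.lastInfo;
    }
}
```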
  • the engine tier servers continually access the state tier replicas in order to retrieve and write call state data.
  • the engine tier nodes can also detect when a state tier server has failed or become disconnected. For example, in one embodiment, when an engine cannot access or write call state data for some reason (e.g. the state tier node has failed or become disconnected), then the engine can connect to another replica in the partition and retrieve or write data to that replica. The engine can also report that failed replica as being offline. This can be achieved by updating the view of the partition and data tier such that other engines can also be notified about the offline state tier server as they access state data.
  • Additional failover can also be provided by use of an echo server running on the same machine as the state tier server.
  • the engines can periodically send heartbeat messages to the echo server, which can continually send responses to each heartbeat request. If the echo server fails to respond for a specified period of time, the engines can assume that the state tier server has become disabled and report that state server as previously described. In this manner, even quicker failover detection is provided, since the engines can notice failed servers without waiting for the time that access is needed and without relying on the TCP protocol's retransmission timers to diagnose a disconnection.
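The echo-server timeout logic might be sketched as a simple monitor; the grace period and the method names here are assumptions, not the product's implementation:

```java
// If no echo response arrives within the grace period, the engine marks
// the state server offline; a later response clears the flag.
public class HeartbeatMonitor {
    private long lastResponseMillis;
    private final long timeoutMillis;
    private boolean offline;

    public HeartbeatMonitor(long timeoutMillis, long nowMillis) {
        this.timeoutMillis = timeoutMillis;
        this.lastResponseMillis = nowMillis;
    }

    public void onEchoResponse(long nowMillis) {
        lastResponseMillis = nowMillis;
        offline = false;
    }

    // Called periodically by the engine's heartbeat loop.
    public boolean check(long nowMillis) {
        if (nowMillis - lastResponseMillis > timeoutMillis) {
            offline = true;                 // report the state server as failed
        }
        return offline;
    }
}
```

Passing timestamps in explicitly (rather than calling `System.currentTimeMillis()` internally) keeps the failure-detection logic deterministic and testable.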
  • Failover can also be provided for the engine tier nodes.
  • the engine tier nodes can periodically poll the state tier nodes in order to determine which timer objects they need to execute. In turn, the state tier nodes can notice whenever an engine tier node has failed to poll. If a specified period of time elapses and the engine tier has not polled the state tier, the state server can then report that engine as unavailable (e.g. having failed or disconnected from the network). In this manner, failover can be implemented for both the state tier and the engine tier, thereby providing a more reliable and secure cluster for message processing.
  • the invention encompasses in some embodiments, computer apparatus, computing systems and machine-readable media configured to carry out the foregoing methods. In addition to an embodiment consisting of specifically designed integrated circuits or other electronics, the present invention may be conveniently implemented using a conventional general purpose or a specialized digital computer or microprocessor programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art.
  • the present invention includes a computer program product which is a storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention.
  • the storage medium can include, but is not limited to, any type of rotating media including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
  • the present invention includes software for controlling both the hardware of the general purpose/specialized computer or microprocessor, and for enabling the computer or microprocessor to interact with a human user or other mechanism utilizing the results of the present invention.
  • Such software may include, but is not limited to, device drivers, operating systems, and user applications.
  • included in the programming (software) of the general/specialized computer or microprocessor are software modules for implementing the teachings of the present invention, including, but not limited to, providing systems and methods for providing the SIP server architecture as discussed herein.
  • Various embodiments may be implemented using a conventional general purpose or specialized digital computer(s) and/or processor(s) programmed according to the teachings of the present disclosure, as can be apparent to those skilled in the computer art.
  • Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as can be apparent to those skilled in the software art.
  • the invention may also be implemented by the preparation of integrated circuits and/or by interconnecting an appropriate network of conventional component circuits, as can be readily apparent to those skilled in the art.
  • Embodiments can provide, by way of example and without limitation, services such as: VoIP services, including, without limitation the following features:
  • Do not disturb. The ability to specify policies around receiving calls. For example, all calls during office hours to be automatically forwarded to a mobile terminal, all calls during the night to be directed to voice mail, etc.
  • Locate me. This is advanced call forwarding. Rather than have all calls forwarded to a single location (e.g. voice mail) when the caller is unavailable, locate me can try multiple terminals in series or in parallel. For example, a user may have two office locations, a mobile, and a pager, and it may make sense to forward a call to both office locations first, then the pager, and then the mobile terminal. Locate me is another example of feature interaction.
  • Personal conferencing. A user could use an existing application (e.g. an IM client) to schedule a Web/audio conference to start at a certain time. Since the IM client already has personal profile information, the conferencing system sends out the Web conference link information either through IM and/or email to the participants. The phone contact information in the profile is used to automatically ring the participants at the time of the conference.
  • Lifetime number. This is the facility where a single virtual number can travel with a customer wherever they live. Even if they move, the old number continues to work, and reaches them at their new location. This is really the analog of static IP addresses in a phone network.
  • a typical example here is the need for applications that have a short lifetime, extremely high usage peaks within their lifetime, and immediacy. For example, voting on American Idol during the show or immediately afterwards has proved to be an extremely popular application.
  • Integrated applications including, without limitation, the following features:
  • the final class of applications is one that combines wireline and wireless terminal usage scenarios.
  • An example of an integrated application is the following: a mobile terminal user is on a conference call on their way to work. When he reaches his office, he enters a special key sequence to transfer the phone call to his office phone. The transfer happens automatically without the user having to dial in the dial-in information again. It is important to note here that this capability be available without the use of any specific support from the hand-set (a transfer button, for example).
  • Various embodiments include a computer program product which is a storage medium (media) having instructions stored thereon/in, which can be used to program a general purpose or specialized computing processor(s)/device(s) to perform any of the features presented herein.
  • the storage medium can include, but is not limited to, one or more of the following: any type of physical media including floppy disks, optical discs, DVDs, CD-ROMs, microdrives, magneto-optical disks, holographic storage, ROMs, RAMs, PRAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices.
  • Various embodiments include a computer program product that can be transmitted in whole or in parts, on one or more public and/or private networks, wherein the transmission includes instructions which can be used by one or more processors to perform any of the features presented herein. In various embodiments, the transmission may include a plurality of separate transmissions.
  • the present disclosure includes software for controlling both the hardware of general purpose/specialized computer(s) and/or processor(s), and for enabling the computer(s) and/or processor(s) to interact with a human user or other mechanism utilizing the results of the present invention.
  • Such software may include, but is not limited to, device drivers, operating systems, execution environments/containers, user interfaces and applications.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Telephonic Communication Services (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The SIP server can be comprised of an engine tier and a state tier distributed on a cluster network environment. The engine tier can send, receive and process various messages. The state tier can maintain in-memory state data associated with various SIP sessions. A near cache can be residing on the engine tier in order to maintain a local copy of a portion of the state data contained in the state tier. Various engines in the engine tier can determine whether the near cache contains a current version of the state needed to process a message before retrieving the state data from the state tier. Accessing the state from the near cache can save on various latency costs such as serialization, transport and deserialization of state to and from the state tier. Furthermore, the near cache and JVM can be tuned to further improve performance of the SIP server.

Description

ENGINE NEAR CACHE FOR REDUCING LATENCY IN A TELECOMMUNICATIONS ENVIRONMENT
COPYRIGHT NOTICE
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
CLAIM OF PRIORITY
This application claims priority to the following United States Applications, each of which is incorporated herein by reference:
United States Provisional Patent Application No. 60/801,083, entitled ENGINE NEAR CACHE FOR REDUCING LATENCY IN A TELECOMMUNICATIONS ENVIRONMENT, by Anno R. Langen et al., filed May 16, 2006 (Attorney Docket No. BEAS-02062US0);
United States Patent Application No. 11/748,791, entitled ENGINE NEAR CACHE FOR REDUCING LATENCY IN A TELECOMMUNICATIONS ENVIRONMENT, by Anno R. Langen et al., filed May 15, 2007 (Attorney Docket No. BEAS-02062US1);
United States Patent Application No. 11/748,767, entitled HITLESS APPLICATION UPGRADE FOR SIP SERVER ARCHITECTURE, by Anno R. Langen et al., filed May 15, 2007 (Attorney Docket No. BEAS-02061US1);
United States Patent Application No. 11/378,188, entitled SYSTEM AND METHOD FOR MANAGING COMMUNICATIONS SESSIONS IN A NETWORK, by Reto Kramer, et al., filed on March 17, 2006 (Attorney Docket No. BEAS-1744US1);
United States Patent Application No. 11/384,056, entitled SYSTEM AND METHOD FOR A GATEKEEPER IN A COMMUNICATIONS NETWORK, by Reto Kramer et al., filed on March 17, 2006 (Attorney Docket No. BEAS-1962US1);
United States Provisional Patent Application No. 60/801,091, entitled SIP AND HTTP CONVERGENCE IN NETWORK COMPUTING ENVIRONMENTS, by Anno R. Langen et al., filed on May 16, 2006 (Attorney Docket No. BEAS-2060US0);
United States Provisional Patent Application No. 60/800,943, entitled HITLESS APPLICATION UPGRADE FOR SIP SERVER ARCHITECTURE, by Anno R. Langen et al., filed on May 16, 2006 (Attorney Docket No. BEAS-2061US0);
United States Patent Application No. 11/434,022, entitled SYSTEM AND METHOD FOR CONTROLLING DATA FLOW BASED UPON A TEMPORAL POLICY, by Narendra Vemula et al., filed on May 15, 2006 (Attorney Docket No. BEAS-2064US0);
United States Patent Application No. 11/434,024, entitled SYSTEM AND METHOD FOR CONTROLLING ACCESS TO LEGACY PUSH PROTOCOLS BASED UPON A POLICY, by Bengt-Inge Jakobsson et al., filed on May 15, 2006 (Attorney Docket No. BEAS-2066US0);
United States Patent Application No. 11/434,010, entitled SYSTEM AND METHOD FOR CONTROLLING ACCESS TO LEGACY MULTIMEDIA MESSAGE PROTOCOLS BASED UPON A POLICY, by Andreas E. Jansson, filed on May 15, 2006 (Attorney Docket No. BEAS-2067US0);
United States Patent Application No. 11/434,025, entitled SYSTEM AND METHOD FOR CONTROLLING ACCESS TO LEGACY SHORT MESSAGE PEER-TO-PEER PROTOCOLS BASED UPON A POLICY, by Andreas E. Jansson, filed on May 15, 2006 (Attorney Docket No. BEAS-2068US0);
United States Patent Application No. 11/432,934, entitled SYSTEM AND METHOD FOR SHAPING TRAFFIC, by Jan Thomas Svensson, filed on May 12, 2006 (Attorney Docket No. BEAS-2070US0).
FIELD OF THE INVENTION
The current invention relates generally to managing telecommunications and more particularly to providing a near cache for reducing latency in a cluster network environment.
BACKGROUND
Conventionally, telecommunications and network infrastructure providers have relied on often decades-old switching technology to provide routing for network traffic. Businesses and consumers, however, are driving industry transformation by demanding new converged voice, data and video services. The ability to meet these demands can often be limited by existing IT and network infrastructures that are closed, proprietary and too rigid to support these next generation services. As a result, telecommunications companies are transitioning from traditional, circuit-switched Public Switched Telephone Networks (PSTN), the common wired telephone system used around the world to connect any one telephone to another telephone, to Voice Over Internet Protocol (VoIP) networks. VoIP technologies enable voice communication over "vanilla" IP networks, such as the public Internet. Additionally, a steady decline in voice revenues has resulted in heightened competitive pressures as carriers vie to grow data/service revenues and reduce churn through the delivery of these more sophisticated data services. Increased federal regulation, security and privacy issues, as well as newly emerging standards, can further compound the pressure.
However, delivering these more sophisticated data services has proved to be more difficult than first imagined. Existing IT and network infrastructures, closed proprietary network-based switching fabrics and the like have proved to be too complex and too rigid to allow the creation and deployment of new service offerings. Furthermore, latency has been an important issue in addressing the processing of telecommunications, as more and more users expect seemingly instantaneous access from their devices.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is an exemplary illustration of functional system layers in various embodiments.
FIG. 1B is another exemplary illustration of functional system layers in a communications platform embodiment.
FIG. 1C is an exemplary illustration of a SIP server deployed in a production environment, in accordance with various embodiments.
FIG. 2 is an exemplary illustration of the SIP server cluster architecture in accordance with various embodiments of the invention.
FIG. 3 is an exemplary illustration of a near cache in the SIP server cluster architecture in accordance with various embodiments of the invention.
FIG. 4A is an exemplary flow diagram of the near cache functionality, in accordance with various embodiments.
FIG. 4B is an exemplary flow diagram of the engine tier message processing, in accordance with various embodiments.
FIG. 4C is an exemplary flow diagram of timing the performance of the near engine cache, in accordance with various embodiments.
FIG. 5 is an exemplary illustration of a call flow in a typical SIP communication session, in accordance with various embodiments.
DETAILED DESCRIPTION
The invention is illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. References to embodiments in this disclosure are not necessarily to the same embodiment, and such references mean at least one. While specific implementations are discussed, it is understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the scope and spirit of the invention.
In the following description, numerous specific details are set forth to provide a thorough description of the invention. However, it will be apparent to those skilled in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail so as not to obscure the invention.
Although a diagram may depict components as logically separate, such depiction is merely for illustrative purposes. It can be apparent to those skilled in the art that the components portrayed can be combined or divided into separate software, firmware and/or hardware components. For example, one or more of the embodiments described herein can be implemented in a network accessible device/appliance such as a router.
Furthermore, it can also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
In accordance with embodiments, there is provided an engine-near cache in a session initiation protocol (SIP) server architecture for improving latency and reducing various time costs in processing messages. In various embodiments, the SIP server can be comprised of an engine tier and a state tier distributed on a cluster network environment. The engine tier can send, receive and process various messages. The state tier can maintain in-memory state data associated with various SIP sessions. A near cache can reside on the engine tier in order to maintain a local copy of a portion of the state data contained in the state tier. Various engines in the engine tier can determine whether the near cache contains a current version of the state needed to process a message before retrieving the state data from the state tier. Accessing the state from the near cache can save on various latency costs such as serialization, transport and deserialization of state to and from the state tier. Furthermore, the near cache can be tuned to further improve performance of the SIP server.
FIGURE 1A is an exemplary illustration of functional system layers in accordance with various embodiments. Although this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
A Session Initiation Protocol (SIP) Server 102 and a Network Gatekeeper 104 can comprise a portfolio of products that collectively make up the Communications Platform 100. The SIP Server 102 provides the Communications Platform 100 with a subsystem in which application components that interact with SIP-based networks may be deployed. The Network Gatekeeper 104 provides a policy-driven telecommunications Web services gateway that allows granular control over access to network resources from un-trusted domains.
A variety of shared and re-usable software and service infrastructure components comprise the Communications Platform 100. For example, an Application Server, such as the WebLogic™ Application Server by BEA Systems, Inc. of San Jose, California. This Application Server may be augmented and adapted for deployment in telecommunications networks, while providing many features and functionality of the WebLogic Server counterpart widely deployed in enterprise computing environments. Application Server embodiments for use in telecommunications applications can provide a variety of additional features and functionality, such as without limitation:
  • Optimized for Peak Throughput
  • Clustering for Scalability and High-Performance
  • Generalized for wide range of target platforms (HW/OS) support
  • Extensive deployment configuration options
  • Optimized for local management
  • Plug and play Enterprise Information Systems (EIS) support
Analogously, communications platform embodiments can provide a variety of additional features and functionality, such as without limitation:
  • Highly Deterministic Runtime Environment
  • Clustering for High-Availability (HA) and Scalability
  • Optimized for Telecom HW/OS/HA M/W platforms support (SAF, ATCA, HA M/W, etc.)
  • Hardened configuration
  • Optimized for Telecom NMS integration
  • Telecommunications network connectors and interfaces
  • Partitioning, replication and failover
FIGURE 1B is another exemplary illustration of functional system layers in a communications platform embodiment. Although this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
Communications platform 100 comprises a SIP Server (WLSS) 102 and a Network Gatekeeper (WLNG) 104. Tools for interacting with Web Services, such as a Web Service - Universal Description Discovery Interface (WS/UDDI) 110 and a Web Service - Business Process Execution Language (WS/BPEL) 112, may be coupled to the SIP Server 102 and the Network Gatekeeper 104 in embodiments. A log/trace and database 114 can assist with troubleshooting. In some deployments, the Communications Platform 100 can interface with an OSS/BSS system 120 via resource adapters 122. Such interfaces can provide access to billing applications 124, Operation, Administration, and Maintenance (OAM) applications 126 and others. A policy engine 128 can control the activities of the above-described components, which can be implemented in a scalable cluster environment (SCE) 130. A Communications Platform embodiment can provide an open, high performance, software based fault-tolerant platform that allows operators to maximize revenue potential by shortening time to market and significantly reducing per-service implementation and integration cost and complexity. The Communications Platform is suitable for use by Network Infrastructure Vendors, Network Operators and Communications Service Providers in multiple deployment scenarios ranging from fully IMS oriented network architectures to hybrid and highly heterogeneous network architectures. It is not restricted to use only in carrier networks, however, and may be deployed in Enterprise communications networks without restriction or extensive customization. When deployed in conjunction with an IP Multimedia Subsystem, the Communications Platform can serve in the role of an IMS SIP Application Server and offers Communications Service Providers an execution environment in which to host applications (such as the WebLogic Network Gatekeeper), components and standard service enablers.
FIGURE 1C is an exemplary illustration of a SIP server deployed in a production environment, in accordance with various embodiments. Although this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
As illustrated, the SIP server 102 can be used as a back-to-back user agent (B2BUA) 150 in a typical telecommunications environment. A B2BUA can take the place of an intermediary in communications between user agents 160, 162, including various cellular phones, wireless devices, laptops, computers, applications, and other components capable of communicating with one another electronically. The B2BUA 150 can provide multiple advantages, including controlling the flow of communication between user agents, enabling different user agents to communicate with one another (e.g., a web application can communicate with a cellular phone), as well as various security advantages. As an illustration, the user agents can transmit to the SIP server instead of communicating directly to each other, and thus malicious users can be prevented from sending spam and viruses, hacking into other user agent devices, and otherwise compromising security.
The SIP server 102 can be implemented as a Java Enterprise Edition application server that has been extended with support for the session initiation protocol (SIP) as well as other operational enhancements that allow it to meet the demanding requirements of the next generation protocol-based communication networks. In one embodiment, the SIP server 102 can include an Enterprise Java Beans (EJB) container 144, a Hyper Text Transfer Protocol (HTTP) servlet container 142, a SIP servlet container 140, various Java 2 Enterprise Edition (J2EE) services 146, and SIP 150 and HTTP 148 components. The SIP stack of the server can be fully integrated into the SIP servlet container 140 and can offer much greater ease of use than a traditional protocol stack. A SIP servlet Application Programming Interface (API) can be provided in order to expose the full capabilities of the SIP protocol in the Java programming language. The SIP servlet API can define a higher layer of abstraction than simple protocol stacks provide and can thereby free the developer from concern about the mechanics of the SIP protocol itself. For example, the developer can be shielded from syntactic validation of received requests, handling of transaction layer timers, generation of non-application related responses, generation of fully-formed SIP requests from request objects (which can involve correct preparation of system headers and generation of syntactically correct SIP messages) and handling of lower-layer transport protocols such as TCP, UDP or SCTP.
In one embodiment, the container is server software that hosts applications (i.e., contains them). In the case of a SIP container, it hosts SIP applications. The container can perform a number of SIP functions as specified by the protocol, thereby taking the burden off the applications. At the same time, the SIP container can expose the application to SIP protocol messages (via the SIP Servlet API) on which applications can perform various actions. Different applications can thus be coded and deployed to the container that provides various telecommunication and multimedia services.
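As a rough illustration of this hosting model, the plain-Java sketch below mimics how a container might dispatch incoming messages to application callbacks keyed by SIP method, leaving parsing and transaction handling to the container. This is a simplified, hypothetical model for illustration only: the real interface is the SIP Servlet API (javax.servlet.sip), and every class and method name here is invented.

```java
// Hypothetical, simplified model of container-to-servlet dispatch.
// The real SIP Servlet API (javax.servlet.sip) is richer; names here are invented.
abstract class SimpleSipServlet {
    // Called by the "container" for each request; routes by SIP method.
    final String service(String method, String callId) {
        switch (method) {
            case "INVITE": return doInvite(callId);
            case "BYE":    return doBye(callId);
            default:       return "501 Not Implemented";
        }
    }
    // Default behavior the application can override per method.
    protected String doInvite(String callId) { return "200 OK"; }
    protected String doBye(String callId)    { return "200 OK"; }
}

public class EchoServlet extends SimpleSipServlet {
    @Override
    protected String doInvite(String callId) {
        // Application logic sees the message; the container already handled
        // syntactic validation, timers and transport.
        return "180 Ringing";
    }
    public static void main(String[] args) {
        EchoServlet s = new EchoServlet();
        System.out.println(s.service("INVITE", "call-1")); // 180 Ringing
        System.out.println(s.service("BYE", "call-1"));    // 200 OK
    }
}
```

The point of the sketch is the division of labor: the container owns `service` and the protocol mechanics, while the application supplies only the `doInvite`/`doBye` reactions.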
FIGURE 2 is an exemplary illustration of the SIP server cluster architecture in accordance with various embodiments of the invention. Although this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means. For example, while FIGURE 2 shows Host A implementing both an engine node and a data node, this should not be construed as limiting the invention. In many cases, it can be preferable to distribute the engine node and data node onto separate host machines. Similarly, while FIGURE 2 illustrates two host machines, it is possible and even advantageous to implement many more such hosts in order to take advantage of the distribution, load balancing and failover that such a system can provide.
As illustrated, a message, such as a phone call request or some other transfer of data associated with SIP, can come into the cluster from the internet (such as over VoIP), phone, or some other type of network 200. This message can be received and handled by a load balancer 202, which can be responsible for distributing message traffic across the engines (such as engine node 1 216 and engine node 2 208) in the cluster. The load balancer can be a standard load balancing appliance hardware device and it is not necessary that it be SIP aware; there is no requirement that the load balancer support affinity between the engines 216,
208, and SIP dialogs or transactions. However, in alternative embodiments, certain advantages may be obtained by implementing a SIP-aware load balancer, as discussed in further detail below. Alternatively, the load balancer can be implemented as software that distributes the messages to the various engines. In the various embodiments, the primary goal of the load balancer 202 can be to provide a single public address that distributes incoming SIP requests to available servers in the SIP server engine tier 210. Such distribution of requests can ensure that the SIP server engines are fully utilized. The load balancer 202 can also be used for performing maintenance activities such as upgrading individual servers or applications without disrupting existing SIP clients.
In one embodiment, the SIP server can provide a two-tier cluster architecture model to handle the incoming messages. In this model, a stateless engine tier 210 can process all signaling traffic and can also replicate transaction and session state to the state tier 212 and its partitions 222. Each partition 222 can consist of any number of nodes (replicas) 218, 214 distributed across any number of hosts, such as host 1 220 and host 2 204, which can be implemented as computers linked in a cluster type network environment. The state tier 212 can be a peer-replicated Random Access Memory (RAM) store that maintains various data objects which can be accessed by the engine nodes in the engine tier. In this manner, engines can be provided a dual advantage of faster access to the data objects than retrieving data from a database while, at the same time, engines can be freed up from having to store the data onto the engine tier itself. This type of separation can offer various performance improvements. The state tier can also function as a lock manager where call state access follows a simple library book model (i.e., a call state can be checked out by one SIP engine at a time).
The engine tier 210 can be implemented as a cluster of SIP server instances that hosts the SIP servlets which provide various features to SIP clients. In one embodiment, the engine tier 210 is stateless, meaning that most SIP session state information is not persisted in the engine tier, but is obtained by querying the state tier 212, which can in turn provide replication and failover services for SIP session data. In alternative embodiments, the engine tier can have state maintained in a local near cache for improving latency.
The primary goal of the engine tier 210 can be to provide maximum throughput combined with low response time to SIP clients. As the number of calls or their duration increases, more server instances can be added to the engine tier to manage the additional load. It should be noted, however, that although the engine tier may include many such server instances, it can be managed as a single, logical entity. For example, the SIP servlets can be deployed uniformly to all server instances by targeting the cluster itself, and the load balancer need not maintain affinity between SIP clients and individual servers in the engine tier.
In various embodiments, the state tier 212 can be implemented as a cluster of SIP server instances that provides a high-performance, highly-available, in-memory store for maintaining and retrieving session state data for SIP servlets. This session data may be required by SIP applications in the SIP server engine tier 210 in order to process incoming messages. Within the state tier 212, session data can be managed in one or more partitions 222, where each partition manages a fixed portion of the concurrent call state. For example, in a system that uses two partitions, the first partition could manage one half of the concurrent call state (e.g., A-M) and the second partition can manage the other half (e.g., N-Z). With three partitions, each can manage a third of the call state, and so on. Additional partitions can be added as needed to manage large numbers of concurrent calls.
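One simple way to realize such a fixed division of call state across partitions is a stable hash of the call identifier, so that every engine independently computes the same owning partition. The sketch below is a hypothetical illustration under that assumption; the patent does not specify the mapping function, and the class and method names are invented.

```java
// Hypothetical sketch: mapping call IDs onto a fixed number of state-tier
// partitions so each partition manages a fixed slice of the concurrent call state.
public class CallStatePartitioner {

    // Stable mapping of a call ID to one of `partitionCount` partitions.
    public static int partitionFor(String callId, int partitionCount) {
        // Math.floorMod keeps the result non-negative even when hashCode() is negative.
        return Math.floorMod(callId.hashCode(), partitionCount);
    }

    public static void main(String[] args) {
        // The same call ID always lands on the same partition,
        // so any engine can locate the state for a given call.
        System.out.println(partitionFor("caller-A;callee-M", 2));
    }
}
```

Because the mapping is deterministic, no central directory is needed; adding partitions, however, changes the mapping, which is one reason real systems often use more elaborate schemes.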
In one embodiment, within each partition 222, multiple servers can be added to provide redundancy and failover should the other servers in the partition fail. When multiple servers participate in the same partition 222, those servers can be referred to as replicas because each server maintains a duplicate copy of the partition's call state. For example, nodes 218 and 214 of the partition 222 can be implemented as replicas. Furthermore, to increase the capacity of the state tier 212, the data can be split evenly across a set of partitions, as previously discussed. The number of replicas in the partition can be called the replication factor, since it determines the level of redundancy and strength of failover that it provides. For example, if one node goes down or becomes disconnected from the network, any available replica can automatically provide call state data to the engine tier.
Replicas 214, 218 can join and leave the partition 222, and each replica can serve in exactly one partition at a time. Thus, in one embodiment, the total available call state storage capacity of the cluster is a summation of the capacities of each partition 222.
In one embodiment, each partition 222 can be peer-replicated, meaning that clients perform all operations (reads/writes) to all replicas 218, 214 in the partition (wherein the current set of replicas in the partition is called the partition view). This can provide improved latency advantages over the more traditional synchronous "primary-secondary" architecture, wherein one store acts as a primary and the other nodes serve as secondaries. Latency is reduced because there is no wait for the second hop of primary-secondary systems. The peer-replicated scheme can provide better failover characteristics as well, since there does not need to be change propagation delay.
In one embodiment, the engine nodes 208, 216 can be responsible for executing the call processing. Each call can have a call state associated with it. This call state can contain various information associated with the call, such as the IDs of the caller and callee, where the caller is, what application is running on the callee, any timer objects that may need to fire in order to process the call flow (as discussed below), as well as any other data that may correlate to a call or a message. The state for each call can be contained in the state tier 212. The engine tier 210, on the other hand, could be stateless in order to achieve the maximum performance. In alternative embodiments, however, the engine tier can have certain amounts of state data stored thereon at various times.
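The peer-replicated write path described above, where the client writes to every replica in the partition view rather than to a primary that forwards to secondaries, can be sketched roughly as follows. This is a hypothetical in-process model with invented names, not the product's implementation; in particular, real replicas live on separate hosts and are reached over the network.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One replica's in-memory store (stands in for a remote state-tier node).
class Replica {
    private final Map<String, String> store = new ConcurrentHashMap<>();
    public void put(String callId, String state) { store.put(callId, state); }
    public String get(String callId) { return store.get(callId); }
}

// Hypothetical sketch of peer replication: the engine (client) writes call
// state to every replica in the current partition view in one hop, so there
// is no primary-to-secondary propagation delay.
public class PeerReplicatedPartition {
    private final List<Replica> view; // current set of replicas ("partition view")

    public PeerReplicatedPartition(List<Replica> view) { this.view = view; }

    // A write goes directly to all replicas in the view.
    public void putCallState(String callId, String state) {
        for (Replica r : view) r.put(callId, state);
    }

    // A read can be served by any replica that is still available.
    public String getCallState(String callId) {
        for (Replica r : view) {
            String s = r.get(callId);
            if (s != null) return s;
        }
        return null;
    }
}
```

Since every replica holds a full copy after each write, losing any single replica leaves the remaining members able to serve the call state, which is the failover property the text describes.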
In one embodiment, a typical message processing flow can involve locking/getting the call state, processing the message and then putting/unlocking the call state. The operations supported by the replicas for normal operations can include:
  • lock and get call state
  • put and unlock call state
  • lock and get call states with expired timers
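The "library book" check-out/check-in flow built from the first two operations above can be sketched as follows. This is an illustrative single-JVM model with invented names (the text mentions a "lockAndGetCallState" operation later on, but the rest of this shape is an assumption); a real state tier coordinates locks across remote replicas.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the library-book lock model: an engine checks a
// call state out of the state tier, processes the message, then checks the
// updated state back in. Only one engine may hold a given call state at a time.
public class StateTierStore {
    private final Map<String, String> states = new HashMap<>();
    private final Set<String> locked = new HashSet<>();

    // "lock and get call state": returns null if another engine holds the lock.
    public synchronized String lockAndGetCallState(String callId) {
        if (!locked.add(callId)) return null;       // already checked out
        return states.getOrDefault(callId, "");     // empty state for a new call
    }

    // "put and unlock call state": stores the updated state and releases the lock.
    public synchronized void putAndUnlockCallState(String callId, String state) {
        states.put(callId, state);
        locked.remove(callId);
    }
}
```

A typical engine-side use would be: check the state out with `lockAndGetCallState`, process the SIP message against it, then check it back in with `putAndUnlockCallState`, so concurrent engines never mutate the same call state at once.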
In various embodiments, the engine tier can maintain mainly short lived objects, and any long lived objects which may be needed for message processing can be stored on the state tier. This can provide improvements in latency during garbage collection. As an illustration, the Java Virtual Machine (JVM) garbage collector can safely and quickly remove the short lived objects from memory without interfering with the execution of various other threads which may be in the process of executing. The longer lived objects, on the other hand, are not as easily removed by the garbage collector (since they may be referenced and depended on by various entities) and thus in some cases, the JVM garbage collector may need to stop processing all threads in order to safely perform its garbage collection. This is due in part to the scoping of the short lived and long lived objects. Short lived objects typically exist in a different (more localized) memory scope than the long lived objects, which may be referenced by more entities. Thus, it can be more difficult for garbage collectors to ensure that every executing entity has finished using the long lived objects, and various threads are usually stopped in order to perform their regular garbage collection. This can introduce latency.
In order to deal with such issues, the engine tier can maintain mostly short lived objects. In cases where longer lived objects are needed by the engine tier, they can be retrieved from the state tier, used as short lived objects in the engine tier, and subsequently pushed back to the state tier. This can be advantageous in that garbage collection can cause less interference with thread execution in the engine tier.
In various embodiments, the state tier 212 can maintain call state in various data objects residing in the random access memory (RAM) of a computer. This can provide significant access speed advantages to the engine tier 210 over the use of a database. Alternatively, if latency is not an issue, call state can be maintained in a database or some other form of persistent store, which can be accessed (albeit more slowly) by the engine tier. State of various applications running on the SIP server can also be maintained on the state tier. Developers can be provided an API to allow their applications to access the state tier and to store various data thereon for later access by various applications. Alternatively, application state may be stored in a database.
FIGURE 3 is an exemplary illustration of the near cache implemented in the SIP server architecture, in accordance with various embodiments of the invention. Although this diagram depicts components as logically separate, such depiction is merely for illustrative purposes. It will be apparent to those skilled in the art that the components portrayed in this figure can be arbitrarily combined or divided into separate software, firmware and/or hardware. Furthermore, it will also be apparent to those skilled in the art that such components, regardless of how they are combined or divided, can execute on the same computing device or can be distributed among different computing devices connected by one or more networks or other suitable communication means.
As illustrated, the engine tier 300 can be comprised of multiple engine nodes, such as engine node A 310 and engine node B 316, that have SIP applications 314, 318 running thereon which provide services to various SIP clients 308. A separate state tier 302 cluster can manage state data, and the engine nodes can fetch and write state in the state tier as necessary. The state tier can include a number of partitions (such as partition A 306) which can have state replicas 322, 326 for maintaining duplicate state 324, 328 thereon. The engines can write call state data to multiple replicas in each partition in order to provide automatic failover should a state tier replica go offline. In one embodiment, the engine nodes are not entirely stateless, but implement a RAM-based near cache 312, 320 that maintains a portion of the call state 324, 328 locally, as well as in the state tier. When such a near cache is used, an engine tier server can first check its local cache for existing call state data when processing various messages. In one embodiment, if the cache contains the data needed by the engine server, and if the local copy is up to date (when compared to the state tier copy), the engine node can lock the call state in the state tier but read directly from its cache. This can improve response time performance for the request because the engine does not have to retrieve the call state data from a data tier server.
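The read path just described — lock the state in the state tier, but serve the bytes from the local cache when the local copy is current — can be sketched as below. The version-number comparison is an assumption made for illustration (the text only says the local copy is compared against the state tier copy), and all names are invented; the state tier here is modeled as a local map rather than remote replicas.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the near-cache read path: always take the lock at
// the state tier, but read from the local copy when it is up to date,
// skipping transport and deserialization.
public class NearCacheReader {

    // A versioned call-state entry (version checking is an illustrative assumption).
    public static class Entry {
        public final long version;
        public final Object state;
        public Entry(long version, Object state) { this.version = version; this.state = state; }
    }

    private final Map<String, Entry> nearCache = new HashMap<>();
    private final Map<String, Entry> stateTier; // stands in for remote replicas

    public NearCacheReader(Map<String, Entry> stateTier) { this.stateTier = stateTier; }

    public Object lockAndGet(String callId) {
        Entry remote = stateTier.get(callId);   // lock + current version (remote hop)
        if (remote == null) return null;
        Entry local = nearCache.get(callId);
        if (local != null && local.version == remote.version) {
            return local.state;                 // cache hit: no transfer/deserialization
        }
        nearCache.put(callId, remote);          // refresh the out-of-date local copy
        return remote.state;
    }
}
```

In a real deployment the lock and version check still cost one round trip, but a hit avoids shipping and deserializing the (much larger) call state itself.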
Retrieving call state from the state tier can involve various costs. One such cost is the time duration of the communication and transporting the state data between the engine node and a state replica. Another such cost is the time for serialization and de-serialization of the call state. In modern systems, serialization is used to transmit an object or data over the network as a series of bytes. De-serialization involves using these bytes on the receiving end in order to re-construct the object (or a copy thereof). As an illustration, the Java programming language provides automatic serialization and may require that the object be marked by implementing the java.io.Serializable interface. Java can then handle the serialization internally. In various embodiments, such serialization and de-serialization can introduce latency which in certain cases may be undesirable. For example, during an initial call set up, the SIP server can receive a flurry of initial messages from several SIP clients. It may be advantageous to maintain a local copy of the state on the engine server while handling this flurry of messages, instead of repeatedly accessing the state tier upon every message. Maintaining such a local copy can prevent the need to serialize and de-serialize the state data each time, since it does not need to be transported across the network.
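The serialize/transport/deserialize round trip described above can be made concrete with standard Java object serialization. The `CallState` class and its single field are invented stand-ins for a real call state object; only `java.io.Serializable` and the stream classes are the actual JDK API.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Sketch of the cost the near cache avoids: moving state to/from the state
// tier means turning the object into bytes and rebuilding a copy on the far side.
public class CallStateTransport {

    // A stand-in call state; implementing java.io.Serializable marks it
    // so Java can handle the serialization internally.
    static class CallState implements Serializable {
        private static final long serialVersionUID = 1L;
        String callerId;
        CallState(String callerId) { this.callerId = callerId; }
    }

    static byte[] serialize(CallState s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(s);                  // object -> bytes (sent over the wire)
        }
        return bos.toByteArray();
    }

    static CallState deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (CallState) ois.readObject(); // bytes -> reconstructed copy
        }
    }

    public static void main(String[] args) throws Exception {
        CallState copy = deserialize(serialize(new CallState("alice")));
        System.out.println(copy.callerId);       // a copy, not the same object
    }
}
```

Every lock/get and put/unlock against a remote replica pays both halves of this round trip; serving the object out of the near cache pays neither.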
Call state can be moved into an engine's local cache as needed to respond to SIP client 308 requests or to refresh out-of-date state data. If the cache is full when a new call state should be written to the cache, the least-used call state entry can be removed from the cache and the new entry written.
In various embodiments, the local cache can be further beneficial when a SIP-aware load balancer 304 is used to manage SIP client requests to the engine tier cluster. With a SIP-aware load balancer, all of the requests for one call leg can be directed to the same engine tier server, which can improve the effectiveness of the cache. For example, if the load balancer is not SIP-aware, subsequent messages/requests for the same call could be distributed to different engine tier servers, which may have different cache contents, and thus the performance benefit of the near cache can be inhibited. Even in such embodiments, however, some performance improvements can be realized as there should be at least some cache hits. On the other hand, when messages for the same call leg are distributed to the same engine node, it is more likely that the engine node has the state needed for the message stored locally in the near cache. In this manner, latency can be further improved.
In some embodiments, there may be a tension between using too large a near cache and reducing latency caused by garbage collection. Since objects in the near cache can be complex or long lived objects, it may be more difficult for the garbage collector to remove them in order to clean up the memory. This can introduce latency, as previously discussed. On the other hand, using the near cache can reduce the time costs of communicating, transporting, serializing and deserializing data. Therefore, it may be preferable to tune various factors such as the size of the cache, the JVM and its garbage collection to a proper balance in order to achieve maximum performance output. As an illustration, latency can be monitored as the maximum size of the near cache is adjusted. This can be used to determine the optimal size of the cache for a particular network. For example, a cache hits counter can be maintained and incremented whenever a "lockAndGetCallState" returns a non-null value from the cache. Further alternatives include experimenting with different sizes of the cache and expiration characteristics (such as the least-recently-used scheme) to help determine the recommended settings for different call rates/flows and different deployments, including engine/partition ratio and load balancer features.
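A hits counter of the kind just described can be sketched in a few lines. The class and method names below are illustrative, not part of the product API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative hit/miss counter for tuning the near cache; a hit is recorded
// whenever a lockAndGetCallState-style lookup returns a non-null value.
public class NearCacheStats {
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong lookups = new AtomicLong();

    public void record(Object cacheResult) {
        lookups.incrementAndGet();
        if (cacheResult != null) hits.incrementAndGet();
    }

    // Fraction of lookups served from the cache; watch this while adjusting
    // the maximum cache size to find a good setting for a given network.
    public double hitRatio() {
        long n = lookups.get();
        return n == 0 ? 0.0 : (double) hits.get() / n;
    }
}
```

Plotting the hit ratio against cache size and garbage collection pause times gives the administrator the data needed to pick the balance point discussed above.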
Another option may be to store the call state as a byte array (the form in which it is received from the state tier) and deserialize it on demand. This may cause slower individual access, but may decrease garbage collection pauses. In various embodiments, a proper balance can be determined by a system administrator or other technical person in order to maximize the performance of the near cache and the SIP server.
The cache can be an object cache residing on each of the engine nodes in the engine tier, and it can contain a portion of the same information that is contained in the state tier. In some embodiments, the near cache can be implemented as a bounded map of call states indexed by call ID. In various embodiments, call states, both in the near cache and in the state tier, can be associated with a version. This may be useful in processing synchronous message interaction between several SIP clients when the call state cannot be updated simultaneously. In some cases, the SIP protocol (and thus call state) can be sensitive to the particular order of the messages arriving to/from the SIP server. For example, during a conference call SIP session, two users may pick up at the same time. In some embodiments, those messages may need to be processed synchronously (one at a time) in order to ensure the integrity and accuracy of the call state. In those embodiments, locking and versioning the call state can enable the near cache to ensure correctness of the state.
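A bounded map of call states indexed by call ID, with least-used eviction as described earlier, can be sketched with a `LinkedHashMap` in access order. The class name is illustrative, not the actual product type:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a bounded near cache: a map of call states indexed by call ID.
// Access-ordered LinkedHashMap makes the eldest entry the least recently used.
public class NearCache<V> extends LinkedHashMap<String, V> {
    private final int maxEntries;

    public NearCache(int maxEntries) {
        super(16, 0.75f, true); // true = access order, for least-recently-used iteration
        this.maxEntries = maxEntries;
    }

    // Evict the least-used call state entry once the bound is exceeded.
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, V> eldest) {
        return size() > maxEntries;
    }
}
```

For example, with a bound of two entries, inserting a third call state silently evicts whichever of the first two was touched least recently.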
In one embodiment, the near cache can be used in conjunction with fetching as well as writing to the state tier. For example, during a "get and lock" call state, before fetching from a state replica, the engine can first perform a search in the near cache. Versioning information about the cached version can be passed to the state replica(s), and the replica can respond by returning versioning information about the call state. If the version in the cache is up to date, the engine can then read the call state from the near cache while still locking that call state in the state tier. Thus, while locking and versioning information are passed between the engine and the state tiers, the engine may not need to transport the call state itself from the state tier and may save on serializing and de-serializing the data. In lock and get message processing, the engine can pass the version to the state tier when it executes a lock and get. Then, the lock and get can return the call state from the state tier if the version is out of date; otherwise, it can be readily available from the cache. In put and unlock message processing, the engine server can save call state and versioning information in the near cache before writing the state to the replicas. In get and lock timers message processing, the state tier can transmit the call state bytes, but the state can be retrieved from the cache (assuming a proper version), saving on the de-serialization costs. In various embodiments, the near cache can be integrated with the handling of timer objects, as discussed in further detail below. For example, when timers fire and the engine tier may need call state in order to process the message specified by the timer, that state can be readily available in the near cache. In this manner, the engine can also save on the data transport costs during the execution of various timer objects.
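The versioned "lock and get" / "put and unlock" exchange above can be sketched as a single-process model. All names are illustrative, and the two maps stand in for what the real system spreads across engine and state tier servers with remote locking:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal single-process sketch of the versioned cache exchange.
// Not the actual product API; the maps model the two tiers.
public class VersionedCacheSketch {
    static final class Versioned {
        final long version;
        final byte[] state;
        Versioned(long version, byte[] state) { this.version = version; this.state = state; }
    }

    private final Map<String, Versioned> stateTier = new HashMap<>();
    private final Map<String, Versioned> nearCache = new HashMap<>();

    // "Lock and get": compare the cached version against the replica's; only
    // transport (and deserialize) the state when the cached copy is stale.
    public byte[] lockAndGet(String callId) {
        Versioned replica = stateTier.get(callId); // stands in for locking + version query
        if (replica == null) return null;
        Versioned cached = nearCache.get(callId);
        if (cached != null && cached.version == replica.version) {
            return cached.state; // up to date: read locally, no transport cost
        }
        nearCache.put(callId, replica); // stale or absent: fetch and refresh the cache
        return replica.state;
    }

    // "Put and unlock": save state and version in the near cache before
    // writing the state through to the replica.
    public void putAndUnlock(String callId, byte[] state) {
        Versioned old = stateTier.get(callId);
        Versioned updated = new Versioned(old == null ? 1 : old.version + 1, state);
        nearCache.put(callId, updated);
        stateTier.put(callId, updated);
    }
}
```

The key design point is that only version numbers, not state bytes, cross between the tiers when the cache is current.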
FIGURE 4A is an exemplary flow diagram of the near cache functionality, in accordance with various embodiments. Although this figure depicts functional steps in a particular sequence for purposes of illustration, the process is not necessarily limited to this particular order or steps. One skilled in the art will appreciate that the various steps portrayed in this figure can be changed, omitted, rearranged, performed in parallel or adapted in various ways.
As illustrated in step 402, a cluster network of computers can maintain an engine tier and a state tier distributed thereon. The engine tier can handle the processing of various messages and store mainly short lived objects to be employed thereby. The state tier can store the state associated with a SIP message, including long lived objects which may be used in processing the message.
In step 404, a near cache can be maintained in the engine tier in order to store a portion of the state data that is stored on the state tier. This portion of the state can be used when processing messages that frequently use the state. For example, during an initial call setup, the SIP server may receive a period of high message activity for one call, where each message can use the state data from the state tier. Rather than accessing it from the state tier upon each message, it may be helpful to maintain a local copy on the engine tier in the near cache.
In step 406, a SIP communication message can be received by the load balancer in the cluster network. The transmission of the message can come from various devices or software, such as a cellular phone, a wireless device, a laptop computer, an application, or can be specified by various timer objects which have fired. The load balancer can then distribute the SIP message to an appropriate engine server node in the engine tier. The load balancer can be a hardware device whose primary goal is to provide a single IP address to the message clients and to distribute the incoming traffic to the engine tier. In step 408, the engine server can determine whether the state needed to process the message is available in the near cache. If the state is available, the engine node can then check if the version currently in the near cache is up to date, as illustrated in step 410. This may be useful for keeping the state data consistent across the state tier and the near cache. In step 416, if there is a current version of the state data in the near cache, the engine server can lock the state data in
the state tier. This can be useful for synchronously processing incoming messages and in order to ensure the accuracy of the state, as previously discussed. The version in the near cache can then be accessed and employed by the engine tier in processing the message, as illustrated in step 418. On the other hand, if there is no state for the message in the near cache, or if the version stored in the near cache is out of date, the engine tier may then decide to retrieve the state from the state tier. The state data can be locked first, as illustrated in step 412, and the data can then be retrieved from the state tier and transported to the engine tier to be used there, as illustrated in step 414. Such retrieval and transporting of data can be costly, as previously discussed. Thus, for example, the near engine cache can improve latency by reducing the time taken for serializing, transporting and deserializing the state, by having a local version on the engine tier.
As noted, however, the steps illustrated herein can be rearranged, omitted, combined, or new steps can be added as well. For example, the engine tier can send a lock and get message to the state tier along with the version of the state in the near cache. The state tier can then respond by sending the state if the version is expired; otherwise, the engine tier can use the version in the near cache. Other such implementations are also possible and well within the scope of the invention.
FIGURE 4B is an exemplary flow diagram of the engine tier message processing, in accordance with various embodiments. Although this figure depicts functional steps in a particular sequence for purposes of illustration, the process is not necessarily limited to this particular order or steps. One skilled in the art will appreciate that the various steps portrayed in this figure can be changed, omitted, rearranged, performed in parallel or adapted in various ways.
As illustrated in step 420, the engine tier can be responsible for processing various messages. For example, an engine node can receive an incoming message from the load balancer or can receive directions to send a message from the state tier. The engine node can then gain access to the state needed to process the message, as previously discussed.
After processing the message, the engine node can save the state that was used in the near engine cache, as illustrated in step 422. This may include updating the version in the cache as well as updating the state data itself. The state data can then be written to a state replica in the appropriate partition of the state tier, as illustrated in step 424. If failover is desired, that state can also be duplicated across other state replicas in the partition, as illustrated in step 426. At this point, as illustrated in step 428, the piece of call state can be unlocked within the state tier so that other engine nodes processing other messages that may need that same state can use it accordingly. This can help to ensure synchronous call and message processing, as described in further detail below.
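The write path of steps 422 through 428 can be sketched as follows. The class and its in-memory maps are illustrative stand-ins for the distributed tiers:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of steps 422-428: update the near cache, write the state to every
// replica in the partition for failover, then release the lock so other
// engines can use the state. Names and maps are illustrative only.
public class WritePathSketch {
    final Map<String, byte[]> nearCache = new HashMap<>();
    final List<Map<String, byte[]>> partitionReplicas = new ArrayList<>();
    final Set<String> lockedCallIds = new HashSet<>();

    public WritePathSketch(int replicaCount) {
        for (int i = 0; i < replicaCount; i++) partitionReplicas.add(new HashMap<>());
    }

    public void saveAndUnlock(String callId, byte[] state) {
        nearCache.put(callId, state);             // step 422: save state in the near cache
        for (Map<String, byte[]> replica : partitionReplicas) {
            replica.put(callId, state.clone());   // steps 424/426: write and duplicate for failover
        }
        lockedCallIds.remove(callId);             // step 428: unlock for other engines
    }
}
```

Writing the cache before the replicas means a subsequent lock-and-get on the same engine will find a current version locally.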
FIGURE 4C is an exemplary flow diagram of tuning the performance of the near engine cache, in accordance with various embodiments. Although this figure depicts functional steps in a particular sequence for purposes of illustration, the process is not necessarily limited to this particular order or steps. One skilled in the art will appreciate that the various steps portrayed in this figure can be changed, omitted, rearranged, performed in parallel or adapted in various ways.
As illustrated in step 430, the performance of the near engine cache within the SIP server can be continually monitored. Similarly, as illustrated in step 432, the latency caused by various garbage collection algorithms can also be monitored. For example, monitoring can be performed by running varying amounts of call flow traffic and applications on the SIP server and measuring the time taken to process that traffic. A system administrator may implement an assortment of tools in order to monitor performance and latency, such as a counter of hits to the near cache, a proportion of those hits that return a current version, time intervals during which execution of various threads is halted by the garbage collector, average time taken to process a message, as well as various other tools. By weighing the latency that may be introduced by garbage collection against the benefit obtained by the near engine cache, an optimal performance of the SIP server can be determined.
In various embodiments, an administrator can tune the performance of the SIP server and the near engine cache. For example, in step 434, the size of the near cache can be adjusted to suit the particular network and call flow. Similarly, the expiration of objects in the near cache can be adjusted to be longer or shorter lived.
In step 436, the size of the Java Virtual Machine (JVM) heap can be adjusted so as to reduce garbage collection latency. The JVM heap is typically where the objects of a Java program live. In various embodiments, the JVM heap is a repository for live objects, dead objects and free memory. The JVM heap size can determine how long or how often the JVM will perform garbage collection. In one embodiment, if a large heap size is set, garbage collection may occur less frequently but can take longer to finish. Similarly, smaller heap sizes can speed up the garbage collection but may cause it to occur more frequently. Adjusting the size of the JVM heap can help to achieve the most favorable performance of the SIP server.
In step 438, the JVM ratio of when objects should move from the new generation heap (nursery) to the older generation heap can be adjusted. In various embodiments, the JVM heap can store short lived objects in the new generation heap and the long lived objects in the old generation heap. The size of these heaps can be similarly adjusted, as illustrated in step 440, in order to maximize performance.
Further adjustments can also include changing the storage of objects in the near cache to an array of bytes which can be deserialized on call, as illustrated in step 442. The adjusting of the various factors discussed above can be repeated, arranged, interrupted or omitted as performance of the SIP server is monitored. As an illustration, a system administrator can adjust one of the parameters discussed above, monitor performance, adjust another parameter, monitor any change in performance, and so on. In various embodiments, this can enable an administrator to determine the optimal or near-optimal performance of the near cache and the SIP server. These performance settings may differ across the various organizations that implement the SIP server, due to factors such as call flow volume, size of the cluster network, amount of data processed, as well as a multitude of other factors. The methodology illustrated in FIGURE 4C can help the organization improve its efficiency by adjusting the various factors influencing the SIP server.
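The byte-array storage option of step 442 can be sketched as a holder that keeps the wire form from the state tier and deserializes only on first access. The class name and single-field layout are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

// Sketch of step 442: keep call state in the serialized wire form received
// from the state tier and deserialize only when first accessed. A plain byte
// array is cheap for the garbage collector to manage, which can shorten
// collection pauses at the cost of slower first access.
public class LazyCallState {
    private byte[] wireBytes;     // serialized form from the state tier
    private Object materialized;  // object form, built on demand

    public LazyCallState(byte[] wireBytes) { this.wireBytes = wireBytes; }

    public synchronized Object get() throws IOException, ClassNotFoundException {
        if (materialized == null) {
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(wireBytes))) {
                materialized = in.readObject(); // pay the deserialization cost once
            }
            wireBytes = null; // drop the wire form after materializing
        }
        return materialized;
    }
}
```

Entries that are evicted before ever being read never pay the deserialization cost at all, which is the trade the text describes.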
Call Flow
FIGURE 5 is an exemplary illustration of a simplified call flow in a typical SIP communication session, in accordance with various embodiments. Although this figure depicts functional steps in a particular sequence for purposes of illustration, the process is not necessarily limited to this particular order or steps. One skilled in the art will appreciate that the various steps portrayed in this figure can be changed, omitted, rearranged, performed in parallel or adapted in various ways.
As illustrated, a back to back user agent (B2BUA) 500, having a running SIP server thereon, can take the place of being an intermediary between the communications sent between various users. This can be done for purposes of controlling the call and message flow between user agent 1 502 and user agent 2 504, and in order to prevent any unwanted behavior and messages (e.g. spamming, hacking, viruses, etc.) from being sent to the user agent device. It should be noted that although user agent 1 502 and user agent 2 504 are illustrated as telephones in FIGURE 5, the SIP messages can come from various other sources as well. For example, the user agent can also be a cell phone, a wireless device, a laptop, an application or any other component that can initiate a SIP type of communication. Similarly, while FIGURE 5 illustrates communications between two user agents (502, 504), there can be more such user agents taking part in a single communication session. For example, during a conference call, there may be 20 or 30 user agents for all attendees of the conference, each of which could send SIP messages to the B2BUA 500 and receive transmissions back therefrom.
Continuing with the illustration, a telephone call can be set up between user agent 1 502 and user agent 2 504 via the use of the SIP server. The first message sent from user agent 1 502 to the SIP server on the B2BUA 500 can be an invite message, requesting to set up a telephone call with user agent 2 504. The invite message can be received by the load balancer 202 of the SIP server and it can be directed to an engine in the engine tier 210 for processing.
In various embodiments, the engine tier (e.g. an application executing thereon) can then perform logic for determining various factors associated with the call, such as determining whether user agent 1 502 is allowed to make the type of call being initiated, determining whether the callee that will be contacted is properly identified, as well as any other logic that the server may need to calculate before attempting to set up a telephone call. The engine can then generate state around the fact that a call is being set up, including generating the proper long lived and short lived objects associated with the messages, as previously discussed. The engine can also determine how to find the target of the call (i.e. user agent 2 504) and the right path to route the message to the callee. As illustrated herein, user agent 1 is an originator (as well as the terminator) of the call and user agent 2 is referred to as the callee. After receiving the invite message, the SIP server can send a "100 trying" message back to user agent 1 502, indicating that it has received the invite message and that it is in the process of handling it. The "100 trying" message is part of the SIP protocol definition and can be used by a server in order to stop the user agent from re-transmitting the invite request. In cellular phone environments, the user agent may have interference which might cause an interruption or loss of various messages. Therefore, the SIP protocol defines various re-transmission schemes in order to handle such mobility and interruptions. Messages such as "100 trying," "180 ringing," and "200 OK" are just some of the examples of messages defined in SIP for handling communication.
Continuing with the illustration, the SIP server can then send an invite message to user agent 2 504 and can receive back a "180 ringing" message, indicating that user agent 2 504 has received the invitation and is now waiting for a user to answer. The SIP server engine tier can then transmit the "180 ringing" message back to user agent 1 502. When a person finally answers the phone, user agent 2 504 can then send a "200 ok" message to the SIP server, and the server can transmit that message to user agent 1 502. The user agent 1 502 can send an acknowledgement ("Ack" message) to the SIP server, which can be transmitted along to user agent 2 504, and at this point a sound transfer conversation can be set up between the two user agents. This sound transfer can be implemented via the Real-time Transport Protocol (RTP) on a media server. At the end of the conversation, either user agent can choose to terminate the call by sending a "Bye" message. In this illustration, user agent 1 502 terminates the call by sending a "Bye" message to the SIP server, which sends it off to user agent 2 504. After receiving back a "200 ok" message from user agent 2, the SIP server can transmit that message to user agent 1 and the conversation can be truly ended.
In various embodiments, the vertical lines, such as those extending downward from the user agents 502, 504 and the B2BUA 500, can each illustrate and be referred to as a single call leg. The call flow for each call leg may be time sensitive, as some messages should be received or sent before others can be initiated. For example, as illustrated herein, user agent 1 502 may continue to re-transmit the initial invite message until it receives a "100 trying" message from the B2BUA 500. As such, in some cases certain messages may need to be processed synchronously while others may be allowed to process in parallel. It should be noted that this illustration of a call may be overly simplified for purposes of clarity. For example, there can be various other message transmissions (not illustrated), such as authentication messages for the caller/callee, determining the type of user agent the SIP server is communicating with, and various other handshaking messages that can be exchanged between the SIP server and the user agents. Furthermore, message transmitting steps may be added, changed, interrupted or rearranged in case of interference or failure of various components.
Timer Objects
As previously discussed, in various embodiments there may be specific sequences of messages exchanged between the SIP server and the user agents for controlling the flow of the call. These sequences can be controlled by various timer objects residing on the SIP server. As a non-limiting illustration, after receiving the invite message from one user agent, the SIP server will typically forward that invite to another user agent and wait for a response. If no response is received within a period of time (e.g. a number of milliseconds), then the invite message may need to be retransmitted to the second user agent, because it may be assumed that the user agent did not receive the first message. This type of re-transmission can be controlled by the protocol timer objects which may be residing in the state tier. In one embodiment, an initial T1 timer value of 500 milliseconds can control the retransmission interval for the invite request and responses and can also set the value of various other timers.
In various embodiments, there are also other timer objects which can be executing on the level of the entire call. For example, if after a specified period of time nothing is heard back from either user agent, the entire call may be purged from the system. This specified period of time can also be controlled by firing a timer object.
In one embodiment, as engine tier servers add new call state data to the state tier, state tier instances queue and maintain a complete list of SIP protocol timers and application timers associated with each call. Engine tier servers can periodically poll the partitions of the state tier to determine which timers have expired given the current time. In order to avoid contention on the timer tables, multiple engine tier polls to the state tier can be staggered. The engine tier can then process the expired timers using threads in the sip.timer.Default execute queue. Thus, the processing of the timer objects can be executed by the engine server as determined by the state tier server. For example, the state tier can tell engine A to execute the first half of all due timer objects (e.g. 1-100) and tell engine B to execute the other half (e.g. 101-200). The state tier can also simultaneously push the state onto the engine, since the state may need to be employed in executing the timer objects. The engines can then process the timer objects (e.g. by sending appropriate messages, ending appropriate calls) and can later again poll the state tier for which timers have become due.
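The polling scheme above can be sketched as follows: each engine claims the timers due at poll time, and each engine's polls are offset from the others so they do not hit the timer tables simultaneously. All names are illustrative, not the product API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedSet;

// Sketch of staggered timer polling. An engine claims every timer whose
// deadline has passed; polls from different engines are offset in time so
// the engines avoid contending on the shared timer tables. Illustrative only.
public class TimerPollingSketch {
    // Remove and return every timer whose deadline is at or before 'now'.
    static List<Long> pollExpired(SortedSet<Long> timerDeadlines, long now) {
        List<Long> due = new ArrayList<>(timerDeadlines.headSet(now + 1));
        timerDeadlines.removeAll(due); // claimed by this engine for processing
        return due;
    }

    // Engine i of n starts its polls i * (interval / n) later than engine 0,
    // staggering the polls evenly across the poll interval.
    static long staggerOffsetMillis(int engineIndex, int engineCount, long pollIntervalMillis) {
        return engineIndex * (pollIntervalMillis / engineCount);
    }
}
```

With a near cache, the state the fired timers need may already be local, which is the affinity optimization described next.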
When used with the near cache, the state data may not need to be pushed onto the engine server, since that data may already be available in the cache. Thus, when processing timers, the timers can be fetched from the state tier; however, upon the timer firing, the engine can fetch the call state using the cache. Further performance optimization can be obtained by changing the selection of timers to give affinity to the engine holding the cache for a particular call. Thus, the timers which are going to be executed can be sent to the appropriate engines which have the proper call state in the cache thereon.
In various embodiments, it may be preferable to synchronize system server clocks to a common time source (e.g. within a few milliseconds) in order to achieve maximum performance. For example, an engine tier server with a system clock that is significantly faster than other servers may process more expired timers than the other engine tier servers. In some situations this may cause retransmits to begin before their allotted time, and thus care may need to be taken to ensure against it.
In various embodiments, the SIP Servlet API can provide a timer service to be used by applications. There can be a TimerService interface which can be retrieved as a ServletContext attribute. The TimerService can define a "createTimer(SipApplicationSession appSession, long delay, boolean isPersistent, java.io.Serializable info)" method to start an application level timer. The SipApplicationSession can be implicitly associated with the timer. When a timer fires, an application defined TimerListener is invoked and a ServletTimer object is passed up, through which the SipApplicationSession can be retrieved, which provides the right context of the timer expiry.
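The TimerService itself requires a SIP Servlet container to run. As an analogy only, the pattern it embodies — a timer carrying application session context, with a listener invoked on expiry — can be modeled with plain java.util.concurrent; none of the names below are the real javax.servlet.sip API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Analogy for the SIP Servlet application timer pattern: the timer carries a
// session identifier and opaque info, and a listener receives both on expiry,
// giving it the context of the timer. Not the real javax.servlet.sip API.
public class TimerServiceAnalogy {
    interface TimerListener { void timeout(String sessionId, Object info); }

    static ScheduledFuture<?> createTimer(ScheduledExecutorService scheduler, String sessionId,
                                          long delayMillis, Object info, TimerListener listener) {
        return scheduler.schedule(() -> listener.timeout(sessionId, info),
                                  delayMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch fired = new CountDownLatch(1);
        createTimer(scheduler, "app-session-1", 10, "retransmit-check",
                    (sessionId, info) -> fired.countDown());
        fired.await();
        scheduler.shutdown();
    }
}
```

In the real API, the listener retrieves the SipApplicationSession from the ServletTimer rather than receiving a plain session identifier.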
Failover
In various embodiments, the engine tier servers continually access the state tier replicas in order to retrieve and write call state data. In addition, the engine tier nodes can
also detect when a state tier server has failed or become disconnected. For example, in one embodiment, when an engine cannot access or write call state data for some reason (e.g. the state tier node has failed or become disconnected), then the engine can connect to another replica in the partition and retrieve or write data to that replica. The engine can also report that failed replica as being offline. This can be achieved by updating the view of the partition and data tier such that other engines can also be notified about the offline state tier server as they access state data.
Additional failover can also be provided by use of an echo server running on the same machine as the state tier server. The engines can periodically send heartbeat messages to the echo server, which can continually send responses to each heartbeat request. If the echo server fails to respond for a specified period of time, the engines can assume that the state tier server has become disabled and report that state server as previously described. In this manner, even quicker failover detection is provided, since the engines can notice failed servers without waiting for the time that access is needed and without relying on the TCP protocol's retransmission timers to diagnose a disconnection.
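The heartbeat-based detection can be sketched as a small monitor that tracks the last echo response and declares the server offline once responses have been absent past the timeout. The names and explicit clock parameters are illustrative:

```java
// Sketch of echo-server heartbeat failure detection: the engine records when
// each echo response arrives and declares the state tier server offline once
// responses have been absent longer than the timeout. Names and time handling
// (explicit clock values instead of System.currentTimeMillis) are illustrative.
public class HeartbeatMonitor {
    private final long timeoutMillis;
    private long lastResponseMillis;
    private boolean reportedOffline;

    public HeartbeatMonitor(long timeoutMillis, long startMillis) {
        this.timeoutMillis = timeoutMillis;
        this.lastResponseMillis = startMillis;
    }

    // Called whenever the echo server answers a heartbeat.
    public void onEchoResponse(long nowMillis) {
        lastResponseMillis = nowMillis;
        reportedOffline = false;
    }

    // Called after sending each heartbeat: true means the engine should report
    // this state tier server as offline and fail over to another replica.
    public boolean checkOffline(long nowMillis) {
        if (nowMillis - lastResponseMillis > timeoutMillis) reportedOffline = true;
        return reportedOffline;
    }
}
```

Passing the clock in explicitly keeps the sketch deterministic; a production version would read the system clock and run the checks on a scheduler.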
Failover can also be provided for the engine tier nodes. As previously discussed, the engine tier nodes can periodically poll the state tier nodes in order to determine which timer objects they need to execute. In turn, the state tier nodes can notice whenever an engine tier node has failed to poll. If a specified period of time elapses and the engine tier has not polled the state tier, the state server can then report that engine as unavailable (e.g. having failed or disconnected from the network). In this manner, failover can be implemented for both the state tier and the engine tier, thereby providing a more reliable and secure cluster for message processing.
In other aspects, the invention encompasses in some embodiments computer apparatus, computing systems and machine-readable media configured to carry out the foregoing methods. In addition to an embodiment consisting of specifically designed integrated circuits or other electronics, the present invention may be conveniently implemented using a conventional general purpose or a specialized digital computer or microprocessor programmed according to the teachings of the present disclosure, as will be apparent to those skilled in the computer art.
Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as will be apparent to those skilled in the software art. The invention may also be implemented by the preparation of application specific integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be readily apparent to those skilled in the art.
The present invention includes a computer program product which is a storage medium (media) having instructions stored thereon/in which can be used to program a computer to perform any of the processes of the present invention. The storage medium can include, but is not limited to, any type of rotating media including floppy disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical disks, and magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data. Stored on any one of the machine readable medium (media), the present invention includes software for controlling both the hardware of the general purpose/specialized computer or microprocessor, and for enabling the computer or microprocessor to interact with a human user or other mechanism utilizing the results of the present invention. Such software may include, but is not limited to, device drivers, operating systems, and user applications.
Included in the programming (software) of the general/specialized computer or microprocessor are software modules for implementing the teachings of the present invention, including, but not limited to, providing systems and methods for providing the SIP server architecture as discussed herein. Various embodiments may be implemented using a conventional general purpose or specialized digital computer(s) and/or processor(s) programmed according to the teachings of the present disclosure, as can be apparent to those skilled in the computer art. Appropriate software coding can readily be prepared by skilled programmers based on the teachings of the present disclosure, as can be apparent to those skilled in the software art. The invention may also be implemented by the preparation of integrated circuits and/or by interconnecting an appropriate network of conventional component circuits, as can be readily apparent to those skilled in the art.
Embodiments can provide, by way of example and without limitation, services such as: VoIP services, including, without limitation, the following features:
Basic features. These include standard services such as Voice mail, Caller ID, Call waiting, and call forwarding (the ability to forward a call to a different number).
Advanced features. Following is a brief list of advanced features:
Call logs: The ability to view calls made over a given period of time online, the ability to associate names with phone numbers, and the ability to integrate call log information into other applications such as IM.
Do not disturb: The ability to specify policies around receiving calls; for example, all calls during office hours to be automatically forwarded to a mobile terminal, all calls during the night to be directed to voice mail, etc.
Locate me: This is advanced call forwarding. Rather than have all calls forwarded to a single location (e.g. voice mail) when the caller is unavailable, Locate me can try multiple terminals in series or in parallel. For example, a user may have two office locations, a mobile, and a pager, and it may make sense to forward a call to both office locations first, then the pager, and then the mobile terminal. Locate me is another example of feature interaction.
Personal conferencing: A user could use an existing application (e.g., an IM client) to schedule a Web/audio conference to start at a certain time. Since the IM client already has personal profile information, the conferencing system sends out the Web conference link information either through IM and/or email to the participants. The phone contact information in the profile is used to automatically ring the participants at the time of the conference.
Lifetime number: This is the facility where a single virtual number can travel with a customer wherever they live. Even if they move, the old number continues to work, and reaches them at their new location. This is really the analog of static IP addresses in a phone network.
Speed dial: This is the ability to dramatically expand the list of numbers that can be dialed through short-key and accelerator combinations. This is another example of a converged application, since it is very likely that a user will set up this information when they work through the call logs on the operator user portal, and the updated information needs to be propagated to the network side in real time.
Media delivery services, including, without limitation, the following features:
Depending on the service level agreement users are willing to sign up for, the quality of media delivered (e.g., number of frames per second) will vary. The engine enables segmenting the customer base by revenue potential, and maximizing return on investment made in the network.
Context-sensitive applications, including, without limitation, the following features:
A typical example here is the need for applications that have a short lifetime, extremely high usage peaks within their lifetime, and immediacy. For example, voting on American Idol during the show or immediately afterwards has proved to be an extremely popular application.
Integrated applications, including, without limitation, the following features:
The final class of applications is one that combines wireline and wireless terminal usage scenarios. An example of an integrated application is the following: a mobile terminal user is on a conference call on their way to work. When he reaches his office, he enters a special key sequence to transfer the phone call to his office phone. The transfer happens automatically without the user having to dial in the dial-in information again. It is important to note here that this capability be available without the use of any specific support from the handset (a transfer button, for example).
Various embodiments include a computer program product which is a storage medium (media) having instructions stored thereon/in, which can be used to program a general purpose or specialized computing processor(s)/device(s) to perform any of the features presented herein. The storage medium can include, but is not limited to, one or more of the following: any type of physical media including floppy disks, optical discs, DVDs, CD-ROMs, microdrives, magneto-optical disks, holographic storage, ROMs, RAMs, PRAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), paper or paper-based media, and any type of media or device suitable for storing instructions and/or information. Various embodiments include a computer program product that can be transmitted in whole or in parts over one or more public and/or private networks wherein the transmission includes instructions which can be used by one or more processors to perform any of the features presented herein. In various embodiments, the transmission may include a plurality of separate transmissions.
Stored on one or more of the computer readable medium (media), the present disclosure includes software for controlling both the hardware of general purpose/specialized computer(s) and/or processor(s), and for enabling the computer(s) and/or processor(s) to interact with a human user or other mechanism utilizing the results of the present invention. Such software may include, but is not limited to, device drivers, operating systems, execution environments/containers, user interfaces and applications. The foregoing description of the preferred embodiments of the present invention has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations will be apparent to the practitioner skilled in the art. Embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the relevant art to understand the invention. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims

What is claimed is:
1. A computer implemented method for providing a near engine cache in a network environment, comprising: maintaining an engine tier distributed over a cluster network and adapted to process one or more messages; maintaining a state tier distributed over the cluster network and adapted to store state data associated with the messages;
storing a local copy of a portion of the state data onto a near cache residing on the engine tier; receiving a message; determining whether a current version of the state data associated with the message is in the near cache; retrieving the state data associated with the message from the state tier if no current version is stored in the near cache, otherwise accessing the state data in the near cache.
2. The method of claim 1 wherein retrieving the state from the state tier further comprises: serializing the state data and transporting it to the engine tier; and deserializing the state data in the engine tier.
3. The method of claim 2 wherein accessing the state data in the near cache does not require serializing and deserializing the state data.
4. The method of claim 3 further comprising: adjusting the size of the near cache in order to achieve a balance between latency introduced by garbage collection and latency reduced by elimination of serializing and deserializing the state data.
5. The method of claim 1 wherein the state data is maintained in random access memory (RAM) on the state tier and on the near cache.
6. The method of claim 1 wherein the state data in the near cache is implemented as a bounded map of call states indexed by call ID.
7. The method of claim 1 further comprising: providing one or more timer objects for specifying to the engine tier when to process the incoming messages wherein the timer objects are integrated with the near cache such that the state data can be accessed from the near cache by the engine tier while processing messages specified by the timer objects.
8. The method of claim 1 wherein receiving a message further comprises: receiving the message by a load balancer; and distributing the message to an engine node in the engine tier; wherein the load balancer distributes all messages associated with a same call leg to the same engine node in the engine tier.
9. The method of claim 1 wherein retrieving and accessing the state data further includes locking the state data in the state tier.
10. The method of claim 1 further comprising: saving the state data in the near cache after processing the message; and writing the state data to the state tier.
11. A system for providing a near cache for a network environment, comprising: an engine tier distributed on a cluster network and adapted to process messages; a state tier distributed on the cluster network and adapted to maintain state associated with the messages such that the engine tier retrieves the state maintained in the state tier in order to process the messages; and a near cache residing on the engine tier for storing a local copy of a portion of the state maintained in the state tier such that the engine tier can acquire the state from the near cache instead of retrieving from the state tier.
12. The system of claim 11 wherein retrieving the state from the state tier further comprises: serializing the state data and transporting it to the engine tier; and deserializing the state data in the engine tier.
13. The system of claim 12 wherein acquiring the state from the near cache does not require serializing and deserializing the state data.
14. The system of claim 13 wherein the size of the near cache is adjusted in order to achieve a balance between latency introduced by garbage collection and latency reduced by elimination of serializing and deserializing the state.
15. The system of claim 11 wherein the state is maintained in random access memory (RAM) on the state tier and on the near cache.
16. The system of claim 11 further comprising: one or more timer objects for specifying to the engine tier when to process the incoming messages wherein the timer objects are integrated with the near cache such that the state can be accessed from the near cache by the engine tier while processing messages specified by the timer objects.
17. The system of claim 11 further comprising: a load balancer that receives an incoming message and distributes the message to an engine node in the engine tier for processing wherein the load balancer distributes all messages associated with a same call leg to the same engine node in the engine tier.
18. The system of claim 11 wherein the state is locked in the state tier whenever the engine tier is using the state for message processing.
19. The system of claim 11 wherein the state is saved into the near cache after processing the message and wherein the state is written into the state tier.
20. A computer readable medium having instructions stored thereon which when executed by one or more processors cause a system to: maintain an engine tier distributed over a cluster network and adapted to process one or more messages; maintain a state tier distributed over the cluster network and adapted to store state data associated with the messages; store a local copy of a portion of the state data onto a near cache residing on the engine tier; receive a message; determine whether a current version of the state data associated with the message is in the near cache; and retrieve the state data associated with the message from the state tier if no current version is stored in the near cache, otherwise access the state data in the near cache.
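By way of illustration only, and not as part of the claims, the mechanism of claims 1, 6, and 10 can be sketched as a bounded, least-recently-used map of call states indexed by call ID, falling back to the state tier (where a serialization round trip is incurred) only on a cache miss. The Python sketch below is a simplified model under stated assumptions: the class names (NearCache, StateTier), the pickle-based serialization, and the LRU eviction policy are illustrative choices, not details disclosed in the specification.

```python
from collections import OrderedDict
import pickle


class StateTier:
    """Stands in for the replicated state tier: stores serialized call state."""

    def __init__(self):
        self._store = {}

    def write(self, call_id, state):
        # State crossing the tier boundary is serialized (cf. claim 2).
        self._store[call_id] = pickle.dumps(state)

    def read(self, call_id):
        # Reading from the state tier requires deserialization.
        return pickle.loads(self._store[call_id])


class NearCache:
    """Bounded map of call states indexed by call ID (cf. claim 6),
    evicting the least recently used entry when full."""

    def __init__(self, state_tier, max_entries=1000):
        self._tier = state_tier
        self._map = OrderedDict()
        self._max = max_entries

    def get(self, call_id):
        if call_id in self._map:
            # Hit: no serialization round trip is needed (cf. claim 3).
            self._map.move_to_end(call_id)
            return self._map[call_id]
        # Miss: fall back to the state tier (cf. claim 1).
        state = self._tier.read(call_id)
        self._put_local(call_id, state)
        return state

    def save(self, call_id, state):
        # After processing a message, keep a local copy and write
        # through to the state tier (cf. claim 10).
        self._put_local(call_id, state)
        self._tier.write(call_id, state)

    def _put_local(self, call_id, state):
        self._map[call_id] = state
        self._map.move_to_end(call_id)
        if len(self._map) > self._max:
            self._map.popitem(last=False)  # evict least recently used entry
```

In this model, an engine node calls `get` while processing each message of a call leg; because a load balancer pins all messages of a call leg to one engine node (cf. claims 8 and 17), the hit rate of that node's near cache stays high, and the bound on `max_entries` reflects the garbage-collection trade-off of claims 4 and 14.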
PCT/US2007/069023 2006-05-16 2007-05-16 Engine near cache for reducing latency in a telecommunications environment WO2007134339A2 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US80108306P 2006-05-16 2006-05-16
US80094306P 2006-05-16 2006-05-16
US80109106P 2006-05-16 2006-05-16
US60/801,083 2006-05-16
US60/800,943 2006-05-16
US60/801,091 2006-05-16

Publications (2)

Publication Number Publication Date
WO2007134339A2 true WO2007134339A2 (en) 2007-11-22
WO2007134339A3 WO2007134339A3 (en) 2008-10-30

Family

ID=38694789

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/069023 WO2007134339A2 (en) 2006-05-16 2007-05-16 Engine near cache for reducing latency in a telecommunications environment

Country Status (1)

Country Link
WO (1) WO2007134339A2 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6721286B1 (en) * 1997-04-15 2004-04-13 Hewlett-Packard Development Company, L.P. Method and apparatus for device interaction by format
US6747970B1 (en) * 1999-04-29 2004-06-08 Christopher H. Lamb Methods and apparatus for providing communications services between connectionless and connection-oriented networks
US7089307B2 (en) * 1999-06-11 2006-08-08 Microsoft Corporation Synchronization of controlled device state using state table and eventing in data-driven remote device control model


Also Published As

Publication number Publication date
WO2007134339A3 (en) 2008-10-30

Similar Documents

Publication Publication Date Title
US8112525B2 (en) Engine near cache for reducing latency in a telecommunications environment
US8171466B2 (en) Hitless application upgrade for SIP server architecture
US8001250B2 (en) SIP and HTTP convergence in network computing environments
US8219697B2 (en) Diameter protocol and SH interface support for SIP server architecture
US7661027B2 (en) SIP server architecture fault tolerance and failover
US20080086567A1 (en) SIP server architecture for improving latency in message processing
US9723048B2 (en) System and method for providing timer affinity through notifications within a session-based server deployment
US7844851B2 (en) System and method for protecting against failure through geo-redundancy in a SIP server
US9667430B2 (en) System and method for a SIP server with offline charging
US7895475B2 (en) System and method for providing an instrumentation service using dye injection and filtering in a SIP application server environment
US8078737B2 (en) System and method for efficient storage of long-lived session state in a SIP server
US8331351B2 (en) Communicating with session initiation protocol (SIP) application sessions using a message-oriented middleware system
US20080147551A1 (en) System and Method for a SIP Server with Online Charging
Singh et al. Failover, load sharing and server architecture in SIP telephony
US8179912B2 (en) System and method for providing timer affinity through engine polling within a session-based server deployment
US8107612B2 (en) Distributed session-based data
US8719780B2 (en) Application server with a protocol-neutral programming model for developing telecommunications-based applications
US20140022889A1 (en) Transferring a conference session between conference servers due to failure
Singh Reliable, Scalable and Interoperable Internet Telephony
WO2007134338A2 (en) Hitless application upgrade for sip server architecture
CN102546712B (en) Message transmission method, equipment and system based on distributed service network
US20100329238A1 (en) System and method for exposing third party call functions of the intelligent network application part (inap) as a web service interface
WO2007134339A2 (en) Engine near cache for reducing latency in a telecommunications environment
Femminella et al. Scalability and performance evaluation of a JAIN SLEE-based platform for VoIP services
TWI397296B (en) Server system and method for user registeration

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07797498

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07797498

Country of ref document: EP

Kind code of ref document: A2