WO2001035242A1 - Highly distributed computer server architecture and operating system - Google Patents

Highly distributed computer server architecture and operating system

Info

Publication number
WO2001035242A1
Authority
WO
WIPO (PCT)
Prior art keywords
server
hda
resources
computer server
computer
Application number
PCT/US2000/031108
Other languages
English (en)
Inventor
Gad Barnea
Original Assignee
Zebrazone, Inc.
Application filed by Zebrazone, Inc. filed Critical Zebrazone, Inc.
Priority to AU16021/01A priority Critical patent/AU1602101A/en
Publication of WO2001035242A1 publication Critical patent/WO2001035242A1/fr

Classifications

    • G06F9/5061 Partitioning or combining of resources (G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU])
    • H04L61/45 Network directories; Name-to-address mapping (H04L61/00 Network arrangements, protocols or services for addressing or naming)
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L67/1012 Server selection for load balancing based on compliance of requirements or conditions with available server resources
    • H04L67/75 Indicating network or usage conditions on the user display (H04L67/50 Network services)
    • H04L69/329 Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]

Definitions

  • the present invention is related generally to computer server systems, and more particularly to a computer server system employing a highly distributed architecture and utilizing an adaptive, migrative server-side computer operating system.
  • the typical n-tier architecture is built upon a multiplicity of individual and independent servers, each acting autonomously and making use of its own physical and logical resources exclusively.
  • Such configurations generally revolve around central processing hubs (or "tiers"); these tiers host applications or application components under a fixed hardware topology, i.e. each component is tied to a specific "physical" location.
  • A typical n-tier computer server assigns a different thread to each active Session. While threading may be considered effective for limited uses, the technique is processor intensive, and consequently, slow and cumbersome.
  • While more flexible than servers built on a single physical machine, n-tier computer server platforms still suffer severe limitations in terms of scalability and bandwidth.
  • the present invention overcomes the foregoing and other shortcomings of conventional systems by providing software for a computer server having a Highly Distributed Architecture (HDA); in one embodiment, the computer server system of the present invention advantageously utilizes a sophisticated, HDA server-side computer operating system. While one implementation of the HDA Server is generally described herein with reference to an innovative infrastructure enabling Business Service Providers (BSPs) to provide value-added services to their customers on the World Wide Web (Web), it is within the scope and contemplation of the invention to employ such an HDA Server in other contexts for other applications, such as intranet connectivity, home networking, pervasive computing, and so forth.
  • a computer server having a highly distributed architecture generally includes a plurality of physical machines (PMs), each having physical and logical resources, a network enabling data transmission between and among the PMs, and program code for managing system resources.
  • the program code for managing system resources may be in the form of an HDA operating system designed to take advantage of the HDA Server architecture.
  • the benefits of implementing such an HDA-based system include efficient overall system resource management, excellent fault tolerance characteristics (i.e. stability and reliability), and virtually infinite scalability as well as ease of maintenance and application deployment.
  • a system including an HDA computer server is employed for serving as a platform for facilitating Internet transactions.
  • a system employing the HDA Server of the present invention may provide "SoftSpot" technology.
  • a SoftSpot is an Adaptive User Interface (AUI).
  • SoftSpots may exist independently of a host user interface, but may be able to integrate into the host interface.
  • SoftSpots may be embedded in a Web page, but they may also function independently, for example, in a cellular telephone Wireless Application Protocol (WAP) interface or in a Voice-driven interface.
  • FIG. 1 is a simplified block diagram of one embodiment of a computer server employing a Highly Distributed Architecture (HDA).
  • Figure 2 is a simplified block diagram of one embodiment of an HDA computer server connected to a network.
  • Figure 3 is a simplified block diagram of one embodiment of the Registry component employed by an HDA Server.
  • Figure 4 is a simplified flow chart illustrating one embodiment of the operation of the Registry component of an HDA Server.
  • Figure 5 is a simplified flow chart illustrating one embodiment of the data flow through the Adaptive Messaging Services component.
  • Figure 6 is a simplified flow chart illustrating one embodiment of the procedures for runtime activation and initialization of Application software employed by an HDA Server.
  • FIG. 7 is a simplified block diagram illustrating one embodiment of an HDA Server operating system for use in conjunction with an HDA Server.
  • Figures 8A and 8B illustrate an XML representation of one embodiment of a TopologyDescriptor employed by an HDA Server OS.
  • Figure 9A is an example of one embodiment of a Migration Request event.
  • Figure 9B is a simplified flow chart illustrating one embodiment of a Migration employed by an HDA Server OS.
  • Figure 10 illustrates one embodiment of an svcs.xml file which may govern the boot process for an HDA system.
  • Figure 11 is a simplified flow chart illustrating one embodiment of an HDA system boot process.
  • Figure 12 is a simplified block diagram illustrating one embodiment of a Server Gateway which may be employed by an HDA Server.
  • Figure 13 is a simplified flow chart illustrating the operation of one embodiment of a Server Gateway employed in conjunction with an HDA Server.
  • Figure 14 is a simplified block diagram illustrating the operation of one embodiment of database access which may be employed by an HDA Server.
  • Figure 15 is a simplified block diagram illustrating the operation of another embodiment of database access which may be employed by an HDA Server.
  • Figure 16 is a simplified flow chart illustrating one embodiment of the life cycle of a SoftSpot which may interact with an HDA Server.
  • Figure 17 is an illustration of one embodiment of a SoftSpot descriptor.
  • Figures 18A and 18B are an illustration of one embodiment of a Uniform SoftSpot Descriptor file.
  • FIG. 1 is a simplified block diagram of one embodiment of a computer server employing a Highly Distributed Architecture (HDA) and an HDA Server operating system.
  • the HDA Server 100 having a distributed architecture in accordance with the present invention takes advantage of both physical and logical resources, irrespective of physical "location.”
  • Computer servers 1-3 represent the physical resources available to HDA Server 100; servers 1-3 are physical machines (PMs) comprising electrical and electromechanical components, such as conventional file servers, application servers, or mini-mainframes, for example.
  • the Realm 10 may function as a logical container for various elements of HDA Server 100, such as an HDA Server operating system (OS) and the applications and data resources it serves.
  • the various system elements contained in Realm 10 are not bounded by physical location, and may utilize available system resources at any of servers 1-3, either individually or in combination.
  • Realm 10 may contain one or more virtual machines (VMs), i.e. logical resources supported by PMs.
  • the number of VMs, as well as the "location" (i.e. the physical resources utilized) of each VM may change dynamically over time in accordance with system demands.
  • the location or physical resources utilized by Realm 10 may also change dynamically over time.
  • servers 1-3 may be conventional file servers, application servers, mini-mainframe computers, or other types of PMs known in the art.
  • the functionality of HDA Server 100 and the HDA Server OS is not dependent upon the nature or the specific arrangement of the components employed by servers 1-3; for example, chip sets, bus architectures, memory configurations, and the like, may vary from server 1 to server 3 without affecting the operation of HDA Server 100. Invocation and general management of VMs is known in the relevant art.
  • the general embodiment of HDA Server 100 depicted in Fig. 1 is illustrated by way of introduction and example only, and not by way of limitation.
  • the HDA system architecture of the present invention may support a desired number of VMs in more than one Realm, such as Realm 10, across a desired number of PMs, such as servers 1-3.
  • Although Fig. 1 illustrates only one Realm (Realm 10) and three servers (servers 1-3), one embodiment of the present invention operates with three Realms, as discussed below with reference to Fig. 2.
  • a single Realm may span any number of PMs and may incorporate any number of VMs.
  • FIG. 2 is a simplified block diagram of one embodiment of an HDA computer server employing an HDA computer operating system and connected to a network.
  • the HDA Server 200 generally corresponds to the distributed HDA Server 100 in Fig. 1; it will be appreciated that the elements of HDA Server 200 described below may be distributed across, and utilize the physical resources of, one or more PMs as set forth above with reference to Fig. 1.
  • HDA Server 200 is generally constituted by the following components: Realms 210, 220, and 230; a Registry component 240; an Adaptive Messaging Services component 250; and a Transaction Services Manager component 260.
  • a Server Gateway 270 and a SoftSpot Gateway 280 may combine to enable data communication between HDA Server 200 and a network 999 such as the Internet, for example.
  • the foregoing elements, and parts thereof, may be contained in one or more VMs distributed across multiple PMs.
  • a Data Layer Realm 220 may have access to data records maintained in one or more databases 290.
  • An accessible database 290 may be maintained at HDA Server 200, or may be maintained at one or more remote servers.
  • database 290 may be distributed across multiple VMs at different PMs in a similar fashion as the server elements described above.
  • the arrangement of HDA Server 200 is representative of an exemplary HDA.
  • the architecture and operation of HDA Server 200 are very different from tier-based server platforms.
  • the arrangement (of PMs, VMs, and Realms 210, 220, and 230) which forms the foundation of the architecture for HDA Server 200 may be viewed as a medium through which objects may "flow" or migrate between VMs (and PMs).
  • object refers to computer programming code components created in accordance with object-oriented programming techniques.
  • HDA Server 200 is more flexible than n-tier architectures.
  • conventional n-tier servers employ a hierarchical structure of independent physical machines, generally involving at least one "main" or “master” PM operating at a level or tier above the other PMs, such a multilevel or multi-tier hierarchy does not exist in the inventive HDA Server 200.
  • the physical resources of HDA Server 200 may generally be constituted by a non- hierarchical array of cooperating PMs across which logical resources and objects may be distributed.
  • the term "non-hierarchical" in this context refers to the fact that HDA Server 200 may be considered to be a single tier, or at least not requiring that the PMs be linked in a multiplicity of levels or tiers.
  • HDA Server 200 may advantageously employ a plurality of Realms.
  • Realms 210, 220, and 230 are logical containers for the various components of HDA Server 200.
  • a Realm may be viewed as an "internal server" of sorts, performing specific functions and serving particular needs within HDA Server 200.
  • a Data Layer Realm 220 may perform such an internal server function, for example, serving data in an Extensible Markup Language (XML) format which other applications (served by HDA Server 200) can understand.
  • Realms 210, 220, and 230 may contain VMs.
  • the number of VMs in any particular Realm may change dynamically over time.
  • the "location" of a Realm and its VMs (i.e. the physical resources utilized by each) may advantageously change dynamically over time.
  • Each Realm 210, 220, and 230 manages a set of VMs and PMs.
  • One Realm in particular (Operating System (OS) Realm 210) is unique in that it may actually manage other Realms.
  • the OS Realm 210 represents the HDA Server operating system which manages all the VMs and physical hardware resources available to HDA Server 200.
  • Other Realms 220 and 230 may communicate with OS Realm 210.
  • OS Realm 210 may manage the low-level resources, while the "higher-level" Realms 220 and 230 may manage the application logic components.
  • Realms 220 and 230 may be assigned management of only one type of component directly, namely, Services (such as those denoted 250 and 260 in Fig. 2).
  • the functions of Realms 220 and 230 may involve keeping Services alive, spawning Services at boot time or when new instances are needed, and generally implementing any directive given by OS Realm 210.
  • OS Realm 210 may follow the same rules as other Realms 220 and 230; in other words, OS resources may be managed at the Services level as set forth below.
  • a Realm employs a set of VMs allocated to that particular Realm.
  • a Realm may be considered a volume (drive); a Service may be considered a folder; and a Plugin may be considered a file.
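  • The volume/folder/file analogy above can be sketched as a simple containment hierarchy. The class and field names below are illustrative only and are not part of the disclosed implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a Realm holds Services the way a volume (drive)
// holds folders, and a Service holds Plugins the way a folder holds files.
class Plugin {
    final String name;
    Plugin(String name) { this.name = name; }
}

class Service {
    final List<Plugin> plugins = new ArrayList<>();   // the "files"
}

public class Realm {
    final List<Service> services = new ArrayList<>(); // the "folders"

    public static void main(String[] args) {
        Realm appRealm = new Realm();                  // the "volume"
        Service svc = new Service();
        svc.plugins.add(new Plugin("checkout"));
        appRealm.services.add(svc);
        System.out.println(appRealm.services.get(0).plugins.get(0).name);
    }
}
```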
  • the Application Realm 230 may maintain all the Applications which are installed in HDA Server 200, i.e. the Applications which may be served.
  • the term "installed” denotes Applications which were installed by installation software code or programming scripts provided by OS Realm 210.
  • the term “AppInstaller” (discussed below) is used to denote this installation software.
  • Application Realm 230 may only manage the Services that govern Applications. In operation, Application Realm 230 may be the most dynamic Realm in HDA Server 200. Consequently, resource allocation and management in Application Realm 230 may be substantially more active than with other Realms; this does not imply any architectural issues, but rather system resource deployment issues.
  • Application Realm 230 may receive more physical and virtual resources at deployment time than the other Realms. It will be appreciated from the following discussion that other Realms may be just as "active" in terms of processes as Application Realm 230, but since these other Realms are less "dynamic" in terms of resource demands, caching and other optimization algorithms may be used to ease the resource management burden on the other Realms in a way which may not be applicable to Application Realm 230.
  • Application Realm 230 may have the highest priority with the OS employed by HDA Server 200; the OS may monitor processes taking place in Application Realm 230 intensively. Additionally, Application Realm 230 may generally be the last Realm to be booted by the Launcher (discussed below). That is, Application Realm 230 may advantageously be booted after both the Data Layer Realm 220 and OS Realm 210 have booted successfully.
  • the Data Layer Realm 220 may maintain Services which deal with querying and updating various types and quantities of data sources.
  • Data Layer Realm 220 may handle Java Database Connectivity (JDBC) Services, Common Object Request Broker Architecture (CORBA) Services, as well as Simple Object Access Protocol (SOAP) Services.
  • Data Layer Realm 220 may be much less dynamic than Application Realm 230, but not less active. New types of data Services may be installed dynamically into Data Layer Realm 220.
  • Data Layer Realm 220 may be the second Realm to be booted by the Launcher (i.e. subsequent to successful boot of OS Realm 210).
  • Data Layer Realm 220 may employ many optimizations (such as advanced caching, templating, and pooling, for example) which are relatively easier to implement in Data Layer Realm 220 due to its fairly "static" nature; these optimizations may dramatically speed up performance of processes in Data Layer Realm 220.
  • It may be desirable to configure Data Layer Realm 220 such that the input to the Data Layer from the other elements of HDA Server 200 is exclusively in a single markup language, XML or Document Object Model (DOM) format, for example; output from the Data Layer to HDA Server 200 may be in the same markup. Input and output format to and from the various data sources, however, may depend upon the specific data source. In other words, all data communications which are internal to HDA Server 200 may be in a single desired markup, while external communication markup may be dictated by the external source.
  • OS Realm 210 may encapsulate the server-side HDA computer OS which governs low-level operation of HDA Server 200.
  • the OS employed by HDA Server 200 may be assigned the task of managing resources and processes while providing software "drivers" for use by applications.
  • One major distinction between the OS employed by HDA Server 200 and conventional systems is the fact that the OS of the present invention may advantageously be designed as natively distributed; that is, in one embodiment, the OS is engineered from the bottom up to be a full server-side distributed OS for use with HDA Server 200.
  • OS Realm 210 may maintain a set of Services which govern various OS functions; as noted above, OS Realm 210 may be different from other Realms in this sense, because it may be operating entirely "offline.” This means that OS Realm 210 may not be affected by user Sessions (at least not directly) as described below. Those of skill in the art will appreciate from the foregoing that OS Realm 210 may preferably be the first Realm to be booted by the Launcher.
  • Services are the actual processes which drive the operation of HDA Server 200; these Services may generally be managed by software code provided in each respective Realm, as discussed above. In one embodiment of HDA Server 200, however, Services may be oblivious to the operations carried out in the various Realms, and may not interact with Realms 210, 220, and 230 directly. In this embodiment, two kinds of Services may be employed by HDA Server 200: SessionAwareServices; and DaemonServices.
  • Most Services may be SessionAware, or Session dependent. It may be desirable, however, that all Services in OS Realm 210 be Daemon Services, which may be viewed as maintaining an "eternal Session.”
  • Services in this embodiment may be transient software components; Services in Data Layer Realm 220 and Application Realm 230 may never need to be continually maintained, since they may not directly hold any critical data. Services may easily be selectively spawned or terminated at any time by the associated Realm.
  • the main functionality of Services is to act as a container for Plugins and to manage Messaging safely and independently. All Services may be, in fact, identical software components; that is, the Services may differ only with respect to the components each contains. These components may advantageously be assigned to each Service dynamically. Every Service may be contained in a managerial and administrative object, referred to herein as ServiceManager, which essentially keeps track of the various instances of the same Service.
  • SessionAware Services may constantly manage Services for a specific set of Sessions. Contrary to all common and conventional design methodologies, internal Session management in HDA Server 200 may not involve using different threads for each active Session. Rather, the Service may maintain a list of DOM documents, each of which may contain data relating to a specific Session. All communication between Services and other entities may be accomplished through the Messaging System (discussed below) which may deliver DOM documents stamped with an associated SessionlD. This is a much faster and more efficient (i.e. less resource-consuming) approach than the common method of using Java threads, for example.
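  • The thread-free Session management described above can be sketched as follows, using the JDK's standard DOM classes. The class name, method names, and the key/value reduction of a Message are assumptions for illustration; the patent does not disclose this exact code:

```java
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch: a SessionAware Service keeps one DOM document per active Session
// instead of one thread per Session. Each incoming Message is stamped with a
// SessionID; the Service looks up (or creates) the matching document and
// records the Message data in it.
public class SessionAwareService {
    private final Map<String, Document> sessions = new HashMap<>();

    // Deliver a Message (reduced here to a key/value pair) for a Session.
    public void onMessage(String sessionId, String key, String value) throws Exception {
        Document doc = sessions.get(sessionId);
        if (doc == null) {
            doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();
            doc.appendChild(doc.createElement("session"));
            sessions.put(sessionId, doc);
        }
        Element entry = doc.createElement(key);
        entry.setTextContent(value);
        doc.getDocumentElement().appendChild(entry);
    }

    public int activeSessions() { return sessions.size(); }

    public String lookup(String sessionId, String key) {
        Document doc = sessions.get(sessionId);
        if (doc == null) return null;
        NodeList nodes = doc.getElementsByTagName(key);
        return nodes.getLength() == 0 ? null : nodes.item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        SessionAwareService svc = new SessionAwareService();
        svc.onMessage("s-1", "cart", "3 items");
        svc.onMessage("s-2", "cart", "empty");
        System.out.println(svc.activeSessions());      // 2
        System.out.println(svc.lookup("s-1", "cart")); // 3 items
    }
}
```

A single event loop can service every Session this way, avoiding the per-Session thread overhead the passage contrasts against.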
  • Daemon Services may constantly perform a task for a single "eternal" Session. Daemon Services may be considered to be semi-mission-critical; as such, these particular Services may be managed differently by the associated Realm. For example, in the case where one of these Services fails for any reason, spawning a new instance of such a failed Service may receive the highest priority. Again, as with SessionAware Services, there is no issue of persisting data, and consequently, spawning a new Daemon Service may be virtually instantaneous.
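  • The high-priority respawn behavior for a failed Daemon Service can be sketched as a supervision pass; because no Session data needs to be persisted, a replacement is near-instant. All names here are illustrative assumptions:

```java
// Sketch: a Realm keeping a Daemon Service alive. On each supervision pass,
// a failed (or missing) daemon is replaced immediately with a fresh instance.
public class DaemonWatchdog {
    interface DaemonService {
        boolean isAlive();
        void performTask();
    }

    static int respawnCount = 0;

    static DaemonService spawn() {
        respawnCount++;
        return new DaemonService() {
            public boolean isAlive() { return true; }
            public void performTask() { /* the daemon's "eternal Session" work */ }
        };
    }

    // One supervision pass: replace the daemon only if it has failed.
    static DaemonService supervise(DaemonService d) {
        return (d == null || !d.isAlive()) ? spawn() : d;
    }

    public static void main(String[] args) {
        DaemonService d = supervise(null);   // boot: first instance spawned
        d = supervise(d);                    // healthy: same instance kept
        System.out.println(respawnCount);    // 1
    }
}
```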
  • Fig. 2 illustrates a Registry component as element 240.
  • Registry 240 may control and maintain the publish/subscribe capabilities of HDA Server 200. Since HDA Server 200 utilizes a loosely coupled architecture, inter-component interaction need not be hardcoded; in fact, in one desirable embodiment, the various components are not aware of any other entity except Registry 240.
  • Registry 240 may be implemented around a replicated Lightweight Directory Access Protocol (LDAP) server. LDAP is fast and efficient, and as a result, assists HDA Server 200 in allocating system resources quickly and economically. Such an arrangement also allows HDA Server 200 to make use of the advanced security features of the Java Naming and Directory Interface (JNDI) which communicates with LDAP.
  • Figure 3 is a simplified block diagram of one embodiment of a Registry component employed by an HDA Server.
  • the Registry component 240 of Fig. 2 may be generally constituted by four components: Registry 340; Antenna 343; Multicaster 344; and Directory Interface 341.
  • Antenna 343 may serve to receive published events and to convey or to direct those published events to Registry 340; in that regard, the term "antenna" is intended to be descriptive of the function, but not necessarily the structure, of Antenna 343.
  • Registry 340 may be the only access point to the LDAP server 349; this access may be through Directory Interface 341 (JNDI interface wrapper). Registry 340 may store all published events and a list of all subscribers for each event in LDAP server 349.
  • Registry 340 may function as a sort of "central nervous system” for the HDA system of the present invention. Registry 340 may act as an interface to a private set of objects which provide runtime data management Services.
  • Registry 340 may aggregate three components shown in Fig. 3 (i.e. Antenna 343, Multicaster 344, and Directory Interface 341) and may manage data communication between them.
  • Registry 340 may receive event names from Antenna 343 and transfer those event names to Directory Interface 341 for comparison with a list of subscribers maintained in LDAP server 349; the list of subscribers, in turn, may then be transmitted to Multicaster 344 for subsequent transmission of data to all subscribers.
  • Registry 340 may also receive binding information from ORB 342 or Multicaster 344 and transmit that information to LDAP server 349 through Directory Interface 341.
  • Antenna 343 may be a "well-known" recipient of published events, i.e. Antenna 343 may be a subscriber to every system event.
  • events may be published by any component bound to Registry 340 (typically a Service or a Transmitter, discussed below) through Antenna 343. That is, when Antenna 343 receives an event, it may extract the event's name and transfer it to Registry 340. If there is at least one subscriber to the event, Registry 340 may add the event's data to the Message object to be sent through Multicaster 344 to subscribing components.
  • any component may be able to use the Service or the Transmitter to publish events by proxy; the Server Gateway 270 in Fig. 2 and discussed below, is a subclass of a Transmitter.
  • In one embodiment of an HDA system connected to an external network, such as the Internet, for example, most event publishing may be issued by Server Gateway 270.
  • FIG 4 is a simplified flow chart illustrating one embodiment of the operation of a Registry component of an HDA Server.
  • an event may be published, for example, by the Server Gateway 270 illustrated in Fig. 2.
  • The event arrives at Antenna 343, which may separate the event's NAME from its DATA and delegate the NAME to Registry 340 (illustrated at blocks 402-404).
  • Registry 340 may query LDAP Server 349 through Directory Interface 341 to obtain a list of subscribers to the event; if at least one subscriber is found, Registry 340 retrieves the event's DATA and adds it to the Message object (block 406) which is to be multicast to all subscribers; Registry 340 then may pass this Message object to Multicaster 344 at block 407. Finally, Multicaster 344 transmits the full event (i.e. the same Message object that was received by Antenna 343) to all subscribers simultaneously, as noted at block 408. It will be appreciated that the foregoing sequence of events is provided by way of example only, and that other embodiments or configurations of the elements shown in Fig. 3 may be desirable. For example, Antenna 343 and Multicaster 344 may be combined into a single, dual-function transceiver element.
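  • The publish/subscribe cycle of Figs. 3 and 4 can be sketched as follows, with the subscriber directory held in an in-memory Map standing in for the LDAP server, and the Antenna/Multicaster roles collapsed into one class. All names are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of the Registry flow: receive an event (Antenna role), look up its
// subscribers in the directory, and deliver the full event to each of them
// (Multicaster role).
public class Registry {
    // event name -> subscribers; each subscriber receives (name, data)
    private final Map<String, List<BiConsumer<String, String>>> directory = new HashMap<>();

    public void subscribe(String eventName, BiConsumer<String, String> subscriber) {
        directory.computeIfAbsent(eventName, k -> new ArrayList<>()).add(subscriber);
    }

    // Split the event into NAME and DATA, find subscribers, multicast.
    // Returns the number of deliveries made.
    public int publish(String eventName, String data) {
        List<BiConsumer<String, String>> subs =
                directory.getOrDefault(eventName, List.of());
        for (BiConsumer<String, String> s : subs) s.accept(eventName, data);
        return subs.size();
    }

    public static void main(String[] args) {
        Registry registry = new Registry();
        registry.subscribe("login", (name, data) ->
                System.out.println("service-A got " + name + ": " + data));
        registry.subscribe("login", (name, data) ->
                System.out.println("service-B got " + name + ": " + data));
        registry.publish("login", "<user id='42'/>");
    }
}
```

An event with no subscribers is simply dropped, matching the passage's rule that DATA is only attached when at least one subscriber exists.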
  • FIG. 5 is a simplified flow chart illustrating one embodiment of the data flow through the Adaptive Messaging Services component.
  • the Adaptive Messaging Services component 250 of Fig. 2 may be generally constituted by computer programming code generating data Message objects to be distributed throughout HDA Server 200; this programming code is represented by the MessageFactory 551 in Fig. 5.
  • the Server Gateway 270 of Fig. 2 is illustrated in Fig. 5 as communicating with Session 571.
  • Message objects in the HDA Server system may generally be considered containers and transport vehicles for XML/DOM documents.
  • a Message object may be created by any Transmitter through MessageFactory 551.
  • Message objects are moveable objects, i.e. they are ORB agents which the Registry and Transmitters may send to other objects.
  • Messages may communicate with objects which implement an appropriate interface, for example, which enables the objects to interpret Message objects from MessageFactory 551; this interface is referred to as a MessageListener interface.
  • an incoming message from a remote location may contain XML data as shown in Fig. 5.
  • This XML data arrives at Server Gateway 270, which may transmit the received XML data to Session 571.
  • Session 571 may translate the XML to DOM (creating a DOM document);
  • Session 571 is preferably a Transmitter object, i.e. Session 571 may call a function of MessageFactory 551 to create a new Message object.
  • the data format employed by MessageFactory 551 may be a proprietary markup language, or any other desirable data format such as XML or DOM, for example, at least for data communications internal to the HDA Server of the present invention. It is within the scope and contemplation of the present invention to make use of any and all suitable markup languages for internal data transfer.
  • the Transaction Services Manager component 260 may maintain and manage various transactional interactions between the several components of HDA Server 200.
  • the following discussion provides an introduction to Transactions.
  • all Transactions may share the following properties (referred to in the art as ACID): Atomicity; Consistency; Isolation; and Durability.
  • Atomicity: any indivisible operation (one which will either complete fully or not at all) is said to be atomic.
  • Consistency: A Transaction must transition persistent data from one consistent state to another. If a failure occurs during processing, the data must be restored to the state it was in prior to the incomplete or failed Transaction.
  • a Transaction may end in one of two ways: a "commit," which represents the successful execution of each step in the Transaction; or a "rollback," which guarantees that none of the steps take effect due to an error in one of those steps.
  • the HDA system of the present invention may employ Java Transaction API (JTA) to control Transactions. Transactions may be critical to several operations in the HDA Server.
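The commit/rollback behavior described above may be illustrated with a minimal in-memory sketch. This is not the JTA API itself (which the HDA system may employ), only a demonstration that a Transaction either applies every step or leaves persistent data untouched; the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of commit/rollback semantics: a Transaction either
// commits every step or leaves the persistent store unchanged.
public class TxSketch {
    private final Map<String, String> store = new HashMap<>();

    // Apply all updates atomically: stage them on a copy, commit on
    // success, roll back (discard the staged copy) if any step fails.
    public boolean applyAll(Map<String, String> updates) {
        Map<String, String> staged = new HashMap<>(store);
        try {
            for (Map.Entry<String, String> e : updates.entrySet()) {
                if (e.getValue() == null)            // a failing step
                    throw new IllegalStateException("step failed");
                staged.put(e.getKey(), e.getValue());
            }
            store.clear();
            store.putAll(staged);                    // commit
            return true;
        } catch (RuntimeException ex) {
            return false;                            // rollback: store unchanged
        }
    }

    public String get(String key) { return store.get(key); }

    public static void main(String[] args) {
        TxSketch tx = new TxSketch();
        Map<String, String> good = new HashMap<>();
        good.put("subscriber", "customer-42");
        System.out.println(tx.applyAll(good));       // commits
        Map<String, String> bad = new HashMap<>();
        bad.put("subscriber", null);                 // forces rollback
        System.out.println(tx.applyAll(bad));
        System.out.println(tx.get("subscriber"));    // still "customer-42"
    }
}
```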
  • the persistence of subscriber information in the LDAP server is critical, and that action is Transactional; maintaining this persistence may be the responsibility of the Directory Interface.
  • Some database updates may be critical, and are therefore Transactional.
  • some processes in the OS Realm are also Transactional; for example, booting the system and the migrative process of various components (both of which are discussed below in detail) may both be considered Transactional.
  • Plugins are components which may be "plugged in" to an HDA system configured in accordance with the present invention.
  • the loosely coupled architecture of an HDA system allows a Plugin to be installed while the HDA Server is up and running.
  • all such Plugin components may be installed, for example, through a management console or other installation interface (referred to as a Management Console, or MC), in such a way that the HDA Server OS may load each Plugin's respective program elements and attach each to the proper Services in the correct Realm. Plugins may also be removed and modified without any effect on system performance.
  • Plugins may be installed through the use of a computer programming script or installation software code providing a suitable interface, called a Pluginlnstaller interface.
  • Plugins may be installed into the Application Realm, and a suitable installer program may be, for example, Applnstaller discussed above.
  • An actual Plugin will typically extend the abstract Plugin class and optionally implement a SessionAwarePlugin interface.
  • a Plugin may be treated by the system as a resource capable of handling requests via its implementation of a RequestHandler interface.
  • a SessionAware Plugin may keep state between consecutive calls to handle request.
  • Plugins and their implementation are known in the art.
  • the following example provides a detailed description of a Plugin installation procedure; this description assumes that the Plugin to be installed is already in a recognized location, for example, PLUGIN_PATH.
  • a Plugin manager software code or script (for example, program code referred to as a PluginManager object) in the OS Realm may un-jar (or uncompress) the jar file containing the Plugin and store the uncompressed data into a temporary directory created for this particular Plugin.
  • the PluginManager may first read security information.
  • the Plugin may be assigned certain restrictions. For example, the restricted Plugin may not be able to read or to write files, or it may be restricted with respect to memory allocation.
  • the default security profile for all Plugins may be <fully-restricted>.
  • the PluginManager may then transfer a Plugin descriptor to the installer specified in the <installer> element of the Plugin descriptor (such an <installer> element may not be required, since, for example, a default installer may be provided for the
  • the AppInstaller object may publish an event, for example a <new-plugin> event, to alert other elements of the HDA Server that a new Plugin is being made available.
  • a Realm management software script is, in this example, referred to as RealmManager.
  • RealmManager keeps handles to all Realms and is further subscribed to the <new-plugin> event.
  • RealmManager may be apprised of the publication of the event by AppInstaller.
  • RealmManager may read the DOM document of the event (the <realm> element of the Plugin descriptor), and may return a handle to the Application Realm.
  • the AppInstaller may then move to the Application Realm (the AppInstaller is an ORB agent and may move to any known address space). In conjunction with the move, AppInstaller may bring copies of all the necessary resources (the Plugin jar file and any resource defined by the <resource> element, for example).
  • the Application Realm may then be notified of the arrival of AppInstaller. (In this regard, Application Realm may implement an interface through which it may be notified of the arrival of, and communicate with, the AppInstaller).
  • AppInstaller may then transfer the desired <plugin> node from the Plugin descriptor to the Application Realm's notifyPluginManager(Node, String) program script, thus transferring control to the Application Realm.
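The descriptor-driven hand-off described in the preceding steps may be sketched with standard Java DOM parsing. The descriptor elements (<plugin-type>, <realm>, <installer>) follow the terminology of this description, while the DescriptorSketch class, its element() helper, and the sample descriptor contents are hypothetical.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

// Illustrative sketch of reading a Plugin descriptor as a DOM document,
// as the installers and RealmManager described above might do.
public class DescriptorSketch {
    static final String DESCRIPTOR =
        "<plugin>"
        + "<plugin-type>application component</plugin-type>"
        + "<realm>Application</realm>"
        + "<installer>AppInstaller</installer>"
        + "</plugin>";

    // Parse the descriptor into a DOM document and read one element's text.
    static String element(String xml, String tag) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // A RealmManager-like component reads <realm> to locate the proper
        // Realm; the <installer> element names the installer to delegate to.
        System.out.println(element(DESCRIPTOR, "realm"));      // Application
        System.out.println(element(DESCRIPTOR, "installer"));  // AppInstaller
    }
}
```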
  • Application Realm may call createPluginDescriptor(Node, String) on PluginManager and transfer the <plugin> node as the DOM document.
  • PluginManager may then load the resources specified in the <resources> element.
  • PluginManager may use java.util.jar.JarFile to load the classes of the "required" jar files. Additionally, a mechanism may be provided for "lazy" or "as needed" initialization of resources.
  • PluginManager may attach the Plugin to the specified Service type (for example, a SessionAware Service). Additionally, PluginManager may call the TopologyManager (part of the HDA Server OS, discussed below) to find the best VM (e.g. the VM with the most available resource capacity) in the Application Realm for installation of the Plugin.
  • PluginManager may load and initialize the Plugin.
  • the Plugin then may finish initialization by loading and initializing its Formatter and Transmitter.
  • the Plugin may then call PluginManager's function devoted to updating the Registry; as a result, PluginManager may notify the Registry, transmitting all the data provided by the Plugin.
  • the Plugin's internal state may be set to running, at which time the Plugin is open to receive incoming events. It should be noted that the foregoing example of Plugin installation is a Transactional process, i.e. guaranteed either to succeed without a single fault or to fail completely.
  • Applications are entities which form the "business logic" of the HDA Server.
  • an Application may generally be composed of a set of Java objects and XML descriptors.
  • all Applications may share the same contract of mandatory and optional objects and a common set of descriptors.
  • the contract is designed to allow Applications to be installed automatically, deployed, and activated in the system simply by placing a compressed jar file containing all the required resources in a particular recognizable location.
  • all Application resources may be found in a particular path, e.g. APPLICATION_PATH, which may be managed by the AppInstaller (a component of the OS Realm as described above).
  • the contract may also allow different Applications to communicate with each other and to share data.
  • the life cycle of an Application may begin upon loading the jar file that contains all the resources for the particular Application to the APPLICATION_PATH.
  • This loading may be accomplished through an MC as described above with reference to Plugins in general.
  • the MC may be command-line based, for example.
  • Installation program code, such as AppInstaller, may learn of the existence of the new Application (discussed in detail below) and subsequently decompress the jar file into all relevant locations across all relevant address spaces.
  • the deployment topology may be defined in the XML descriptor files packaged with the Application. After deployment has been successfully executed, AppInstaller may notify the Registry that a new Application is available.
  • AppInstaller publishes an event indicative of the installation, for example, a "NEW_APPLICATION" event, to the Registry.
  • the Registry may notify the subscribers to this event (an example of which may be the Application Realm's Receiver Service, described below) that this event has occurred.
  • the process may be similar every time the system boots.
  • FIG. 6 is a simplified flow chart illustrating one embodiment of the procedures for runtime activation and initialization of Application software employed by an HDA Server.
  • the Receiver gets a notification from the Registry that an event, for example, NEW_APPLICATION, has occurred, at which point the Receiver may request the DOM document describing the Application from the originator of the event, for example, AppInstaller (this request is reflected in block 602).
  • the Receiver may create an Application Descriptor (WADescriptor) object containing a descriptive DOM document at block 603, and may subsequently transfer the WADescriptor to an Application Manager (WAManager) at block 604.
  • the WAManager may initialize the "main" class for an Application object (block 605) before the Application object begins its own initialization (block 606).
  • the first step in the initialization process involves the Application (subclass) object reading the WADescriptor and initializing its various components (block 607). Sub-components (contained) are initialized as shown at block 608. Those of skill in the art will appreciate that the Application object may manage any number of other objects. This object aggregation deals with application behavior and may preferably be exclusively Java based. Additionally, the Application's Formatter object is initialized at block 609 (the Formatter object attached to the Application may generally be a "contractual" container for the Application's formatting data which is contained in its DOM document).
  • the Application also preferably contains Transmitter and Receiver objects. These objects may contain and maintain the data manipulation DOM document for the associated Application object. These components may form the part of the Application which may engage in system-wide "publish/subscribe" through the Registry as described above. Initialization of the Application's Transmitter and Receiver objects is illustrated at block 610.
  • An event, for example a <user-event> containing <customer_inquiry> data, arrives at the Server Gateway.
  • the event may arrive from a "known" customer, i.e. one who has a known customer ID.
  • the event's data may be contained in a DOM document (i.e. the Server Gateway may translate the XML received from the Hypertext Transfer Protocol (HTTP) Stream to DOM as described above).
  • the Server Gateway may open a new Session or join an existing Session;
  • Session format may be WAP, for example.
  • the Session may publish the event, stamped with a unique SessionID, to the Registry such that the Registry may multicast the event to all subscribers.
  • a CustomerService object may be one of the subscribers (or the only subscriber), so it will be notified of the event.
  • CustomerService, a subclass of Application, has a Transmitter object attached to it as described above with reference to Fig. 6. It will be appreciated that CustomerService previously had to use its Transmitter object in order to subscribe to an event (in this case the <customer_inquiry> event). CustomerService may delegate the event's DOM document to the appropriate EventListener.
  • an appropriate EventListener may be capable of "reading" the event's DOM document; the EventListener may then process the data contained in the DOM document according to the logic defined for this particular event.
  • EventListener may "publish" an internal event, for example <QueryEvent>, to which the Data Layer is subscribed, and ask for a receipt or confirmation, as described in the example below.
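The publish/subscribe flow through the Registry, as used by the EventListener above, may be sketched as follows. Only the Registry and EventListener terminology is taken from this description; the RegistrySketch class and its method names are assumptions.

```java
import java.util.*;

// Illustrative sketch of the Registry's publish/subscribe flow: a
// Transmitter subscribes a listener to an event name, and a published
// event is multicast to every subscriber.
public class RegistrySketch {
    interface EventListener { void handle(String eventData); }

    private final Map<String, List<EventListener>> subscribers = new HashMap<>();

    void subscribe(String eventName, EventListener l) {
        subscribers.computeIfAbsent(eventName, k -> new ArrayList<>()).add(l);
    }

    // Multicast the event's data to all subscribers; returns how many
    // subscribers were notified.
    int publish(String eventName, String data) {
        List<EventListener> list =
            subscribers.getOrDefault(eventName, Collections.emptyList());
        for (EventListener l : list) l.handle(data);
        return list.size();
    }

    public static void main(String[] args) {
        RegistrySketch registry = new RegistrySketch();
        List<String> seen = new ArrayList<>();
        registry.subscribe("customer_inquiry", seen::add);
        int n = registry.publish("customer_inquiry", "<customer_inquiry/>");
        System.out.println(n + " " + seen.get(0));
    }
}
```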
  • the <QueryEvent> logs the inquiry to the appropriate table in the database and asks to send a confirmation email back to the customer.
  • the Data Layer Realm's Structured Query Language Service (SQLService) may then update the database table(s) and field(s) indicated in the <query> element.
  • components in the Data Layer may process the instructions in the <finally> element.
  • the Data Layer sends a confirmation email to the customer using the template "inquiry-response.”
  • the Data Layer Realm may support SOAP for inter-application interaction and interactions with external software packages such as Exchange Server and others.
  • components in the Data Layer may return a receipt to the CustomerService's EventListener which originated the query as an indication of success or failure.
  • the EventListener then may transfer control to the Formatter, which in turn may send the appropriate confirmation Message object, in XML or an internal data communication dialect, to the Server Gateway for external transmission.
  • the Server Gateway may then redirect the data to the Soft-Spot Gateway shown in Fig. 2.
  • the SoftSpots may transform the internal communication markup, for example, into the appropriate client-side markup, which may be WAP/WML, HTML, XML, VoiceXML, and so forth.
  • the functionality of the system is markup language independent.
  • the HDA Server OS represents one embodiment of the infrastructure which provides operating system level Services to the HDA Server of the present invention.
  • the HDA Server OS may manage typical low level operating system tasks, such as: resource management; application instantiation; application updates; booting and shutting down; low-level messaging; robustness or fault-tolerance, for example, dealing with system failure (overload, unexpected VM exit, hardware failure, network failure, and so forth); and security and authentication.
  • FIG. 7 is a simplified block diagram illustrating one embodiment of an HDA Server OS.
  • the HDA Server OS 700 resides in the OS Realm 210 illustrated in Fig. 2.
  • the elements shown in OS Realm 210 are also shown in Fig. 7 as components of OS 700.
  • OS 700 may generally include a Migrator 701, a Packager 702, the AppInstaller 703 discussed above, a ServiceInstaller 704, a VMInstaller 705, a BootManager 706, a TopologyManager 707, a TopologyDescriptor 708, and VM Monitors 709. Examples of the general operation of each of these components are set forth below.
  • OS 700 is a unique operating system.
  • OS 700 is a high-level operating system designed specifically to be distributed; whereas typical operating systems are designed to operate within particular hardware constraints, the distributed architecture of OS 700 enables the OS to recognize available "hardware" as only a set of VMs.
  • OS 700 may maintain a Topology list of all available and dormant VMs; the Topology list may be changed dynamically.
  • OS 700 employs a VMMonitor 709 which monitors the VM's usage (load, memory, performance).
  • VMMonitor 709 may be in the form of a static Java class that may be "planted" by OS 700 at boot time (or any time a VM is spawned). VMMonitor 709 may advantageously use Java's Runtime information, but other embodiments are within the scope of the invention; for example, VMMonitor may employ the more advanced profiling available in version 1.2 of the Java Development Kit (JDK1.2). Importantly, VMMonitor 709 is a Transmitter, i.e. it may publish low-level events such as <memory-threshold-exceeded> or <load-critical>, which events may then be addressed by other components of OS 700.
  • a VMMonitor 709 is a "dumb monitor," which means it never takes action.
  • Another important feature of VMMonitor 709 is the fact that it serves as an anchor (or a port) to the actual JavaVM it monitors. In one embodiment, it may be desirable that only one VMMonitor 709 be assigned per JavaVM.
  • Every VMMonitor 709 may have a unique ID which serves as identification with respect to the rest of the OS. This ID also serves as the entry for an individual VMMonitor 709 in the Registry, so that each VMMonitor 709 may be called by other objects. Any object having a reference to VMMonitor 709 may call it to request realtime information related to the JavaVM it represents. For example, an object may call getAvailableMemory() or getLoad(). Generally, a VMMonitor 709 may be non-migrative (Migration is discussed in detail below) and may not be serializable.
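A VMMonitor's use of Java's Runtime information, mentioned above, may be sketched as follows. The method names getAvailableMemory() and getLoad() are taken from this description, but their bodies (and the load heuristic based on heap usage) are assumptions for illustration only.

```java
// Illustrative sketch of a VMMonitor obtaining real-time information
// about the JavaVM it represents via Java's Runtime class.
public class VMMonitorSketch {
    private final Runtime rt = Runtime.getRuntime();

    long getAvailableMemory() {
        return rt.freeMemory();              // bytes currently free in this VM
    }

    // A crude load figure: the fraction of the current heap in use.
    double getLoad() {
        long total = rt.totalMemory();
        return (double) (total - rt.freeMemory()) / total;
    }

    public static void main(String[] args) {
        VMMonitorSketch monitor = new VMMonitorSketch();
        System.out.println("available bytes: " + monitor.getAvailableMemory());
        System.out.println("load: " + monitor.getLoad());
    }
}
```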
  • the term VM may not necessarily be intended as a specific reference to the JavaVM directly. That is, when reference is made to a VM, operation of the VMMonitor 709 is generally implicated.
  • Topology objects may be used collectively to define a landscape of PMs and VMs.
  • Topology management is crucial to the proper functioning of the HDA Server OS, since both Migration and Installation depend upon the current Topology.
  • Topology is not a static concept; rather, in a running HDA Server OS, Topology changes constantly as new PMs and VMs are added or removed from the system architecture.
  • the data managed by the TopologyManager 707 is mission-critical.
  • a current snap-shot of the available Topology may preferably persist and be maintained in the LDAP server (discussed above with reference to Fig. 3) so that the Topology may easily be reconstructed if the TopologyDescriptor 708 fails at any time during runtime.
  • Topology data may be contained in a TopologyDescriptor object 708.
  • Figs. 8A and 8B illustrate an XML representation of one embodiment of a TopologyDescriptor employed by an HDA Server OS. Though represented in XML form in Figs. 8A and 8B, TopologyDescriptor 708 may actually be maintained as a DOM document.
  • the ⁇ vm> element actually refers to a VMMonitor object, such as VMMonitor 709, and not a JavaVM.
  • the TopologyDescriptor object 708 holds and maintains Topology data (as a DOM document).
  • New data such as a newly available resource (i.e. a new VM or host PM), for example, may be dynamically added through the Management Console (MC).
  • the MC may notify TopologyManager 707 of the availability of the new resource, whereupon TopologyManager 707 may call the installer, for example VMInstaller 705, to install the resource and its VMMonitor 709. After the VMMonitor 709 is installed for each new resource, TopologyManager 707 may register each new resource with the Registry and add the entry describing the resource into TopologyDescriptor 708.
  • TopologyManager 707 may also maintain a list of dormant VMs, i.e. VMs which are not presently active but which may be spawned at runtime by request.
MIGRATION
  • the OS may enable Migration and Adaptivity.
  • Migration means that running application components may be selectively moved to a different address space (i.e. to a different PM or VM) at runtime.
  • the entire HDA Server may move to new hardware resources while running.
  • Migrator object 701 is different from an Installer.
  • Migrator 701 may specifically handle the moving or migration of "living" runtime objects that are currently known and even presently in use by the HDA Server.
  • Installers on the other hand (such as Applnstaller 703, Servicelnstaller 704, and VMInstaller 705, for example) may only move un- initialized (i.e. presently unknown and unusable) application components.
  • Migration may be achieved through rigorous Topology management and resource allocation.
  • the HDA Server OS 700 manages Migration on the Topology assigned to it. Migration takes into account the instantaneous state of each VM at the moment the MigrationRequest object is published.
  • TopologyManager 707 may maintain a list of available and dormant VMs. This list may be dynamically sorted by resource consumption; in such an embodiment, for instance, the first VM on the list may be the least active, and consequently the least loaded VM in the Topology, whereas the last VM on the list may be the VM with the highest load, and consequently the fewest available resources.
  • Migrator 701 may attempt to migrate application components first to those VMs identified as relatively "empty" or with very little load. Migrator 701 may also calculate in advance the implications of a prospective Migration. For example, Migrator 701 may not migrate an application component to a VM which is currently (pre-Migration) at a 55% load if the contemplated Migration may cause the VM to be at a 65% load (post-Migration) when the load threshold for that VM is only 60%.
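The target-selection logic described above, including the 55%/60% threshold example, may be sketched as follows; the VM record, the pickTarget() helper, and all figures are hypothetical.

```java
import java.util.*;

// Illustrative sketch of Migration target selection: VMs are sorted by
// load, and a candidate is rejected when the migrated component's load
// would push it past its threshold.
public class TargetSketch {
    static final class VM {
        final String id;
        final double load, threshold;
        VM(String id, double load, double threshold) {
            this.id = id; this.load = load; this.threshold = threshold;
        }
    }

    // Pick the least-loaded VM that can absorb extraLoad without
    // exceeding its threshold; null when no VM qualifies (the OS may
    // then spawn a dormant VM instead, as described in the text).
    static VM pickTarget(List<VM> vms, double extraLoad) {
        return vms.stream()
            .sorted(Comparator.comparingDouble(vm -> vm.load))
            .filter(vm -> vm.load + extraLoad <= vm.threshold)
            .findFirst().orElse(null);
    }

    public static void main(String[] args) {
        List<VM> topology = Arrays.asList(
            new VM("vm-a", 0.55, 0.60),   // the example from the text:
            new VM("vm-b", 0.30, 0.60));  // 55% + 10% would breach 60%
        VM target = pickTarget(topology, 0.10);
        System.out.println(target.id);    // vm-b
    }
}
```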
  • Migrator 701 may request the OS 700 to spawn new VMs dynamically and to use the newly-spawned VMs for accommodating the migrated load.
  • Migration may advantageously be managed in two ways during runtime: automatically, i.e. initiated and controlled completely by OS 700; and through the MC discussed above, i.e. with human intervention. Migration may generally occur at any given time, when desired. This is so because the HDA Server itself may never be apprised of the machines (PMs and VMs) on which it is running; in this embodiment, the HDA Server OS 700 takes care of hiding those details from the HDA Server. Moving objects from one VM to another across the network is practically instantaneous; the object "stub" at its original location may preferably stay active until the object clone at the new location is ready, at which time the original object is freed to be "garbage collected" and its resources become available to other system components.
  • the HDA Server OS 700 acknowledges resource limitations.
  • the VMMonitor 709 may evaluate any Migration request and automatically build an internal list of Migration targets. This list can only be built by elements of OS 700. In other words, an operator cannot decide the order in which components should or will Migrate (unless such an operator were to issue individual MigrationRequest objects for a single component at a time). This is so because OS 700 is the only entity which has runtime information about all the resources available for Migration, as well as load and memory information.
  • a MigrationManager (the Service that manages Migrator objects) may begin executing the Migration in order, spawning new Migrator instances for each object to be moved. Again, Migration happens while a component or object is actually running; end-users will not be noticeably affected.
  • the Migrator object may de-register the old object and register the new object in the Registry.
  • ORBs are suitable for moving objects.
  • Object O1 runs on VM1.
  • ORB uses VM1 to create an exact copy of O1's current state when the MigrationRequest is issued (this process is commonly called marshalling).
  • ORB creates a new instance of O1 (O2) on VM2 and initializes it to the same state as O1.
  • ORB frees O1 to be garbage collected (i.e. O1 dies).
  • O2 is an exact copy of O1 on a different VM (which may be anywhere on the network).
  • Upon O2's initialization, all requests to O1 are immediately re-routed to O2.
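The marshalling sequence above may be sketched using ordinary Java serialization as a stand-in for the ORB's copy mechanism. In the HDA system O2 would be created on a different VM; this sketch copies state within a single process, and all class and field names are hypothetical.

```java
import java.io.*;

// Illustrative sketch of marshalling: an exact copy of O1's current
// state is produced and a new instance O2 is initialized from it.
public class MarshalSketch {
    static class StatefulObject implements Serializable {
        private static final long serialVersionUID = 1L;
        int counter;                       // the "current state" of O1
    }

    static StatefulObject migrate(StatefulObject o1) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(o1);               // marshal O1's state
        out.flush();
        ObjectInputStream in = new ObjectInputStream(
            new ByteArrayInputStream(bytes.toByteArray()));
        return (StatefulObject) in.readObject();   // O2
    }

    public static void main(String[] args) throws Exception {
        StatefulObject o1 = new StatefulObject();
        o1.counter = 7;                    // state at MigrationRequest time
        StatefulObject o2 = migrate(o1);
        o1 = null;                         // O1 is freed for garbage collection
        System.out.println(o2.counter);    // 7: O2 carries O1's exact state
    }
}
```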
  • Figure 9A is an example of one embodiment of a Migration Request event.
  • Figure 9B is a simplified flow chart illustrating one embodiment of a Migration employed by an HDA Server OS.
  • From Fig. 9A it will be appreciated that there may be several Migration targets in any given MigrationRequest event, as indicated by the ". . . etc." line. Further, the order in which each <target> is listed in the event document is not important; MigrationManager may make an internal decision based upon each target's priority and statistics, as well as the current Topology and system load conditions.
  • the ⁇ migration-request> event is issued within a MigrationRequest object (a subclass of Message) at block 901.
  • the ⁇ migration-request> event may be routed by the Registry to the MigrationManager Service as shown at block 902;
  • MigrationManager provides a new instance of a Migrator object (element 701 in Fig. 7) at block 903 to handle this ⁇ migration-request> (as noted above, Migrator 701 is an ORB agent, and is therefore, movable).
  • MigrationManager calls the TopologyManager's getMigrateableResources() method. This call may return a collection of VMMonitors (709 in Fig. 7), which may be sorted from least heavily loaded to most heavily loaded (this list may exclude those resources which are loaded beyond a predefined threshold).
  • MigrationManager may attempt to find the best VM for accommodating the Migration, i.e. the VM which has the lightest load and can support the target (judging from the VM's statistics).
  • MigrationManager may request TopologyManager (707 in Fig. 7) to spawn a new VM (block 906) from a list of Dormant VMs maintained by TopologyManager 707.
  • MigrationManager may update the Migrator's DOM document with the <target-vm> element which may have been identified either in block 905 or 906.
  • the <target-vm> element may also be added to the <migration-request> document shown in Fig. 9A.
  • Migrator may move to the target object and optionally call its prepareToMigrate() method, if the object has been provided with one (block 908). This may be characterized as an optional step, because even in the case where an object does not have a prepareToMigrate() method, the object may still Migrate.
  • Migrator may return an error message to the MigrationManager, which in turn may notify the MC that Migration may fail.
  • Migrator may employ an ORB so as to move the target object to the VM (or more precisely, the VMMonitor) identified in the <target-vm> element.
  • Block 910 represents a "clean up" step of sorts, where the Migrator may notify the Registry of a successful relocation of the application component, and the ORB may free the target object from its original location to be garbage collected. The Migrator dies upon successful completion of the Migration, as represented at block 911.
  • the term "Packaging" in the HDA Server OS generally refers to the process of retaining data and object state so that both data and state may be retrieved and reconstructed in case of failure.
  • the Packager module 702 may also be in communication with the LDAP server 349, although in a slightly different way than the Registry 340.
  • the PackagingManager Service manages Packager objects which may be used by the HDA Server OS at critical moments. In effect, the PackagingManager may function very much like a specialized miniature Registry.
  • objects which request (or are required by the interfaces they implement, for example) to be packaged may register with PackagingManager and use that Service to retain their state in the LDAP server at critical moments.
  • Objects are retained by being serialized (Java serialization, for example, is known in the art) into XML. These objects may subsequently be de-serialized easily from XML. De-serialization may also be managed by Packager 702.
  • the HDA Server OS may iterate through all the serialized objects which need recovery and may then call Packager 702 to re-create each object with the state it had prior to the failure.
  • a process of Packaging an object's state may proceed substantially as follows.
  • the TopologyManager may register at the PackagingManager Service to enable subsequent packaging when desired. After registration, the TopologyManager may send a <packaging-request> event to the PackagingManager, at which time the PackagingManager may provide a new instance of a Packager object and pass (to the Packager object's constructor) a reference to the TopologyManager. The Packager may then serialize the TopologyManager into XML and save the resulting XML to the LDAP server.
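The Packaging round trip described above, serialization of an object's state into XML and later reconstruction, may be sketched with java.beans.XMLEncoder and XMLDecoder as a stand-in for the Packager. The TopologySnapshot bean and roundTrip() helper are hypothetical, and a real Packager would persist the XML to the LDAP server rather than keep it in memory.

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

// Illustrative sketch of Packaging: an object's state is serialized
// into XML so it can be reconstructed after a failure.
public class PackagerSketch {
    // A JavaBean standing in for a packaged object such as the
    // TopologyManager (public no-arg constructor plus getters/setters
    // are required by XMLEncoder).
    public static class TopologySnapshot {
        private int vmCount;
        public TopologySnapshot() {}
        public int getVmCount() { return vmCount; }
        public void setVmCount(int vmCount) { this.vmCount = vmCount; }
    }

    // Serialize a snapshot to XML and immediately de-serialize it,
    // returning the recovered state.
    static int roundTrip(int vmCount) {
        TopologySnapshot before = new TopologySnapshot();
        before.setVmCount(vmCount);
        ByteArrayOutputStream xml = new ByteArrayOutputStream();
        try (XMLEncoder enc = new XMLEncoder(xml)) {
            enc.writeObject(before);                 // serialize to XML
        }
        try (XMLDecoder dec = new XMLDecoder(
                new ByteArrayInputStream(xml.toByteArray()))) {
            return ((TopologySnapshot) dec.readObject()).getVmCount();
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(4));            // 4: state recovered
    }
}
```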
  • the HDA Server OS allows the HDA Server to install new application components during runtime. Under certain conditions, it may also be desirable to remove application components during runtime.
  • Install and un-install processes may be managed by the HDA Server OS using Installer objects.
  • the InstallerManager Service may maintain and launch the various Installers, such as elements 703-705 in Fig. 7, for example. All Installer objects may be ORB agents, i.e. they may be moveable as discussed above. Installer objects generally function in a fashion similar to that of Migrator objects detailed above; however, Installers work with application components which are not yet part of the HDA Server runtime.
  • all concrete Installers may be characterized as PluginInstallers.
  • Because only Plugin objects may be installed, each type of Installer may be considered a specialized case of PluginInstaller, and all Applications or application components may be considered a type of Plugin. While Applications, Plugins, application components, and Services may all be installed during runtime, Realms may not.
  • AppInstaller 703, ServiceInstaller 704, and VMInstaller 705 may be considered as part of the base class (PluginInstaller), and the components installed may be considered Plugins. Recalling the installation of a Plugin discussed above, the AppInstaller may transfer control of the installation to the Application Realm, which then delegates the process to PluginManager.
  • PluginManager may attach the Plugin to the specified ⁇ parent> element in the specified Service type (for example, a SessionAware Service: WAManager), and call the HDA Server OS's TopologyManager to find the best or most suitable VM for installation in the Application Realm.
  • PluginManager may load and initialize the Plugin.
  • the Plugin may finish initialization by loading and initializing its own Formatter and Transmitter.
  • the AppInstaller is an installer which may itself be a type of PluginInstaller. That is, AppInstaller may typically deal only with Applications in the Application Realm, whereas the base class (PluginInstaller) may deal directly with installing the application components which are not specifically self-contained Applications. Additionally, the AppInstaller may install components used by larger Applications or system-level applications (e.g. data access Services).
  • any Plugin i.e. Application, application component, VM, Service, and so forth
  • the "location" at which a given Plugin is to be installed is defined in the Plugin's descriptor file.
  • the actual success of installation or removal of the Plugin may ultimately depend upon runtime vetoing power exercised by the component to which the Plugin is to be attached. For example, a Service may reject a Plugin installation request.
  • the process for installing an application component may be almost identical to the Application installation process as set forth in detail above with reference to Plugins. To distinguish the type of installation at the system level, however, a minor difference may be provided in the Plugin descriptor for an application component, i.e. its <plugin-type> may be defined as "application component" and it may further include a <parent> component, which represents the Application to which the application component belongs.
  • PluginManager may call registerApplicationComponent(Plugin) (a method common to all application-level components in the HDA system) on the <parent> component.
  • the <parent> component may then add the new Plugin to its list of application components, and follow any instructions provided in the <parent> element.
  • a <sub-process> instruction may be provided in the Plugin descriptor, which may refer to the type of Services for which the parent may use the new Plugin.
  • a request which begins with the string "pag" may be understood to be a "pager" communication event, for example, and may be interpreted by the parent as delegable to the newly installed Plugin.
  • Another instruction, for example <propagate-events>, may mean that the new Plugin will subscribe to events through the parent component, and not independently. It will be appreciated that many different types of instructions may be provided in the <parent> component so as to govern interaction between the parent and the new component in a desired manner.
  • the parent component may then call PluginManager's updateRegistry(Plugin,
  • PluginManager may notify the Registry and pass along all the data provided by the Plugin.
  • this is a Transactional process, as discussed above
  • the Plugin's internal state is set to running, at which time it is open to receive incoming events.
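The application-component registration and prefix-based delegation described above (e.g. requests beginning with "pag" being handed to a pager component) might look roughly like this; the class and method names other than registerApplicationComponent are hypothetical:

```java
import java.util.*;

// The parent Application keeps a list of application components and may
// delegate requests whose prefix matches a component's <sub-process> rule.
class ParentApplication {
    private final Map<String, String> componentsByPrefix = new LinkedHashMap<>();

    // Analogous to registerApplicationComponent(Plugin) in the text;
    // the sub-process prefix stands in for the <sub-process> instruction.
    void registerApplicationComponent(String componentName, String subProcessPrefix) {
        componentsByPrefix.put(subProcessPrefix, componentName);
    }

    // Requests beginning with a registered prefix (e.g. "pag") are delegable
    // to the newly installed component; all others stay with the parent.
    String route(String request) {
        for (Map.Entry<String, String> e : componentsByPrefix.entrySet()) {
            if (request.startsWith(e.getKey())) return e.getValue();
        }
        return "parent";
    }
}
```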
  • New Services may also be installed during runtime.
  • Services may be considered Plugins, and therefore the installation process may be very similar to that detailed above. Services are very simple Plugins in terms of installation.
  • the ServiceInstaller (704 in Fig. 7) only targets the Realm into which the Service will be installed and updates the Registry. Since a new Service does not hold any Applications or application components, installation may be a very simple and straightforward process.
  • a minor difference may be provided in the Plugin descriptor for a Service, i.e. its <plugin-type> may be defined as "service" and it may further include a <ServiceInstaller> element, which may include a declaration attribute only applicable to Services.
  • PluginManager may be configured accordingly (PluginManager knows a Service is to be installed, since control of the installation was passed by the ServiceInstaller). PluginManager may then call registerService(Plugin) (a method which may be common to all Realms) on the <realm> component of the Plugin descriptor. The Realm may then add the Service to its list of available Services. The Realm may then read the "declare" attribute of the <ServiceInstaller> element discussed above.
  • the declare attribute refers to Realm-level declarations and relates to defining or "declaring" the Service to be a SessionAware Service or a Daemon Service.
  • the Realm may next call PluginManager's updateRegistry(Plugin, String,
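Service registration, as described above, amounts to a Realm recording the Service and its "declare" attribute. A minimal sketch under assumed names (only registerService echoes the text; the two declaration values stand in for SessionAware and Daemon Services):

```java
import java.util.*;

// A Realm registering a Service and reading its "declare" attribute
// from the <ServiceInstaller> element of the Plugin descriptor.
class Realm {
    final List<String> services = new ArrayList<>();
    final Map<String, String> declarations = new HashMap<>();

    // Analogous to registerService(Plugin); "declare" is either
    // "session-aware" or "daemon" in this sketch.
    void registerService(String serviceName, String declareAttribute) {
        if (!declareAttribute.equals("session-aware") && !declareAttribute.equals("daemon"))
            throw new IllegalArgumentException("unknown declaration: " + declareAttribute);
        services.add(serviceName);
        declarations.put(serviceName, declareAttribute);
    }
}
```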
  • the loosely coupled architecture of the HDA Server and OS allow for efficient resource management and runtime stability through effective implementation of VMs.
  • VMMonitors may be considered Plugins; in fact, VMMonitors are very simple Plugins in terms of installation.
  • Similar to the ServiceInstaller 704, the VMInstaller 705 only targets the Realm into which the VMMonitor (and its JavaVM) will be installed. Since VMs are not registered (i.e. they are "dumb" monitors), the Registry need not be updated with each installation or removal; further, no involvement of PluginManager is required in the VMMonitor installation process.
  • the process begins with the HDA Server OS making a system call which spawns a new VM process (i.e. in the form of "Java VMInstaller").
  • the process ends with the updating of the TopologyDescriptor.
  • the Plugin descriptor for a VMMonitor may be provided with a <plugin-type> of "vm."
  • the runtime installation may generally be initiated by TopologyManager (at boot time, the installation may be initialized by BootManager, discussed below).
  • VMInstaller may spawn a new JavaVM by calling "Java VMMonitor" (this is a system call).
  • VMInstaller may then load and initialize the VMMonitor and call the HDA Server OS to update the TopologyManager on the newly available VM.
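The VM installation steps above reduce to two actions: issuing a system call of the form "java VMMonitor", and recording the new VM in the topology. The sketch below is illustrative only (it builds the command rather than launching it, and models the TopologyDescriptor as a plain list):

```java
import java.util.*;

// Sketch of a VMInstaller spawning a new JavaVM and reporting the new
// VM for the topology update.
class VMInstaller {
    // Builds the system command; a real installer would run this via
    // ProcessBuilder.start() to spawn the new JavaVM process.
    static List<String> spawnCommand(String monitorClass) {
        return Arrays.asList("java", monitorClass);
    }

    // After the spawn, the TopologyDescriptor is updated with the new VM.
    static List<String> updateTopology(List<String> topology, String vmAddress) {
        List<String> updated = new ArrayList<>(topology);
        updated.add(vmAddress);
        return updated;
    }
}
```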
  • Booting the HDA Server system is a complex process; boot-up generally may be characterized as having two distinct phases.
  • booting requires setting up the HDA Server OS on a set of PMs and VMs and readying the HDA Server OS for the applications which run on it.
  • the HDA Server OS spreads its Realms across the resources available to it and hooks into the Management Console discussed above. As far as the HDA Server OS is concerned, this is the end of the boot process, even though Services, Applications, and other Plugins may not yet be initialized.
  • the second phase of booting involves the Application Layer, i.e. installing the various Plugins (Services, Applications, and application components). The end of the second phase of the boot process occurs when the Server Gateway and one or more optional gateways are attached to the HDA Server.
  • BootManager may be a Java application which preferably exits as soon as it has finished booting the HDA Server OS.
  • Figure 10 illustrates one embodiment of an svcs.xml file which may govern the boot process for an HDA system.
  • Figure 11 is a simplified flow chart illustrating one embodiment of an HDA system boot process. The following description, with reference to Figs. 10 and 11, is provided by way of example only, and not by way of limitation.
  • an administrator or other operator may first prepare a running ORB server on each PM which will be available to the HDA system.
  • booting may begin in the normal Java command-line fashion, for example:
  • BootManager may first obtain and follow the instructions provided in the svcs.xml file (one example of which is shown in Fig. 10).
  • the svcs.xml file defines the system-level Services which need to be initialized external to the Realms, for example the Registry, the MessageFactory, the TransactionFactory, and the Server Gateway.
  • the initialization of these Services by the BootManager is represented at block 1102.
  • BootManager may initialize a TopologyManager and build a TopologyDescriptor from the TopologyDesc.xml file (or any file that was passed to it as its "topology" command-line argument).
  • An example of a runtime TopologyDescriptor is shown in Figs. 8A and 8B; in a boot-time TopologyDescriptor, IDs may not have been assigned to the VMs in the case where the XML file does not contain VM IDs.
  • TopologyManager may then initialize the various OS Realm Services, i.e. Migrator, Installers, and Packager, and register these Services and itself with the Registry.
  • BootManager may publish a <boot> event to the Registry at block
  • TopologyManager may begin deploying VMInstallers according to the addresses in the TopologyDescriptor. Once the VMs have been installed successfully, the TopologyManager may assign Realms to the VMs and register the Realms with the Registry (blocks 1106 and 1107).
  • TopologyManager publishes a <boot-one-complete> event (block 1108). At this moment, as indicated at block 1109, phase one of the boot process is complete.
  • Phase two of the boot process may begin when the BootManager receives the <boot-one-complete> event.
  • BootManager may publish an <initialize-Realms> event; all Realms may preferably be subscribed to this event by default. Next, Realms begin initialization.
  • BootManager may launch all the Plugins and Applications (block
  • software is launched in the following order: Applications; other Plugins; and Resources. Finally, all Plugins and Applications which require registration take action to register themselves in the Registry, as indicated at block
  • phase two of the boot process is complete (block 1125).
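The two-phase boot flow of Figs. 10 and 11 can be summarized as an ordered event log. The step strings below are paraphrases of the description, not literal system events:

```java
import java.util.*;

// Illustrative two-phase boot sequence, following the flow of Fig. 11.
class BootSequence {
    static List<String> run() {
        List<String> log = new ArrayList<>();
        // Phase one: OS-level setup.
        log.add("init system-level Services (svcs.xml)");
        log.add("init TopologyManager / TopologyDescriptor");
        log.add("publish <boot>");
        log.add("deploy VMInstallers, assign Realms");
        log.add("publish <boot-one-complete>");
        // Phase two: Application Layer.
        log.add("publish <initialize-Realms>");
        log.add("launch Applications, Plugins, Resources");
        log.add("register Plugins/Applications in Registry");
        return log;
    }
}
```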
  • Figure 12 is a simplified block diagram illustrating one embodiment of a Server Gateway which may be employed by an HDA Server.
  • HDA Server 200, Server Gateway 270, and Registry 240 correspond to the elements shown in Fig. 2; operation of Server Gateway 270, Session 571, and MessageFactory 551 was set forth above with reference to Fig. 3. It should be noted that SoftSpot Gateway (element 280 in Fig. 2) has been omitted from Fig. 12 for clarity.
  • the Server Gateway 270 may contain the logic for managing Sessions 571 for the entire HDA Server 200.
  • a SessionManager at Server Gateway 270 may control the Session by allocating a Session object 571 to that connection.
  • the Session 571 is the interface between SoftSpot 281 and HDA Server 200. From the SoftSpot's perspective, the Session object 571 is the HDA Server 200; conversely, from the HDA system's perspective, the Session object 571 is the SoftSpot 281.
  • Two concrete classes may be provided for implementation of the Session functionality: the Session class; and the SecureSession class, which may provide additional infrastructure for validation of requests from a client such as SoftSpot 281.
  • Session 571 in HDA Server 200 may be managed at the Message level.
  • One advantage to this methodology is that, as the various components of HDA Server 200 are negotiating a query or a Transaction, each component may always pass the associated Session ID simultaneously with Message data so that all the data belonging to a specific client Session will remain constant.
  • a data Message having the appropriate Session ID appended thereto is denoted as element 572 in Fig. 12.
  • SoftSpot 281 may be a Servlet responsible for pumping events through Server Gateway 270 (in the form of Sessions 571) to HDA Server 200. As HDA Server 200 generates responses to those events, the responses go back to Session 571, and subsequently to the appropriate SoftSpot 281.
  • the HDA Server OS may be uninvolved in these activities until Session 571 shuts down or is otherwise terminated, at which point a SessionManager may execute clean-up work and reallocate system resources accordingly.
  • Server Gateway 270 wraps incoming events (which, as noted above, may be in XML or some other markup language) into a Message object 572 which contains a DOM representation of the original message data. Server Gateway 270 may then append additional information such as, for example, a SessionID, a client identifier (clientID), a seller identifier (sellerID), and so forth. Server Gateway 270 may also subscribe to the return event of Message 572. In another embodiment, Server Gateway 270 may be incorporated in an Enterprise Java Bean (EJB)-compliant architecture, for example.
  • SoftSpots 281 may be built around the Java Servlet API, for example. SoftSpots 281 may serve as "proxies" which may adapt the User Interface (UI) markup which is sent to the client's User Agent in real-time. For example, SoftSpots 281 may adapt or convert the "neutral" markup (SOAP, for instance) produced through HDA Server 200 to WAP/WML, VoiceXML, HTML, or any other XML dialect.
  • FIG 13 is a simplified flow chart illustrating the operation of one embodiment of a Server Gateway employed in conjunction with an HDA Server.
  • a SoftSpot holds a remote reference to the Server Gateway through an ORB.
  • SoftSpot asks the Server Gateway to create a new Session for this client (for example, by remotely calling the createNewSession() method on Server Gateway).
  • SoftSpot sends a request in an XML String to the Server Gateway, as indicated at block 1301 in Fig. 13. If this is a new Session, Server Gateway may spawn a new instance of a Session (from the pool); in every case (whether the Session is new or not), Server Gateway may relay the XML String to the Session responsible for this SoftSpot.
  • Session may call the static createMessage(String) on MessageFactory, which may then create a DOM representation of the XML String as a Message object (block 1303).
  • MessageFactory preferably adds a list of target "subscribers" to the Message object.
  • MessageFactory may stamp a MessageID to the Message, so that the Message may be identified; additionally, Session may subscribe to the <response> event with its SessionID and the specific MessageID for this Message.
  • Session may then multicast the Message to all subscribers.
  • Session may receive the <response> Message from the HDA Server and transmit that Message to the SoftSpot which issued the request.
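The Fig. 13 flow (wrap the XML String, stamp a MessageID, subscribe to the response, multicast to subscribers) might be sketched as follows. Only createMessage echoes the text; everything else here, including the ID scheme, is an assumption:

```java
import java.util.*;

class Message {
    final String id, body;
    final List<String> subscribers;
    Message(String id, String body, List<String> subscribers) {
        this.id = id; this.body = body; this.subscribers = subscribers;
    }
}

class MessageFactory {
    private static int counter = 0;
    // Creates a DOM-like Message from the XML String and stamps a MessageID.
    static Message createMessage(String xml, List<String> subscribers) {
        return new Message("msg-" + (++counter), xml, subscribers);
    }
}

class Session {
    final String sessionId;
    // Maps MessageID -> SessionID, standing in for the <response> subscription.
    final Map<String, String> responseSubscriptions = new HashMap<>();
    Session(String sessionId) { this.sessionId = sessionId; }

    // Wraps the request, subscribes to its <response> event, and returns
    // the Message that would then be multicast to all subscribers.
    Message handleRequest(String xml, List<String> subscribers) {
        Message m = MessageFactory.createMessage(xml, subscribers);
        responseSubscriptions.put(m.id, sessionId);
        return m;
    }
}
```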
  • Database access may be the exclusive responsibility of the Data Layer Realm and may generally be accomplished through a set of Plugins which may be modified, installed, or removed during runtime as discussed above; this runtime alteration is important for adding new data manipulation Services to the HDA Server.
  • Querying a database may be done by sending a Message object with a request to which the Data Layer is subscribed, for example (by default: a <data> Message).
  • FIG 14 is a simplified block diagram illustrating the operation of one embodiment of database access which may be employed by an HDA Server.
  • One or more database access Plugins, such as Structured Query Language (SQL) Plugin 1401 and Common Object Request Broker Architecture (CORBA) Plugin 1402, may be installed in the Data Layer Realm. It will be appreciated by those of skill in the art that other Plugins may perform similarly; only two are illustrated in Fig. 14 for clarity.
  • the Plugin 1401, 1402 may decide to which Business object (database interface) to transmit the Message object which contains a request for data.
  • the Business object has the actual "knowledge" of the data source (i.e. it is the Business object which is capable of JDBC data transfer).
  • the Plugin 1401, 1402 may work against a remote object 1403, which serves as a front end to the database 1404.
  • the data source i.e. database 1404, may be an SQL server, an object database, or an ORB which serves as a front end to data on a mainframe computer, for example. Operation of the present invention is not limited by the nature or architecture of database 1404.
  • the Plugin may determine the type of database request and may subsequently invoke the appropriate method on the front end to the database 1404. This may typically be accomplished through a "translation" of the incoming <data> element's markup into the appropriate query language. For example, if the database 1404 is SQL, the <data> will be embedded into an SQL query. In order to be more efficient, a <data> query may "force" a query statement by directly specifying the query string in a <force-query> element. This may advantageously spare the translation phase.
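The translation step and the force-query shortcut just described can be illustrated with a toy translator. The SQL shape and the parameter names are assumptions; a real Plugin would translate the full <data> markup:

```java
// Sketch of the "translation" step: a <data> element is turned into a
// query string, unless a <force-query> element supplies one directly.
class DataPlugin {
    static String toQuery(String table, String column, String value, String forceQuery) {
        if (forceQuery != null) return forceQuery; // spare the translation phase
        return "SELECT * FROM " + table + " WHERE " + column + " = '" + value + "'";
    }
}
```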
  • the Plugin 1401, 1402 and the front end 1403 may be on different machines; in such a situation, the interaction between these two elements may be effectuated through ORB remote method calls.
  • FIG. 15 is a simplified block diagram illustrating the operation of another embodiment of database access which may be employed by an HDA Server.
  • In the Fig. 15 embodiment, an implementation of database access for Open Data Base Connectivity (ODBC) is shown.
  • the front end 1503, generally corresponding to front end 1403, may serve two functions in particular: ensuring that the SQL is properly formatted for communication with data server 1505; and pooling connections.
  • With regard to pooling, those of skill in the art will recognize that connections to the database 1504 are expensive (in terms of system resources) to create.
  • connection pooling may be implemented to save system overhead and to expedite the query process.
  • When the Data Layer is finished with a connection, the connection may not be terminated, but rather may be saved in a pool of inactive connections. This inactive connection may be reused (without having to be recreated) as system load requires.
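The pooling behavior just described is a standard pattern; a minimal sketch (connections modeled as strings, with an invented naming scheme) looks like this:

```java
import java.util.*;

// Minimal connection-pool sketch: released connections go back to an
// inactive pool rather than being destroyed, and are reused on demand.
class ConnectionPool {
    private final Deque<String> inactive = new ArrayDeque<>();
    int created = 0;

    String acquire() {
        if (!inactive.isEmpty()) return inactive.pop(); // reuse an inactive connection
        return "conn-" + (++created);                   // expensive: create a new one
    }

    void release(String conn) { inactive.push(conn); }
}
```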
  • system resources which may be relevant to operation of the HDA Server include at least the following: memory; CPU time; and threads.
  • At least two types of load balancing methodologies may be considered by the system: optimistic; and pessimistic.
  • With optimistic load balancing techniques, the initial deployment of the system may take advantage of available hardware by deploying a system which is capable of handling any reasonable amount of load.
  • "reasonable" may be defined by the enterprise, based upon projected usage.
  • With pessimistic load balancing techniques, the original assumptions may be determined to have been incorrect, i.e. the system did not properly take into account site demand. A surge of demand may exceed the constraints set for the optimistic load balancing.
  • optimistic load balancing generally occurs as the OS attempts to determine the best location to create Sessions and Plugins; this may typically be done at boot and when initially creating an instance of a Plugin.
  • Pessimistic load balancing may involve duplicating Plugins (i.e. creating new instances), and may be accomplished dynamically during runtime.
  • An underlying assumption of optimistic load balancing is that, for the lifetime of a user Session on a given site, an upper threshold of required system resources may be determined or approximated. For a typical Session, for example, it may be assumed that the projected load, based for instance upon the number of Sessions, may be estimated. Given such an assumption, system load may be measured based upon the current number of Sessions. Following are two examples of proactive measures which may be taken to balance system load; these measures are provided by way of example, and are not intended to be representative of an exhaustive list.
  • the system may duplicate Plugins by creating new instances on other machines; this enables the HDA Server to spread events between and among the several instances of the same Plugin in accordance with system demand and the load at each respective VM. Consequently, multiple requests to a particular Plugin may be advantageously distributed around the system.
  • the Session object is the interface between the SoftSpot and the HDA Server. In terms of overall system resources, it may be beneficial to limit the number of Sessions running on the same machine.
  • the decision concerning Session location may be made when each particular Session is created. That is, in one embodiment, when a new Session is created, the Session Manager may determine which VM has the lightest load and assign the Session object to that VM.
  • the HDA Server OS may exercise discretion, whether at boot or during runtime, concerning where to install the Plugin.
  • the PluginInstaller may also determine which VM has the lightest load at a given instant and install the Plugin on that VM.
  • Plugin resource consumption may vary considerably depending upon the Plugin type and demand. While it is generally desirable to install more expensive Plugins on more powerful CPUs, determining or accurately projecting the cost of a particular Plugin during creation (installation) may be complicated; such a determination may, itself, present a substantial cost in terms of system resources. As a result, a Plugin may initially be installed in a location which later proves to be less than optimal. In this regard, the migrative aspect of the HDA Server and OS of the present invention provides an efficient solution to the load balancing problem. That is, if a very expensive Plugin is installed in an inappropriate location, the HDA system may migrate that Plugin to a more acceptable location.
  • pessimistic load balancing methodologies assist the system through creating new Plugin instances; an important consideration may be to determine how to route Message objects to the new instances of the Plugin.
  • When the system decides, through evaluation of load conditions, to create a new instance of a Plugin, the best VM candidate to receive the new instance must be identified. In one embodiment, for example, the system may execute the following procedure to identify the best VM.
  • First, the system must identify the correct space, e.g. the Realm to which the Plugin belongs. Once the correct space is identified, the system may iterate across all the VMs in the Realm to locate the VM with the lightest load. If the identified VM has not passed an upper load threshold, and installing the new Plugin instance will not bring the load over the threshold, the VM may be used for the new instance. If the VM is over the threshold, the system may attempt to create a new VM, for example, by looking up the adhoc VMs section of the boot.xml file.
  • the adhoc VMs section may advantageously list where and with what settings a VM may be created in an emergency.
  • This section of the boot file may be determined or altered by a system administrator, for example, and may be based upon how extra hardware is to be allocated to the system when necessary.
  • the system may create the VM and delete the corresponding portion of the adhoc VMs list; subsequently, the system may allocate the next instance of the Plugin to the newly created VM. If every VM is over its respective assigned load threshold, and system resources are inadequate to create a new VM, the pessimistic load balancing procedure may fail. In such a case, the system may investigate alternative resource allocation schemes, for example, migrating applications and objects in order to optimize overall system resource allocation.
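The selection procedure above (lightest-loaded VM under threshold, else an adhoc VM, else failure) can be sketched directly. Load units, thresholds, and the adhoc list shape are all assumptions for illustration:

```java
import java.util.*;

// Sketch of the pessimistic-balancing VM selection: pick the lightest-loaded
// VM if the new instance keeps it under threshold, otherwise fall back to an
// "adhoc" VM entry; return null if balancing fails (Migration may follow).
class LoadBalancer {
    static String placeInstance(Map<String, Integer> vmLoads, int instanceCost,
                                int threshold, Deque<String> adhocVms) {
        String lightest = Collections.min(vmLoads.entrySet(),
                Map.Entry.comparingByValue()).getKey();
        if (vmLoads.get(lightest) + instanceCost <= threshold) return lightest;
        if (!adhocVms.isEmpty()) return adhocVms.pop(); // create an emergency VM
        return null;
    }
}
```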
  • It may be preferable that certain Plugins not have more than one instance; similarly, it may be preferable that a particular Plugin run on a particular VM. In such circumstances, a system administrator may be able to override the foregoing load balancing mechanisms for such Plugins.
  • a Plugin.xml file may be amended to include instructions limiting the number of allowable instances of a particular Plugin or specifying a VM required or preferred for one or more instances.
  • the HDA Server and OS may employ a TagChangeHandler for handling changes in the Tags for one or more particular multiplexers (MUXs).
  • With every change in a Plugin, i.e. a new instance or a Migration, for example, the MUX has to determine if its Tags have changed. If the Tags have changed, the MUX may pass that information to the TagChangeHandler, which may respond in two possible ways.
  • the TagChangeHandler may notify a Transmitter object about the changes; the Transmitter, then, may pass Tag change information on to the other elements of the system through publication.
  • the TagChangeHandler may simply store the Tag changes internally; in such an embodiment, for example, the TagChangeHandler may apprise other elements of the system of changes upon request.
  • Service Level Resources are resources available to Plugins and MUXs.
  • Service Level Resources may be provided by invoking a ServiceResourceProvider interface with the name of the resource desired; ServiceResourceProvider may then return an instance of that Resource. For example, in the case where a Plugin wants to create a Message object, the Plugin may call the ServiceResourceProvider for the MessageFactory; as a result, the Plugin may receive a reference to the MessageFactory.
  • the HDA Server and OS may employ a DefaultRequestMultiplexer to provide the general infrastructure for managing Tags to all its subclasses.
  • a subclass may query, "My child has changed its Tags, what does that mean to me?"
  • Tag management may be accomplished in a "Magic Box.”
  • the Magic Box knows the MUX's current Tag set; may be apprised of the change in the child's Tags; may determine how the change affects the MUX's Tag set; and additionally, may report the actual changes in the MUX's Tag set (typically to the TagChangeHandler).
  • Magic Box may return an object (called, for example, TagDeltas), which may contain two collections: the Tags which have been added; and the Tags which have been removed. It will be appreciated that both collections may be null (indicating no changes).
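The TagDeltas result described above is a straightforward set difference in each direction. A minimal sketch (only the TagDeltas and Magic Box names come from the text; the rest is invented):

```java
import java.util.*;

// Sketch of the "Magic Box" delta computation: given the old and new Tag
// sets, return which Tags were added and which removed.
class TagDeltas {
    final Set<String> added, removed;
    TagDeltas(Set<String> added, Set<String> removed) {
        this.added = added; this.removed = removed;
    }
}

class MagicBox {
    static TagDeltas diff(Set<String> oldTags, Set<String> newTags) {
        Set<String> added = new TreeSet<>(newTags);
        added.removeAll(oldTags);          // present now, absent before
        Set<String> removed = new TreeSet<>(oldTags);
        removed.removeAll(newTags);        // present before, absent now
        return new TagDeltas(added, removed);
    }
}
```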
  • the DefaultCoreMultiplexer may employ an EventPostingTagChangeHandler implementation, i.e. the embodiment in which the TagChangeHandler publishes Tag set changes to the system.
  • the RequestMultiplexer may include setParent, getParent, and getChildren methods, i.e. the mechanisms to assign a MUX a parent as well as to query concerning a MUX's parent. In accordance with these methods, the MUX tree may be traversed in both directions.
  • the addRequestHandler method may create a RequestHandler object.
  • the RequestMultiplexer may assess whether the child is a RequestMultiplexer (the child could be a Plugin, for example).
  • the parent may invoke the setParent method to inform the child of its parent.
  • the parent may then call the Magic Box, apprise it of the added RequestHandler, and request the "deltas" or changes.
  • the Magic Box may then return the TagDeltas object (tags added and removed).
  • the RequestMultiplexer may invoke a handleChanges method to evaluate and to deal with the changes forwarded from the Magic Box. If there has been a change in the Tag set, the TagChangeHandler's postNewTagChangeEvent method may invoke EventPostingTagChangeHandler to multicast the change to the system.
  • a SoftSpot or other client may request the SessionManager to create a Session upon login. If requested by the client or required by the HDA Server, for example, the SessionManager may create a SecureSession, which may implement a security mechanism.
  • a SecureSession may generally be created and managed as follows.
  • a SecureSession may generate a special Message object requesting profile data, for example, user ID and password, which may be supplied by the SoftSpot which requested the SecureSession.
  • a Plugin may receive the Message object and check the profile data in the database. The Plugin may then reply to the SecureSession with the user profile.
  • the profile data may be stored internally by the SecureSession and appended to every subsequent Message object created by the SecureSession. Recipients of these Messages may decide, based upon the profile data, whether to process them.
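The SecureSession behavior above amounts to caching profile data at login and stamping it onto every outgoing Message. A sketch with assumed field names (messages modeled as maps for brevity):

```java
import java.util.*;

// Sketch of a SecureSession: profile data obtained at login is stored and
// appended to every subsequent Message the Session creates, so recipients
// can decide whether to process the Message.
class SecureSession {
    private Map<String, String> profile;

    void storeProfile(Map<String, String> profileData) {
        this.profile = new HashMap<>(profileData);
    }

    // Every outgoing message carries the stored profile data.
    Map<String, String> createMessage(String body) {
        Map<String, String> msg = new HashMap<>(profile);
        msg.put("body", body);
        return msg;
    }
}
```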
  • enterprise-level security may be managed in a fashion similar to Session-level security. Every HDA Server or OS entity concerned with security may be provided with a set of access permissions for user, group, and others. Read permission may enable access to entity properties or contents. Write permission may enable access to update controls or to enable/disable functionality. Execute permission may enable invocation of an executable script or program.
  • a Secure Session may be created for that user.
  • Session retrieves the appropriate profile data for that user from the database.
  • profile data may be appended to every Message object during the Secure Session.
  • a system administrator may set the permissions for all HDA Server and OS entities; permission data may be stored in the database.
  • an entity (a Plugin, for example) may be informed of its own permissions, or may set those permissions itself.
  • SoftSpot technology described herein is designed not only to be easily embedded in a Web site (should the seller already have one), but also to be useful to those sellers for whom having a Web site is not a viable option. All types of sellers may use WAP, PQA, or Voice-driven SoftSpots.
  • a very graceful solution may be to provide the restaurant with a SoftSpot through which a customer can interact (via a wireless device or a regular telephone, for example) with the restaurant's host; advantageously, such interactions may be linked and integrated into the restaurant's data systems immediately. This is an appealing and economical solution since it requires no installation of new hardware or software.
  • the SoftSpot architecture creates a portable client-agnostic user-interface.
  • SoftSpots are user interfaces which adapt, in real-time, to the user-agent requesting them.
  • the same SoftSpot may be served to a WAP-enabled device (such as an Internet-enabled wireless telephone, for example), to a "standard,” HTML-based Web browser, or to any voice user-agent (such as a telephone).
  • the SoftSpot architecture is not merely a request-time adaptation of content; the architecture contemplates providing services which are accessible not only from any networked device, but also from any number of different "domains." For example, a user employing a particular device to access the system from a specific domain may receive a domain-specific SoftSpot specifically formatted to the protocol required by the device. Similarly, another user accessing the system from a different domain may receive a different domain-specific SoftSpot.
  • SoftSpot Servlets are clients to a Marketplace Management System (MMS) maintained in the form of Services on the HDA Server described above. Consequently, the SoftSpots must interface with the Server Gateway and request and accept XML data in a desired markup language.
  • the SoftSpot architecture may be readily adapted to integrate seamlessly with the following technologies: adaptive XML-based content delivery; support for every type of networked UI; HTML (XHTML); WAP/WML; Applets; Shockwave/Flash; VXML; and PQA (Palm Query Application).
  • the system architecture of the present invention provides fast and reliable content delivery, secure and private communications, and domain-specific "look and feel."
  • SoftSpots may easily be configured by the seller, easily personalized for end users, and may offer persistent and stateful communications.
  • the HDA system described above may be configured to host and to manage a powerful, adaptive Customer Relationship Management (aCRM) application based upon SoftSpot technology.
  • small business aggregators may use the HDA Server and OS system as a web-deployed ASP service to offer their sellers a rich and full set of online aCRM tools.
  • the HDA Server may easily be configured to provide Services which may be embedded (transparently, for example) in aggregators' sites; the HDA Server's flexible, loosely coupled architecture and adaptive, migrative load management capabilities enable management of any number of such sites (or domains).
  • the HDA system may interface with the aggregators' various servers, applying "adaptation rules" to events that flow through the servers.
  • Adaptation rules may be employed to configure seller data to Domain Rules, Business Rules, and Seller Rules.
  • Use of the HDA system may be Session-based as described above, and the system may process data dynamically for every request.
  • a request may arrive from a seller who sells on a particular domain; upon receiving the request, the system may retrieve requested data from a database (as noted above, data in the system may be maintained in XML or DOM form); the system may then apply Domain Rules to the data XML (if needed, as required by the domain), such as language, local currency, and other domain-specific formats, etc.; the system may additionally apply Business Rules to the data XML (if needed, as required by the type of transaction), such as pricing, profiling, etc.; finally, the system may apply Seller Rules to the data XML (if needed, as requested by the particular seller), such as alert preferences, tool set, etc.
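The per-request pipeline above is an ordered chain of transformations: Domain Rules, then Business Rules, then Seller Rules. The sketch below shows only the chaining; the rule contents and data format are invented stand-ins for the XML processing described:

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Sketch of the per-request rule pipeline: Domain Rules, Business Rules,
// and Seller Rules are applied to the retrieved data in order.
class AdaptationPipeline {
    static String apply(String data, List<UnaryOperator<String>> rules) {
        for (UnaryOperator<String> rule : rules) data = rule.apply(data);
        return data;
    }
}
```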
  • the client of this data may be a SoftSpot, an Adaptive User Interface (AUI) component, or the like.
  • Virtually infinitely scalable: more hardware and software resources may selectively be added, changed, or removed without taking the system down and without any adverse impact on runtime system performance.
  • Adaptable: new aCRM applications may be deployed without bringing the system down and without adversely affecting runtime performance.
  • Flexible: the system may be configured easily to integrate with external, third-party applications through a flexible Server Gateway during runtime, without bringing the system down and without adversely affecting runtime performance.
  • In one embodiment, an adaptive Customer Relationship Management system (aCRM) is provided; the aCRM is generally constituted by elements deployed in each of the three Realms discussed in detail above.
  • the "Application Layer" (in the Application Realm) may hold the actual Business Logic for the various aCRM Applications.
  • Applications may be pluggable components, and may advantageously be written according to a standard HDA deployment protocol. Applications may interact with other Applications to discover each other's Services and to subscribe and/or publish data among themselves. Applications may be automatically deployable; that is, adding, modifying, or removing an Application may be done during runtime through the Management Console (MC) as set forth above.
  • the "OS Layer" (in the OS Realm) is preferably a highly distributed server-side operating system such as the HDA Server OS detailed above.
  • the OS may provide low-level system Services such as, for example, process management, memory management, resource management, load balancing, replication (spawning new Plugin instances), and the like.
  • the OS may provide innovative Services such as Migration, Topology Management and VM Management as discussed above.
  • the Data Layer (in the Data Layer Realm) may interface with any number of data sources and supply data in XML or DOM form to the HDA Server.
  • the Data Layer may also interface with various types of data sources such as relational databases, object-oriented databases, and the like, and additionally or alternatively use CORBA to interface with legacy systems.
  • the HDA Server's Data Layer may also support SOAP, an emerging industry standard for inter-application communication.
  • By virtue of the HDA system upon which the aCRM is structured, an aCRM Application has the following important features: it may be installed, modified, or removed seamlessly; it may be infinitely scalable to support hundreds of aggregators and thousands of end-users; its User Interface may be adapted in real time to wireless, voice, or HTML devices; aggregators and end-users may decide which set of Applications to use; and data may be utilized from many data sources of various types simultaneously.
  • SoftSpots may have the following attributes, identified in accordance with two phases: presentation; and communication.
  • SoftSpots may generally represent one item type, not a quantity.
  • one SoftSpot may represent a set of items, such as a set of three books.
  • SoftSpots may contain at least the seller's name and/or alias, a short item description, and the communication type(s) employed by the seller. Additionally, SoftSpots may also contain information concerning the following: seller rating; deal-type; time remaining until a deal closes or an offer expires (for example, during a timed auction); media files such as image, video, audio, and the like; and so forth.
  • the SoftSpot may be expected to handle either a sale, a request for more information, or a request for customer service, for example.
  • SoftSpots may open a communication channel between buyers and sellers; the SoftSpot communications channel may be real-time (as in chat) or deferred (as in email), for example.
  • SoftSpots may support flexible, secure and private transactions in a wide variety of ways.
  • the SoftSpot communications channel may be tied into the sellers' respective PO (Purchase Order) consoles.
  • a seller may build a PO in real-time and send it to the buyer for approval.
  • the SoftSpot communications channel may be conveniently tied into a seller's inventory management system; in this embodiment, for example, inventory information may be retrieved from the inventory management system in addition to, or as an alternative to, retrieving the information directly from the seller.
  • the SoftSpot communications channel may additionally be tied into the sellers' respective customer service systems, for example, a software Customer Service Console (CSC) application; in this embodiment, information may be retrieved from a Frequently Asked Questions (FAQ) database, for example, through an interface with the CSC.
  • the HDA system of the present invention may allow a seller to issue any number of SoftSpots.
  • SoftSpot "issuance" may be done automatically by the system, and additionally may be directly tied to the seller's console.
  • Sellers may set preferences to control SoftSpot behavior as well as the "look & feel" of the UI.
  • a specific backdrop or frame arrangement may be provided for applicable Graphical User Interfaces (GUIs).
  • it may be desirable to restrict a SoftSpot's access to certain communication types (such as telephone or real-time chat); on the other hand, it may be desirable to attach a pager or a cellular telephone alert to a SoftSpot event.
  • the system may be adapted to support logging of any or all types of SoftSpot activity.
  • SoftSpots are highly focused AUI components; the operability of SoftSpots may be selectively limited to two primary functions: to serve as a networked customer interface for the seller; and to allow maximum flexibility for customer acquisition, satisfaction, and retention.
  • SoftSpots may not create (what are commonly known as) Web pages; that is, SoftSpots may only interact with and serve as gateways to a common communications channel.
  • the email or chat applications mentioned briefly above may not be part of the SoftSpot itself, but may be operable through the SoftSpot.
  • SoftSpots only interact with, and are gateways to, the MMS (the MMS may comprise a transaction and payment recordation and reporting system) maintained on the HDA system. That is, the SoftSpots themselves may not be part of the MMS. Communication between the SoftSpot Servlets and the MMS may be through XML at the juncture of the SoftSpot Gateway 280 and the Server Gateway 270 shown in Fig. 2.
  • the SoftSpot may be represented by a Servlet:
  • the Servlet container (for example, Enhydra) spawns a new SoftSpotManager, which in turn spawns an instance of SoftSpotSession for every Session.
  • SoftSpotSession may maintain one or more protected instances of one or more SoftSpot objects.
  • a SoftSpot may use a descriptor file, for example, SoftSpotDesc, to load, to read, and to store XML directives for the SoftSpot.
  • Figure 16 is a simplified flow chart illustrating one embodiment of the life cycle of a SoftSpot which may interact with an HDA Server. The following process may be repeated for every incoming request. It should be noted that Fig. 16 and the description set forth below are provided by way of example only.
  • the SoftSpot may receive an HTTPRequest from SoftSpotManager.
  • the SoftSpot may create a UserAgentlnfo object containing data extracted from the http header, and additionally may determine the following data points from the http header: type of User-Agent (WAP browser, Voice browser, HTML browser, and so on); type and size of display; the particular domain from which the request originated; user ID (if applicable); and Session-ID (if applicable).
  • This data extraction is represented at block 1602.
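The extraction step at block 1602 can be sketched as follows. This is an illustrative assumption of how a `UserAgentInfo` object might be populated from HTTP headers; the header names `X-User-Id` and `X-Session-Id` and the classification heuristics are hypothetical, not taken from the patent.

```java
import java.util.Map;

// Hypothetical sketch of the UserAgentInfo extraction at block 1602:
// the agent type, originating domain, user ID, and Session ID are
// read from the HTTP request headers.
public class UserAgentInfo {
    public enum AgentType { WAP, VOICE, HTML }

    public final AgentType type;
    public final String domain;
    public final String userId;     // may be null if not applicable
    public final String sessionId;  // may be null if not applicable

    private UserAgentInfo(AgentType type, String domain, String userId, String sessionId) {
        this.type = type;
        this.domain = domain;
        this.userId = userId;
        this.sessionId = sessionId;
    }

    // Classify the device from the User-Agent header (placeholder heuristics).
    static AgentType classify(String userAgent) {
        String ua = userAgent == null ? "" : userAgent.toLowerCase();
        if (ua.contains("wap") || ua.contains("wml")) return AgentType.WAP;
        if (ua.contains("voice")) return AgentType.VOICE;
        return AgentType.HTML; // default: assume an HTML-capable browser
    }

    public static UserAgentInfo fromHeaders(Map<String, String> headers) {
        return new UserAgentInfo(
            classify(headers.get("User-Agent")),
            headers.getOrDefault("Host", "unknown"),
            headers.get("X-User-Id"),      // hypothetical header name
            headers.get("X-Session-Id"));  // hypothetical header name
    }
}
```

In a real Servlet, the same data would come from the `HttpServletRequest` header accessors rather than a plain map.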
  • the SoftSpot may send an XML Message request to a SoftSpotMMSConnector object, which enables communication with the HDA Server.
  • an exclusive or a proprietary markup language or Message language may be employed for all internal data transfers on HDA Server, as discussed above (XML or DOM, for example, may be the exclusive language used for internal data communications); in such an embodiment, as shown at block 1604, the SoftSpotMMSConnector may reformat the Message into the proper language (if required) and route it to the MMS.
  • the SoftSpotMMSConnector may next register the SoftSpot as a SoftSpotMMSResultListener for obtaining results or a reply from the MMS (block 1605).
  • the SoftSpot may receive data returned from the MMS, as shown at block 1606.
  • the SoftSpot may receive the following information as a result set: seller rating, name, and alias; short and long item descriptions; item images in color and/or in black and white; item and/or deal-type; time remaining for deal completion; communication type (chat, email, telephone, etc.); and the like.
  • the foregoing list is not intended to be exhaustive. It will be appreciated that the information requested or required by the SoftSpot may vary according to application and the context of the interaction.
  • the SoftSpot may assign a SoftSpotProcessor object to format the data set into an appropriate format which will be readable by the user-agent or the type of device employed (e.g. a desktop computer terminal capable of reading HTML, a hand-held device capable of reading Hand-Held Device Markup Language (HDML), a Web-enabled telephone capable of reading Wireless Markup Language (WML), and so forth).
  • the SoftSpotProcessor may apply rules directed to translating the resulting data (in DOM, for example) as follows: transform raw XML data to domain-specific XML according to Domain Rules; transform domain-styled XML to merchant-styled XML according to Seller Rules; transform merchant-styled XML to user-agent-specific XML depending upon the device used and the connection which is established.
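The three translation stages above (raw → domain-specific → merchant-styled → user-agent-specific XML) can be chained with standard JAXP XSLT transforms. The sketch below is an assumption about mechanism only: the stylesheets are trivial tag-renaming placeholders standing in for the real Domain Rule, Seller Rule, and user-agent TXN transformation files.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch of chaining the three translation stages with JAXP XSLT.
// Each placeholder stylesheet renames one element and copies the rest.
public class TransformChain {
    // A minimal rename-plus-identity stylesheet, standing in for one stage.
    static String stylesheet(String oldTag, String newTag) {
        return "<xsl:stylesheet version='1.0' "
             + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
             + "<xsl:template match='" + oldTag + "'>"
             + "<" + newTag + "><xsl:apply-templates/></" + newTag + ">"
             + "</xsl:template>"
             + "<xsl:template match='@*|node()'>"
             + "<xsl:copy><xsl:apply-templates select='@*|node()'/></xsl:copy>"
             + "</xsl:template>"
             + "</xsl:stylesheet>";
    }

    static String apply(String xml, String xslt) {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
            t.setOutputProperty("omit-xml-declaration", "yes");
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String raw = "<raw>item data</raw>";
        String domainXml = apply(raw, stylesheet("raw", "domain"));              // Domain Rules
        String merchantXml = apply(domainXml, stylesheet("domain", "merchant")); // Seller Rules
        String agentXml = apply(merchantXml, stylesheet("merchant", "wml"));     // user-agent
        System.out.println(agentXml); // prints <wml>item data</wml>
    }
}
```

The design point mirrors the text: each stage consumes the previous stage's output, so the same raw result set can fan out to HTML, WML, or voice markup by swapping only the final stylesheet.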
  • the SoftSpot may send the resulting XML Message data to the SoftSpotManager (block 1608), which may pack the XML data into the HTTPResponse stream.
  • the SoftSpotSession may time out, or alternatively, the user may log out (block 1609).
  • the SoftSpotManager may be destroyed or may be returned to a pool (block 1610).
  • pooling inactive SoftSpotManager instances may economize on system overhead.
  • the inactive SoftSpotManager may be made active again, to be reused (without having to be recreated) as system load requires.
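The pooling idea described above — parking inactive manager instances for reuse rather than recreating them — can be sketched as a simple object pool. The class and method names below are illustrative assumptions, not names from the patent.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of pooling inactive SoftSpotManager-like instances:
// a released manager is parked in the pool and handed back out on the
// next acquire, avoiding the cost of recreating it.
public class ManagerPool {
    // Stand-in for a SoftSpotManager-like object.
    public static class Manager {
        boolean active = true;
        void reset() { active = false; }    // called when returned to pool
        void activate() { active = true; }  // called when leased again
    }

    private final Deque<Manager> idle = new ArrayDeque<>();

    // Lease a manager: reuse an idle one if available, else create anew.
    public synchronized Manager acquire() {
        Manager m = idle.poll();
        if (m == null) m = new Manager();
        m.activate();
        return m;
    }

    // Return a manager once its Session times out or the user logs out.
    public synchronized void release(Manager m) {
        m.reset();
        idle.push(m);
    }

    public synchronized int idleCount() { return idle.size(); }
}
```

Under load, `acquire` only allocates when the pool is empty, which is the overhead economy the text describes.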
  • the SoftSpotManager is the master Servlet controlling specific SoftSpot instances per SoftSpotSession. As discussed above with reference to Fig. 16, as a Servlet, the SoftSpotManager may handle the HTTPRequest and HTTPResult objects, though it may have no knowledge of the content of the streams it handles. As with other elements of the HDA Server, the SoftSpotManager may be load balanced using the various Migration and load balancing schemes discussed above. The SoftSpotManager may have several SoftSpots which it manages in a single SoftSpotSession; in one embodiment, each SoftSpot may be run in its own thread.
  • SoftSpot is not a Servlet (although it may use a small portion of the Servlet API). Instead, it is "contained" in the SoftSpotManager, which manages it. As noted above, there may be several different SoftSpots per SoftSpotSession.
  • SoftSpot generally may maintain a SoftSpotDesc descriptor which holds all the properties and descriptions for the particular SoftSpot.
  • SoftSpots may recognize some Servlet semantics, such as HTTPRequest and HTTPResult, for example.
  • SoftSpots may spawn threads which govern access to other applications, such as communications consoles and the MMS, for example.
  • FIG. 17 is an illustration of one embodiment of a SoftSpot descriptor.
  • the SoftSpot has received data concerning the following attributes from the http header (HTTPRequest): user-agent; domain type; user-ID; and Session ID. This data extraction is illustrated in context in Fig. 16 at block 1602.
  • the SoftSpot architecture has a unique notion of a Session. Specifically, support is provided for simultaneous access by different user-agents, i.e. a SoftSpotSession may be joined by another SoftSpotSession which maintains a different SoftSpot.
  • a Uniform SoftSpot Descriptor (USSD) file is the core of the SoftSpot architecture.
  • the USSD may serve as a master template for the Servlet which controls a specific SoftSpot instance.
  • Each particular SoftSpot may be associated with its own USSD file which controls the overall description of that SoftSpot.
  • a USSD Client, such as a Servlet (for example, the SoftSpotManager discussed above), may find all the relevant transformation instructions and automatically download, cache, and format the markup output (such as HTML). These transformation instructions may be contained in transformation files which may be denoted, for example, by the TXN suffix.
  • USSD is designed to be extended during runtime.
  • the USSD file may be an XML file.
  • FIGS. 18A and 18B are an illustration of one embodiment of a USSD file.
  • the transformation files for the SoftSpot may be required to be cached locally for the duration of the SoftSpot Session.
  • SoftSpot caching may be provided by the SoftSpotDesc object. At least two types of cache may be available for SoftSpots: in-memory and file-based. In effect, the cached USSD file and its extensions may be added to the SoftSpotDesc dynamically as needed.
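The two cache flavors mentioned above can be sketched behind one interface, so the SoftSpotDesc could use either interchangeably. The interface and class names here are assumptions for illustration, not the patent's own.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Sketch of the two cache types available to SoftSpots: an in-memory
// map and a file-based store keyed by transformation-file name.
public interface TransformCache {
    void put(String key, String xml);
    String get(String key); // null when absent

    class InMemory implements TransformCache {
        private final Map<String, String> map = new HashMap<>();
        public void put(String key, String xml) { map.put(key, xml); }
        public String get(String key) { return map.get(key); }
    }

    class FileBased implements TransformCache {
        private final Path dir;
        public FileBased(Path dir) { this.dir = dir; }
        public void put(String key, String xml) {
            try { Files.writeString(dir.resolve(key + ".txn"), xml); }
            catch (IOException e) { throw new RuntimeException(e); }
        }
        public String get(String key) {
            try {
                Path p = dir.resolve(key + ".txn");
                return Files.exists(p) ? Files.readString(p) : null;
            } catch (IOException e) { throw new RuntimeException(e); }
        }
    }
}
```

An in-memory cache suits the duration-of-Session caching the text requires, while a file-based cache lets downloaded TXN files survive across Sessions.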
  • the USSD file's role is to describe where the pertinent transformation file can be downloaded from (if needed) and the order in which the transformation rules are to be applied; the USSD may also define some security information, for example.
  • the SoftSpotDesc may be made aware that a new transform file has been downloaded successfully or, in case of failure, that the required transform file is missing. The SoftSpotDesc may then add the new XML data to its DOM tree (the internal cache).
  • the SoftSpotDesc may begin to prepare itself immediately upon spawn of a Session, and may in fact run concurrently with the rest of the process.
  • the SoftSpotManager may neither be expected nor required to wait for the SoftSpotDesc for proper operation.
  • if the SoftSpotDesc fails, a lowest common denominator (i.e. a generic, or default, SoftSpotDesc) may be used. Failure can occur in one of the following ways, for example: the SoftSpotDesc cannot download one or more transformation files; the SoftSpotDesc fails to parse one or more transformation files; the SoftSpotDesc has not completed download of all the required transformation files; or the SoftSpotDesc exits abnormally (VM exit).
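The fallback policy above can be sketched as a load routine that returns a generic default descriptor on any of the listed failures. The names `DescriptorLoader` and `DEFAULT` are illustrative assumptions.

```java
import java.util.concurrent.Callable;

// Sketch of the lowest-common-denominator fallback: if preparing the
// specific descriptor fails (download error, parse error, incomplete
// transform set, abnormal exit), a generic default is used instead.
public class DescriptorLoader {
    public static class Descriptor {
        public final String name;
        public Descriptor(String name) { this.name = name; }
    }

    static final Descriptor DEFAULT = new Descriptor("generic-default");

    // Try the specific descriptor; on any failure fall back to the
    // default so the SoftSpotManager can keep serving the Session.
    public static Descriptor load(Callable<Descriptor> specific) {
        try {
            Descriptor d = specific.call();
            return d != null ? d : DEFAULT;
        } catch (Exception failure) {
            return DEFAULT;
        }
    }
}
```

This matches the text's point that the SoftSpotManager never waits on, or fails with, the SoftSpotDesc: a usable descriptor always comes back.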
  • the SoftSpotDesc and the SoftSpotManager may be expected to run at least in the same Servlet container (Servlet runner), if not in the same address space. Communication between these objects may be done through the HTTP stream.
  • an HDA Server system having an HDA Server OS provides a powerful and versatile computer server architecture which overcomes many bandwidth and scalability complications.
  • the preferred embodiments disclosed herein have been described and illustrated by way of example only, and not by way of limitation. Other modifications and variations to the invention will be apparent to those skilled in the art from the foregoing detailed disclosure. While only certain embodiments of the invention have been specifically described herein, it will be apparent that numerous modifications may be made thereto without departing from the spirit and scope of the invention.

Abstract

The invention concerns a highly distributed architecture (HDA) computer server system which generally comprises a non-hierarchical network of physical machines, each machine having physical and logical (i.e., virtual) resources (1-3); a network for transmitting data between and within the physical machines; and program code for allocating and managing system resources (10). The code for allocating and managing system resources may take the form of an HDA Server operating system (100) designed to exploit the advantages of the distributed server architecture. Advantages of the HDA system (100) may include rapid adaptation to system events and migration of application components across the physical and logical resources (1-3). In an exemplary embodiment, a system comprising an HDA computer server having an HDA Server operating system (100) may serve as a platform for facilitating Internet transactions using Adaptive User Interfaces (AUIs) to establish communication between the HDA system (100) and an external client. The HDA system provides efficient global system resource management, excellent fault-tolerance characteristics (i.e., stability and reliability), and virtually infinite scalability.
PCT/US2000/031108 1999-11-12 2000-11-13 Architecture de serveur informatique hautement distribuee et systeme d'exploitation WO2001035242A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU16021/01A AU1602101A (en) 1999-11-12 2000-11-13 Highly distributed computer server architecture and operating system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16486599P 1999-11-12 1999-11-12
US60/164,865 1999-11-12

Publications (1)

Publication Number Publication Date
WO2001035242A1 true WO2001035242A1 (fr) 2001-05-17

Family

ID=22596416

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/031108 WO2001035242A1 (fr) 1999-11-12 2000-11-13 Architecture de serveur informatique hautement distribuee et systeme d'exploitation

Country Status (2)

Country Link
AU (1) AU1602101A (fr)
WO (1) WO2001035242A1 (fr)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002103990A2 (fr) * 2001-06-19 2002-12-27 Siemens Aktiengesellschaft Gestion centralisee d'un centre d'appels
FR2838841A1 (fr) * 2002-04-22 2003-10-24 Mitsubishi Electric Corp Appareil et procede de communication, et procede de commande de modules d'extension.
EP1418501A1 (fr) * 2002-11-08 2004-05-12 Dunes Technologies S.A. Méthode d'administration d'applications sur des machines virtuelles
EP1443380A2 (fr) * 2003-01-14 2004-08-04 Yamaha Corporation Appareil et programme de traitement de contenu
EP1463991A1 (fr) * 2002-01-11 2004-10-06 Akamai Technologies, Inc. Cadre d'applications java utilisable dans un reseau de diffusion de contenu (cdn)
WO2005109195A2 (fr) * 2004-05-08 2005-11-17 International Business Machines Corporation Migration dynamique de programmes de machine virtuelle
WO2006044701A1 (fr) * 2004-10-15 2006-04-27 Emc Corporation Configuration, controle et/ou gestion de groupes de ressources comprenant une machine virtuelle
WO2006044702A1 (fr) * 2004-10-15 2006-04-27 Emc Corporation Configuration, surveillance et/ou gestion de groupes de ressources
DE102009005455A1 (de) * 2009-01-21 2010-07-22 Siemens Aktiengesellschaft Computersystem zum Verwalten, Speichern und Austausch von computergestützten medizinischen Taskflows
US7772980B2 (en) * 2006-04-12 2010-08-10 International Business Machines Corporation Method and systems for localizing objects using capacitively coupled RFIDs
KR101113943B1 (ko) * 2008-12-22 2012-03-05 한국전자통신연구원 워크로드 관리 방법과 장치 및 이를 이용한 분산 컴퓨팅 시스템
US8214882B2 (en) * 2004-05-17 2012-07-03 International Business Machines Corporation Server discovery, spawning collector threads to collect information from servers, and reporting information
US20180053159A1 (en) * 2016-08-18 2018-02-22 Mastercard International Incorporated Transaction control management

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5560038A (en) * 1994-07-22 1996-09-24 Network Peripherals, Inc. Apparatus for translating frames of data transferred between heterogeneous local area networks
US5566092A (en) * 1993-12-30 1996-10-15 Caterpillar Inc. Machine fault diagnostics system and method
US5604867A (en) * 1994-07-22 1997-02-18 Network Peripherals System for transmitting data between bus and network having device comprising first counter for providing transmitting rate and second counter for limiting frames exceeding rate
US6061360A (en) * 1998-02-24 2000-05-09 Seagate Technology, Inc. Method and apparatus for preserving loop fairness with dynamic half-duplex
US6085247A (en) * 1998-06-08 2000-07-04 Microsoft Corporation Server operating system for supporting multiple client-server sessions and dynamic reconnection of users to previous sessions using different computers

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5566092A (en) * 1993-12-30 1996-10-15 Caterpillar Inc. Machine fault diagnostics system and method
US5560038A (en) * 1994-07-22 1996-09-24 Network Peripherals, Inc. Apparatus for translating frames of data transferred between heterogeneous local area networks
US5604867A (en) * 1994-07-22 1997-02-18 Network Peripherals System for transmitting data between bus and network having device comprising first counter for providing transmitting rate and second counter for limiting frames exceeding rate
US5655140A (en) * 1994-07-22 1997-08-05 Network Peripherals Apparatus for translating frames of data transferred between heterogeneous local area networks
US6061360A (en) * 1998-02-24 2000-05-09 Seagate Technology, Inc. Method and apparatus for preserving loop fairness with dynamic half-duplex
US6085247A (en) * 1998-06-08 2000-07-04 Microsoft Corporation Server operating system for supporting multiple client-server sessions and dynamic reconnection of users to previous sessions using different computers

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002103990A2 (fr) * 2001-06-19 2002-12-27 Siemens Aktiengesellschaft Gestion centralisee d'un centre d'appels
WO2002103990A3 (fr) * 2001-06-19 2003-05-30 Siemens Ag Gestion centralisee d'un centre d'appels
EP1463991A4 (fr) * 2002-01-11 2008-08-06 Akamai Tech Inc Cadre d'applications java utilisable dans un reseau de diffusion de contenu (cdn)
EP1463991A1 (fr) * 2002-01-11 2004-10-06 Akamai Technologies, Inc. Cadre d'applications java utilisable dans un reseau de diffusion de contenu (cdn)
FR2838841A1 (fr) * 2002-04-22 2003-10-24 Mitsubishi Electric Corp Appareil et procede de communication, et procede de commande de modules d'extension.
WO2004042575A1 (fr) 2002-11-08 2004-05-21 Dunes Technologies S.A. Methode d'administration de applications sur des machines virtuelles
US7802248B2 (en) 2002-11-08 2010-09-21 Vmware, Inc. Managing a service having a plurality of applications using virtual machines
EP1418501A1 (fr) * 2002-11-08 2004-05-12 Dunes Technologies S.A. Méthode d'administration d'applications sur des machines virtuelles
JP2006505842A (ja) * 2002-11-08 2006-02-16 デューヌ テクノロジー エス アー バーチャルマシン上のアプリケーションの管理方法
EP1443380A2 (fr) * 2003-01-14 2004-08-04 Yamaha Corporation Appareil et programme de traitement de contenu
WO2005109195A3 (fr) * 2004-05-08 2006-02-02 Ibm Migration dynamique de programmes de machine virtuelle
CN1947096B (zh) * 2004-05-08 2011-01-12 国际商业机器公司 用于虚拟机计算机程序的动态迁移的方法和系统
US8566825B2 (en) 2004-05-08 2013-10-22 International Business Machines Corporation Dynamic migration of virtual machine computer programs upon satisfaction of conditions
US8156490B2 (en) 2004-05-08 2012-04-10 International Business Machines Corporation Dynamic migration of virtual machine computer programs upon satisfaction of conditions
WO2005109195A2 (fr) * 2004-05-08 2005-11-17 International Business Machines Corporation Migration dynamique de programmes de machine virtuelle
US8214882B2 (en) * 2004-05-17 2012-07-03 International Business Machines Corporation Server discovery, spawning collector threads to collect information from servers, and reporting information
WO2006044701A1 (fr) * 2004-10-15 2006-04-27 Emc Corporation Configuration, controle et/ou gestion de groupes de ressources comprenant une machine virtuelle
WO2006044702A1 (fr) * 2004-10-15 2006-04-27 Emc Corporation Configuration, surveillance et/ou gestion de groupes de ressources
US9329905B2 (en) * 2004-10-15 2016-05-03 Emc Corporation Method and apparatus for configuring, monitoring and/or managing resource groups including a virtual machine
US7772980B2 (en) * 2006-04-12 2010-08-10 International Business Machines Corporation Method and systems for localizing objects using capacitively coupled RFIDs
KR101113943B1 (ko) * 2008-12-22 2012-03-05 한국전자통신연구원 워크로드 관리 방법과 장치 및 이를 이용한 분산 컴퓨팅 시스템
DE102009005455A1 (de) * 2009-01-21 2010-07-22 Siemens Aktiengesellschaft Computersystem zum Verwalten, Speichern und Austausch von computergestützten medizinischen Taskflows
US11610186B2 (en) * 2016-08-18 2023-03-21 Mastercard International Incorporated Transaction control management
US20180053159A1 (en) * 2016-08-18 2018-02-22 Mastercard International Incorporated Transaction control management

Also Published As

Publication number Publication date
AU1602101A (en) 2001-06-06

Similar Documents

Publication Publication Date Title
CA2471855C (fr) Cadre d'applications java utilisable dans un reseau de diffusion de contenu (cdn)
US6609159B1 (en) Methods, systems, and machine readable programming for interposing front end servers between servers and clients
CA2279382C (fr) Systeme de courtage pour demandes dans le web commandant des operations multiples
US7203769B2 (en) Bootstrapping technique for distributed object client systems
US20180004503A1 (en) Automated upgradesystem for a service-based distributed computer system
US6016496A (en) Method and apparatus for an object-oriented object for retrieving information from local and remote databases
US7480679B2 (en) Duplicated naming service in a distributed processing system
EP1438672B1 (fr) Procede, appareil et systeme pour client web mobile
US8103760B2 (en) Dynamic provisioning of service components in a distributed system
US7086065B1 (en) Functional enterprise bean
US20020078255A1 (en) Pluggable instantiable distributed objects
US20060112398A1 (en) System and Methodology Providing Service Invocation for Occasionally Connected Computing Devices
US20030131084A1 (en) Extended environment data structure for distributed digital assets over a multi-tier computer network
US20040205101A1 (en) Systems, methods, and articles of manufacture for aligning service containers
JP2000500940A (ja) レジストリ通信ミドルウェア
US20030055862A1 (en) Methods, systems, and articles of manufacture for managing systems using operation objects
US20080288622A1 (en) Managing Server Farms
WO2001035242A1 (fr) Architecture de serveur informatique hautement distribuee et systeme d'exploitation
Duvos et al. An infrastructure for the dynamic distribution of web applications and services
US7503050B2 (en) Transaction polymorphism
Luo et al. System support for scalable, reliable, and highly manageable Web hosting service
Silis et al. World wide web server technology and interfaces for distributed, high-performance computing systems
JP4950389B2 (ja) ネットワークベースのアプリケーション、それを処理するためのアーキテクチャ及びシステム、ならびにそれを実行するための方法
AU2012203811A1 (en) Java application framework for use in a content delivery network (CDN)
Raza et al. Plug-and-Play Network Service Configuration Using CORBA

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase