WO2002039215A9 - Distributed dynamic data system and method - Google Patents

Distributed dynamic data system and method

Info

Publication number
WO2002039215A9
WO2002039215A9 (PCT/US2001/043745)
Authority
WO
WIPO (PCT)
Prior art keywords
user
dynamic
network
dynamic data
internet
Application number
PCT/US2001/043745
Other languages
French (fr)
Other versions
WO2002039215A2 (en)
WO2002039215A3 (en)
Inventor
Lance Booth
James Fallon
Peter Thimmesch
Original Assignee
Visitalk Com Inc
Application filed by Visitalk Com Inc
Priority to AU2002230461A1
Publication of WO2002039215A2
Publication of WO2002039215A3
Publication of WO2002039215A9


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/02 Details
    • H04L12/16 Arrangements for providing special services to substations
    • H04L12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L12/1818 Conference organisation arrangements, e.g. handling schedules, setting up parameters needed by nodes to attend a conference, booking network resources, notifying involved parties
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/45 Network directories; Name-to-address mapping
    • H04L61/4505 Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols
    • H04L61/4511 Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols using domain name system [DNS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/45 Network directories; Name-to-address mapping
    • H04L61/4505 Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols
    • H04L61/4523 Network directories; Name-to-address mapping using standardised directories; using standardised directory access protocols using lightweight directory access protocol [LDAP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L61/00 Network arrangements, protocols or services for addressing or naming
    • H04L61/45 Network directories; Name-to-address mapping
    • H04L61/4547 Network directories; Name-to-address mapping for personal communications, i.e. using a personal identifier
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 Session management
    • H04L65/1069 Session establishment or de-establishment
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 Session management
    • H04L65/1073 Registration or de-registration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1025 Dynamic adaptation of the criteria on which the server selection is based
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1029 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers using data related to the state of servers by a load balancer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/40 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2038 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with a single idle spare processing component

Definitions

  • The field of the present invention relates generally to services provided via a distributed electronic network and, in particular, to systems and methods for facilitating communication over a distributed electronic network such as the Internet.
  • IP: Internet Protocol
  • Many new uses are being found for the Internet and computer networks.
  • The Internet is presently being used for communication applications.
  • Applications such as email and instant messaging have already become ubiquitous, while Internet telephony and Internet video communication are becoming increasingly available.
  • Performing remote control or monitoring of various devices on a real-time basis via the Internet or other computer networks is also possible.
  • Peer-to-peer networking has been applied in the context of real-time (or nearly real-time) communications.
  • An early system to facilitate peer-to-peer connections for desktop video conferencing used a tool called the Internet Locator Service (ILS) to allow users to find other users presently logged on to a Web site. Once one user found another, peer-to-peer communication could be established using software such as Microsoft NetMeeting®.
  • A newer peer-to-peer communication software package is Microsoft Messenger®.
  • Static information includes information that uniquely identifies a particular computer or person connected to the Internet or network. Such information may include a user's name, a fixed location for the user, and/or the user's email address. Dynamic information in this context is typically characterized as information that is subject to change.
  • One example of dynamic information is dynamic addressing information, which is addressing information that may change according to a user's connection to the Internet or a network. For example, the IP address or other connection address for a device connected to the Internet or a network may change each time that the user re-connects, or may even change while the user remains connected.
  • Another factor that complicates communication using a peer-to-peer directory infrastructure is that an intended recipient of information may not always be connected to the particular network. While intermittent connectivity does not present a problem for one-way communication such as email, it presents a major impediment to two-way communication such as telephony, video communication, and feedback control. When an intended recipient disconnects or logs off, the user essentially disappears and becomes difficult to reach for purposes of carrying out dynamic communications.
  • ISP: Internet Service Provider
  • In one aspect, a data system includes a dynamic directory that is completely independent from static data, such as may be maintained in a static directory.
  • The dynamic directory is used as a dynamic caching mechanism by the application that controls the static database. That is, since the dynamic database is actually a subset of the information in the static directory, each partition of the dynamic database actually caches a small portion of the static database.
  • The static portion is responsible for keeping track of persistent user data, while the dynamic portion is responsible for keeping track of which members (more specifically, devices accessed by users) are connected to the system and each user's current connection address.
  • Each user having a unique identifier (ID) is assigned a remote "service," preferably operating on a server.
  • A single service may serve many users, and multiple services, each serving many users, are contemplated.
  • The service assigned to a user handles dynamic data for that user. For instance, the service may keep track of a user's present connection address, and whether the user is presently connected to a network to receive immediate communication.
  • The service may also be used to store information (such as, for example, voice, text, or other messages) intended to be communicated to a user while the user is unable to receive immediate communications.
  • Another aspect of the invention employs a consistent mapping scheme that permits high refresh rates to be obtained without overloading any particular segment or device of the system, even though the distributed dynamic data system is highly segmented and highly distributed.
  • The consistent mapping scheme allows a particular service assigned to a user to be found rapidly, even if the number of users grows into the millions. Since a service is used essentially as an intermediary for enabling communications to or from a specific user, the consistent mapping scheme provides greatly enhanced connectivity between users. Connectivity is even further enhanced by the non-hardware-specific nature of a user's access to its particular service: since access is based on a user's unique ID, a user may access its particular service at different times with a variety of different devices.
  • A static data repository is somewhat centralized, generally stored at one or more servers. Another aspect of the invention provides for the decentralized storage of static data at many distributed transaction processors and/or sites.
  • The system is self-correcting; in other words, it is capable of detecting faults in a device operating a service and bringing a backup service device online as a replacement.
  • Uses for the invention include presence detection applications, Internet video gaming, voice communications, video communications, remote monitoring, remote feedback control, delivery of customized dynamic data, storing and forwarding data to a user, and instant messaging (including text or other message types). Other uses for the invention will be apparent to those skilled in the art upon review of the specification.
  • FIG. 1 illustrates a system cluster in accordance with the principles of one embodiment;
  • FIG. 2 illustrates a first embodiment of a virtual local area network configuration as may be used in connection with the system of FIG. 1;
  • FIG. 3 illustrates a second embodiment of a virtual local area network configuration as may be used in connection with the system of FIG. 1;
  • FIG. 4 illustrates a management architecture of the system of FIG. 1;
  • FIG. 5 illustrates a third embodiment of a virtual local area network configuration as may be used in connection with the system of FIG. 1;
  • FIG. 6 illustrates a fourth embodiment of a virtual local area network configuration as may be used in connection with the system of FIG. 1;
  • FIG. 7 is a block diagram of the directory architecture in a system cluster;
  • FIG. 8 is a flow diagram illustrating an example of the operation of the directories of FIG. 7;
  • FIG. 9 is a representation of a visual display of merged dynamic and static directories;
  • FIG. 10 illustrates a system architecture in accordance with an embodiment as disclosed herein;
  • FIG. 11 illustrates directory information flow between system clusters in the system architecture of FIG. 10;
  • FIG. 12 is a flow diagram illustrating an example of a process for registering a user with a personal identification code;
  • FIG. 13 is a flow diagram illustrating an example of a process for assigning a user a personal identification code;
  • FIG. 14 illustrates an Internet device;
  • FIG. 15 is a block diagram of the Internet device of FIG. 14;
  • FIG. 16 is a flow diagram of an example of operation of the Internet device of FIG. 15;
  • FIG. 17 is a functional block diagram illustrating voice/video mail features as may be used in connection with the system of FIG. 1;
  • FIG. 18 is a schematic overview of a distributed dynamic DNS system;
  • FIG. 19 is a top-level diagram of an Internet-based communications system having a directory of Permanent Communication Numbers;
  • FIG. 20 is an architectural diagram of one embodiment of the Internet-based communications system illustrated in FIG. 19;
  • FIG. 21 is a top-level diagram of a network-based distributed dynamic data system according to one embodiment having a centralized static data repository;
  • FIG. 22 is a logical diagram of the distributed dynamic data system illustrated in FIG. 21;
  • FIG. 23 is a top-level diagram of a network-based distributed dynamic data system according to another embodiment;
  • FIG. 24 is a top-level diagram of a portion of a network-based distributed dynamic data system according to a further embodiment;
  • FIG. 25 is a top-level diagram of a network-based distributed dynamic data system according to yet another embodiment including multiple D3 clusters;
  • FIG. 26 is a top-level diagram of a network-based distributed dynamic data system according to one embodiment lacking a centralized static data repository;
  • FIG. 27 is a logical diagram of the distributed dynamic data system illustrated in FIG. 26;
  • FIG. 28 is an exemplary representation of a distributed dynamic data service assignment scheme in the form of arrays;
  • FIG. 29 is an illustration of a distributed dynamic data service assignment scheme corresponding to the arrays illustrated in FIG. 28; and
  • FIG. 30 is a top-level diagram of a network-based distributed dynamic data system according to a further embodiment.
  • FIG. 21 is a top-level diagram of a network-based distributed communication system, illustrating various concepts as disclosed herein.
  • The system of FIG. 21 includes a network 2101 such as, for example, the Internet. Alternatively, a private network may be used.
  • DTPs: distributed transaction processors
  • DTPs 2110, 2111 connect to the network 2101 for the purpose of carrying out communication over the network 2101.
  • A DTP 2110, 2111 is an application residing on an electronic device capable of transmitting and receiving information over a network 2101.
  • A DTP 2110, 2111 is typically part of, or associated with, another application that, in order to provide some functionality, uses the DTP 2110, 2111 to connect to the network 2101.
  • Examples of devices on which a DTP 2110, 2111 might be installed include, but are not limited to, a computer, a wireless IP device, and a network-capable cellular phone.
  • A DTP 2110, 2111 has a protocol-specific connection address, or DTP address. Examples of potential DTP connection addresses include, but are not limited to, a TCP/IP address, an IPX/SPX address, or a NETBEUI computer name.
  • In a distributed dynamic data system, a sender may not - and usually does not - know the DTP address for the desired recipient.
  • A further characteristic of such a system is that a DTP 2110, 2111 need not always be available to the network 2101. That is, a DTP application need not always be running, due to circumstances such as when the device associated with the DTP 2110, 2111 is turned off, or when the DTP 2110, 2111 is disconnected from the network 2101. The availability of the DTP 2110, 2111 to the network 2101 may be controlled at the discretion of the application user.
  • A distributed dynamic data system 2100 includes a relatively centralized static data repository ("DR") 2115.
  • The DR 2115 connects to the network 2101 by way of a distributed transaction processing gateway ("DTPG") 2116.
  • The DR 2115 is typically, but not necessarily, a member database; typically this database would reside on a data server or linked collection of data servers.
  • The DR 2115 may also be present at a website.
  • The DR 2115 may store information regarding a user - at a minimum, a unique identifier ("unique ID") for the user of a DTP 2110, 2111.
  • The user may be a person or an autonomous application.
  • A primary purpose of the DR 2115 is to identify a user so as to enable the user to receive other information.
  • The DR 2115 may optionally also include a distributed dynamic data key ("D3 Key").
  • Use of a D3 Key allows static load balancing to be performed based upon empirical data (e.g., by geography, by privacy requirements such as may be dictated by a corporate internal network, by particular Internet service provider, or by level of Internet quality of service).
  • The DTPG 2116 provides an interface between the DR 2115 and the network 2101.
  • The DTPG 2116 may be a login server.
  • A primary function of the DTPG 2116 is to provide the DR 2115 the ability to send and receive data over the network 2101.
  • FIG. 21 also illustrates multiple distributed dynamic data services ("D3Ss") 2120, 2121.
  • A D3S 2120, 2121 is a service, operating on one or more devices (e.g., servers), that stores dynamic addressing information (e.g., a connection address) for at least one DTP 2110, 2111, organized by unique ID (each unique ID corresponding to a user).
  • A D3S 2120, 2121 may further store, process, and/or send other information. For example, information intended for a DTP may be posted to a D3S by a DTPG 2116 or other DTPs.
  • A D3S 2120, 2121 may execute logic against data. Preferably, if a connection address for a DTP 2110, 2111 changes - such as frequently occurs with dynamic IP addressing - then the DTP 2110, 2111 will post its new connection address to its corresponding D3S 2120, 2121. As compared to one another, each D3S 2120, 2121 usually provides equivalent functionality, but for a different group of unique IDs. Also illustrated in FIG. 21 is a distributed dynamic data executive service ("D3X") 2130.
  • The D3X 2130 is responsible for providing a D3S assignment scheme, which provides consistent (i.e., predictable and repeatable) mapping of a unique ID to a particular D3S 2120, 2121.
  • Upon initial connection of a DTP 2110, 2111 to the network 2101, the D3X 2130 derives the particular D3S 2120, 2121 to which the DTP 2110, 2111 should connect, based on the unique ID of the user utilizing the DTP 2110, 2111.
  • The D3X 2130 also checks the statuses of the D3Ss 2120, 2121 and, based on the result of this status check, updates the assignment scheme.
  • The assignment scheme, which dictates the assignment of unique IDs to particular D3Ss 2120, 2121, indicates the availability of particular D3Ss 2120, 2121. It is preferred for the D3X 2130 to run on an independent physical server to promote reliability; alternatively, however, the D3X 2130 can run as a service on the same device as any D3S 2120, 2121.
  • A D3S 2120, 2121 is preferably capable of performing most or all of the functions of the D3X 2130 if necessary.
  • The D3X 2130 is preferably capable of performing the functions of a D3S 2120, 2121, so that if a D3S (such as D3S1 2120) should fail, its functionality may be temporarily provided by the D3X 2130 until the assignment scheme is modified to permit another D3S to be reassigned in its place.
  • A distributed dynamic data cluster ("D3 cluster") 2140 includes the combination of a D3X 2130 and multiple associated D3Ss (e.g., D3S1 2120 and D3S2 2121). While only a single D3 cluster 2140 is provided in FIG. 21, multiple D3 clusters are contemplated in other embodiments, as will be discussed hereinafter.
  • An embodiment having a DR 2115 might have only a single DTP (such as DTP1 2110) present. In such an instance, it is contemplated that most communication or information traffic flows from the DTPG 2116 to the DTP 2110. For example, a website may send custom dynamic data from the DTPG 2116 to a DTP 2110 on a streaming basis. If further DTPs are added, then information may also be sent from one DTP (e.g., DTP1 2110) and be received by another DTP (e.g., DTP2 2111).
  • When a DTP (e.g., DTP1 2110) connects for the first time to a data system 2100 according to the embodiment illustrated in FIG. 21, a few procedural steps are involved for a D3S 2120, 2121 to be assigned to the DTP 2110.
  • First, the DTP 2110 initiates contact with the DTPG 2116 to register a unique ID with the DR 2115.
  • This unique ID identifies the user of the DTP 2110.
  • The registration procedure preferably includes a suggestion by the user of the DTP 2110 for a particular unique ID, such as the user's email address or Permanent Communication Number (PCN℠).
  • An alternative registration procedure may include a suggestion - not initiated by the user, but instead by the DR 2115, the DTPG 2116, or an associated application - for a suitable unique ID, which suggestion may be accepted or rejected by a DTP user.
  • A query may be performed of the DR 2115 to verify that the suggested unique ID is, in fact, unique to the DR 2115.
  • The DTPG 2116 typically provides verification and further responds to the DTP 2110 with sufficient information, such as a connection address, for the DTP 2110 to connect to a D3X 2130.
  • The next procedural step includes the DTP 2110 contacting a D3X 2130 to determine which D3S 2120, 2121 the DTP 2110 should connect with, e.g., D3S1 2120.
  • The D3X 2130 provides this information to the DTP 2110.
  • The DTP 2110 then connects to its assigned D3S 2120 to communicate its DTP connection address to the D3S 2120 and thereby enable further communications.
  • The DTP connection address is stored or cached at the D3S 2120.
  • The D3S address for the particular D3S 2120 assigned to a DTP 2110 is preferably stored or cached at the DTP 2110.
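  • As a concrete (though hypothetical) illustration of this first-connection sequence, the following Python sketch models the steps above. All class and method names (DTP, DTPG, D3X, D3S, register_unique_id, and so on) are assumptions for illustration; the patent does not define an API, and the simple modular hash shown is only a stand-in for the D3S assignment scheme described later in this document.

      # Minimal sketch of the first-connection procedure of FIG. 21.
      class D3S:
          def __init__(self):
              self.addresses = {}  # unique ID -> current DTP connection address

          def post_address(self, unique_id, address):
              self.addresses[unique_id] = address  # stored/cached at the D3S

      class D3X:
          def __init__(self, d3s_list):
              self.d3s_list = d3s_list

          def lookup_d3s(self, unique_id):
              # Consistent (predictable and repeatable) mapping of a unique ID
              # to a particular D3S; a real system would apply the assignment
              # scheme discussed later in the document.
              return self.d3s_list[sum(map(ord, unique_id)) % len(self.d3s_list)]

      class DTPG:
          def __init__(self, dr):
              self.dr = dr  # static data repository, modeled here as a dict

          def register_unique_id(self, suggested_id):
              if suggested_id in self.dr:
                  raise ValueError("suggested unique ID is not unique to the DR")
              self.dr[suggested_id] = {}
              return suggested_id

      class DTP:
          def __init__(self, suggested_id, connection_address):
              self.suggested_id = suggested_id
              self.connection_address = connection_address

          def first_connect(self, dtpg, d3x):
              # Step 1: register a unique ID with the DR by way of the DTPG.
              self.unique_id = dtpg.register_unique_id(self.suggested_id)
              # Step 2: ask the D3X which D3S this unique ID maps to.
              self.d3s = d3x.lookup_d3s(self.unique_id)  # cached at the DTP
              # Step 3: post the current DTP connection address to the
              # assigned D3S to enable further communications.
              self.d3s.post_address(self.unique_id, self.connection_address)

      dtp = DTP("john.doe@visitalk.com", "203.0.113.7:5000")
      dtp.first_connect(DTPG(dr={}), D3X([D3S(), D3S()]))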
  • Logical connections between various components of a distributed dynamic data system 2100 according to the embodiment described in FIG. 21 are illustrated in FIG. 22.
  • The foregoing discussion of the procedural steps involved in assigning a particular D3S (e.g., D3S1 2120) to a DTP (e.g., DTP1 2110) may be accomplished with the connection types provided in FIG. 22, assuming that D3S1 2120 is assigned to a unique ID operating DTP1 2110 and D3S2 2121 is assigned to a unique ID operating DTP2 2111.
  • The DR 2115 and associated DTPG 2116 are relatively isolated from the D3Ss 2120, 2121 in that the D3Ss 2120, 2121 cannot initiate direct contact with the DTPG 2116 or DR 2115.
  • This separation between the static portion (e.g., DR 2115 and DTPG 2116) and the dynamic portion (e.g., D3S1 2120 and D3S2 2121) is desirable to minimize traffic on the DR 2115.
  • Minimizing traffic on the DR 2115 is consistent with the design of this embodiment, which stores dynamic addressing information in a dynamic data portion (e.g., at D3Ss 2120, 2121) and static information in a static data portion (e.g., the DR 2115).
  • One aspect of the present invention includes a procedure for consistently and repeatably resolving a user's unique ID to a particular D3S 2120, 2121. This is done not only to initially assign a D3S 2120, 2121 to a unique ID, but also to permit a sender desiring to send information to a recipient user operating a DTP 2110, 2111 to locate a D3S 2120, 2121 assigned to that user by way of the recipient user's unique ID.
  • Preferably, this procedure is performed in a manner that minimizes traffic on the static portion of the data system 2100.
  • Procedural steps for communicating information from a sender (e.g., a DTP or DTPG) to a recipient DTP (e.g., DTP2 2111) follow.
  • The sender may already have the recipient's unique ID, but if the sender does not, then the sender may connect to the DR 2115 by way of the DTPG 2116 to query the DR 2115 for the desired unique ID using whatever identifying information the sender might have for the recipient (e.g., name, email address, or telephone number). Once the unique ID for the recipient user is obtained, the sender connects to the D3X 2130 with the unique ID to learn which D3S 2120, 2121 is assigned to the recipient user. After the D3X 2130 indicates which D3S (e.g., D3S2 2121) is assigned to the recipient user, one of several events might occur.
  • First, the sender might send information to the D3S 2121 for storage and subsequent retrieval by the recipient user.
  • Second, the sender might obtain the current DTP connection address associated with the recipient user and attempt to contact the recipient user directly - possibly to engage in two-way communication - preferably subject to consent of the recipient user for a connection to be established.
  • Third, the sender might post the sender's connection address to the D3S 2121 for the recipient user, with a request for the recipient user to initiate a return contact using the sender's connection address.
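  • Continuing the hypothetical names from the sketch above, the following illustrates the send-side resolution and the first two of these options; store_message is an assumed extension of the D3S for store-and-forward delivery, not an API defined by the patent.

      class MailboxD3S(D3S):
          # Assumed extension: a D3S may also store information intended
          # for a user who cannot currently receive it.
          def __init__(self):
              super().__init__()
              self.mailbox = {}

          def store_message(self, unique_id, message):
              self.mailbox.setdefault(unique_id, []).append(message)

      def send_to_user(message, recipient_unique_id, d3x):
          # Resolve the recipient's unique ID to its assigned D3S via the
          # consistent mapping held by the D3X.
          d3s = d3x.lookup_d3s(recipient_unique_id)
          address = d3s.addresses.get(recipient_unique_id)
          if address is not None:
              # Option two: contact the recipient directly at its current
              # DTP connection address (subject to the recipient's consent).
              return ("direct", address)
          # Option one: leave the message at the D3S for later retrieval.
          d3s.store_message(recipient_unique_id, message)
          return ("stored", None)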
  • A DR 2115 may store the D3S connection address for the D3S 2120 mapped to the unique ID. That is, the D3S connection address for the D3S 2120 assigned to a unique ID (the D3S to which a DTP 2110 associated with the unique ID connects) may be stored at the DR 2115 in addition to the user's unique ID. If this storage step is employed, it avoids the need for a DTPG 2116 to connect with a D3X 2130 to find a D3S connection address for the specific D3S 2120 mapped to a unique ID before each subsequent contact with the D3S 2120 is initiated.
  • The D3S connection address should not be updated on a constant basis; instead, it should only be updated when there is a change to the D3S connection address for the D3S 2120 assigned to a user - a change that should occur only when the D3S assignment scheme is modified, and such modification happens to affect the particular D3S 2120 assigned to the user. Even though a DTP connection address may change frequently, the association of a particular D3S with a unique ID (and therefore with a DTP associated with that unique ID) should not change so frequently.
  • FIG. 23 is a top-level diagram of a distributed dynamic data system 2200 according to another embodiment.
  • Within a D3 cluster 2240, spare D3Ss 2222, 2223 are provided as backup to D3S1 2220 and D3S2 2221.
  • If a D3S in service (e.g., D3S1 2220) should fail, the D3X 2230 will change the assignment scheme to swap a spare D3S (such as D3SN 2222) for the failed D3S 2220.
  • The mapping information indicating which D3S should be accessed (e.g., D3SN 2222) is then communicated to the DTP (e.g., DTP1 2210) mapped to the particular D3S 2222.
  • Such communication may be accomplished, for example, by automatically failing over a D3S to a D3X (e.g., D3X1 2230), and then by having the D3X 2230 communicate the new D3S connection address to the DTP 2210.
  • Inserting a spare D3S 2222 into service for the failed D3S 2220 limits the impact of the failure only to the traffic intended for the failed D3S 2220, rather than having the failure affect the entire system 2200. Rather than changing the entire D3 assignment scheme to reflect the update, preferably only the portion of the scheme corresponding to the failed D3S 2220 is changed, as the sketch below illustrates. When a failed D3S 2220 becomes ready for service again, it may assume the status of an available spare, ready to be swapped into service in case another D3S (e.g., D3S2 2221 or D3SN 2222) should fail.
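  • A minimal sketch of such a swap, assuming the cluster portion of the assignment scheme is represented as a list of D3S identifiers per cluster (a representation consistent with FIG. 28 but not mandated by the patent):

      def swap_spare(d3_cluster_array, cluster_id, failed_d3s, spare_d3s, scheme_id):
          # Replace only the positional entry for the failed D3S; the rest
          # of the assignment scheme is left untouched.
          positions = d3_cluster_array[cluster_id]
          positions[positions.index(failed_d3s)] = spare_d3s
          # Each change to the scheme yields a new assignment scheme
          # identifier (here, a serial number that wraps from 255 to 1,
          # per the example given later in this document).
          return scheme_id + 1 if scheme_id < 255 else 1

      clusters = {2240: ["D3S1", "D3S2"]}          # in-service D3Ss per cluster
      new_scheme_id = swap_spare(clusters, 2240, "D3S1", "D3SN", scheme_id=7)
      # clusters is now {2240: ["D3SN", "D3S2"]}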
  • Multiple D3Xs 2230, 2231 may be provided in the D3 cluster 2240.
  • Each of the D3Xs 2230, 2231 is preferably redundant and capable of handling the same traffic, so as to enhance reliability in case one D3X should fail.
  • One D3X 2230, 2231 is selected as the "lead" D3X to assume responsibilities including generation of the D3S assignment scheme.
  • One way of selecting the lead D3X is for all D3Xs 2230, 2231 to perform a lexicographical election. Such an election process implies a connection between the D3Xs 2230, 2231 within the D3 cluster 2240.
  • Preferably, the process of selecting the lead D3X includes a health check of all D3Xs 2230, 2231.
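  • The patent does not spell out the election algorithm; a common reading of a "lexicographical election" is that the healthy node whose identifier sorts first becomes the lead. A minimal sketch under that assumption:

      def elect_lead(node_ids, is_healthy):
          # node_ids: identifiers of all D3Xs in the cluster (e.g., hostnames).
          # is_healthy: health-check callable, per the preferred election.
          candidates = sorted(n for n in node_ids if is_healthy(n))
          if not candidates:
              raise RuntimeError("no healthy D3X available")
          # Every node applying the same rule to the same membership list
          # reaches the same result without further coordination.
          return candidates[0]

      lead = elect_lead(["d3x2.example.net", "d3x1.example.net"], lambda n: True)
      # lead == "d3x1.example.net"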
  • The lead D3X may communicate with all D3Xs 2230, 2231 to ensure that the current D3S assignment scheme is present at all times on all D3Xs 2230, 2231.
  • This enhancement provides additional reliability: if the lead D3X (e.g., D3X1 2230) should fail, another D3X (e.g., D3X2 2231) can assume the role of lead D3X.
  • Ensuring currency of the D3S assignment scheme includes the use of an assignment scheme identifier that is generated by the lead D3X (e.g., D3X1 2230) (or top-level executive service, as will be described hereinafter) each time that the assignment scheme is changed.
  • The identifier is preferably further stored, passed with each communication, and compared.
  • An assignment scheme identifier is generated by the D3X (e.g., D3X1 2230) to differentiate the current assignment scheme from a previous assignment scheme, so as to identify whether the D3S connection address was generated under the current assignment scheme.
  • Possible examples for the assignment scheme identifier include an incrementing serial number or a time stamp.
  • Because the D3S (e.g., D3S1 2220) associated with a particular DTP (e.g., DTP1 2210) may change due to failure of a D3S (e.g., 2220), the connection address previously stored by the DTP (e.g., DTP1 2210) may no longer be correct.
  • Use of the assignment scheme identifier allows a quick determination of whether a particular map (providing the connection address to a D3S) is consistent with the current assignment scheme. Once the assignment scheme identifier is generated, it is preferably stored at an affected DTP (e.g., DTP1 2210), the DR 2215, and all D3Ss 2220, 2221, 2222, 2223.
  • The assignment scheme identifier is preferably stored along with the D3S connection address for a DTP 2210, 2211. Storage of the assignment scheme identifier along with the connection address enables the identifier to be passed with all communications to the D3S (e.g., D3S1 2220), so as to permit the D3S 2220 to check whether the identifier has aged. This avoids the need for the connection address to be validated (i.e., by checking with a D3X (e.g., D3X1 2230)) before each attempted contact between a DTP (e.g., DTP1 2210) and the D3S (e.g., D3S1 2220).
  • A D3S 2220 may compare the communicated assignment scheme identifier to the "current" identifier stored at the D3S 2220. If the communicated identifier has aged, the request directed to the D3S 2220 can be returned to the contactor with an instruction to contact the D3X (e.g., D3X1 2230). By virtue of this comparison, the assignment scheme identifier prevents a DTP (e.g., DTP1 2210) from using what it believes is a correct D3S connection address to connect to the "wrong" D3S (due to a change in assignment). The assignment scheme identifier thus provides a rapid ability to verify that a connection request is based on the current assignment scheme.
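  • A sketch of that comparison, with the request and response shapes assumed purely for illustration:

      def handle_d3s_request(current_scheme_id, request_scheme_id, payload):
          # Compare the identifier carried with the request against the
          # identifier of the assignment scheme this D3S currently holds.
          if request_scheme_id != current_scheme_id:
              # The caller's cached map has aged; instruct it to re-resolve
              # the D3S connection address by contacting the D3X.
              return {"status": "stale-scheme", "action": "contact D3X"}
          return {"status": "ok", "payload": payload}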
  • A virtual connection address may be employed for the multiple D3Xs 2230, 2231 provided in the D3 cluster 2240.
  • Use of a virtual connection address permits either (or any) D3X (e.g., D3X1 2230) to respond to connection requests, with the other D3X(s) (e.g., D3X2 2231) available as a backup. This allows devices to be addressed by the same connection address rather than separate connection addresses.
  • One example of such a virtual connection address is a virtual IP address (VIP) on a TCP/IP network.
  • FIG. 24 is a top-level diagram of a portion of a distributed dynamic data system according to a further embodiment.
  • A load balancing device 2350 associated with a D3 cluster 2340 allows each D3X 2330, 2331 to handle a proportional load of the requests for connection addresses.
  • In other words, using a load balancing device 2350 permits distributed loading between the D3Xs 2330, 2331.
  • Preferably, the load balancing device 2350 is capable of checking the status of the D3Xs 2330, 2331. There still remains the need for a single lead D3X.
  • Using a load balancing device 2350 does not require that a virtual connection address be employed for the multiple D3Xs 2330, 2331.
  • The load balancing device 2350 may be a packet-based Layer 4 load balancing switch capable of switching on IP addresses, such as a load balancing switch manufactured by Alteon WebSystems. Product information regarding Alteon WebSystems' switches is available at "http://www.alteonwebsystems.com".
  • The D3Ss 2320, 2321, 2322, 2323 in a D3 cluster 2340 may connect to the network 2301 behind a switch that uses a D3S address to direct traffic to a particular D3S (e.g., D3S1 2320).
  • As provided in FIG. 24, the functions of load balancing between multiple D3Xs 2330, 2331 and switching for the multiple D3Ss 2320, 2321, 2322, 2323 may be performed by the same load balancing switch 2350, or group of parallel load balancing switches, so long as the switching device(s) has sufficient capacity to do both load balancing and switching.
  • The switch 2350 may respond to an IP address and further information contained in the URL string. Information contained in the URL string may be read by the switch and used to direct traffic to the proper D3S.
  • Preferably, multiple (redundant) load balancing devices are interposed between a D3 cluster 2340 and the network 2301 to enhance reliability.
  • Each load balancing device may have an available network connection to each D3X 2330, 2331 to permit each load balancing device to communicate with any D3X 2330, 2331 in case a single load balancing device should fail.
  • Preferably, the network connection is a common network segment.
  • A virtual connection address (such as, for example, a VIP on a TCP/IP network) may be employed for each group of multiple load balancing devices to permit either (or any) load balancing device to respond to connection requests to a D3X.
  • FIG. 24 further illustrates an available network connection between each D3X 2330, 2331 and each D3S 2320, 2321, 2322, 2323 within a D3 cluster 2340. This promotes reliability, since each D3S 2320, 2321, 2322, 2323 is capable of communicating with any D3X 2330, 2331 in case a device should fail.
  • Preferably, the network connection is a common network segment.
  • FIG. 24 further illustrates a distributed dynamic data Master service (“D3 Master”) 2360 associated with D3 clusters 2340, 2341.
  • A D3 Master service is a higher-level executive service compared to a D3X (e.g., D3X1, D3X2 2330, 2331); accordingly, where a D3 Master 2360 is present, it assumes some responsibilities formerly borne by the lead D3X. Namely, a D3 Master 2360 is responsible for generating the D3S assignment scheme and the assignment scheme identifier. Though depicted as separate services in FIG. 24, a D3 Master service may run as an extension of a D3X service.
  • A D3 Master does not necessarily operate on a different physical machine than a D3X, although the D3 Master and D3X operations may be segregated on different machines to enhance reliability.
  • Each D3X 2330, 2331 preferably checks the status of its subordinate D3Ss 2320, 2321, 2322, 2323, and passes D3S status information to the D3 Master 2360.
  • The D3 Master 2360 may generate a new D3S assignment scheme based on any change in D3S status presented to it by the D3Xs 2330, 2331.
  • The new assignment scheme and related identifier may be communicated by the D3 Master 2360 to the D3Xs 2330, 2331, which in turn may communicate these items to the subordinate D3Ss 2320, 2321, 2322, 2323.
  • An additional function of a D3 Master 2360 generally is to coordinate multiple D3 clusters 2340, 2370 (also as shown in FIG. 25).
  • FIG. 25 illustrates a distributed dynamic data system 2500 according to another embodiment in which multiple D3 clusters 2540, 2570, each connecting to a network 2501 with a load balancing device 2550, 2551, are provided.
  • At least one D3 Master (e.g., D3 Master 1 2560) is provided, and a D3 Master may redirect traffic between D3 clusters 2540, 2570 if necessary.
  • A D3 Master (e.g., D3 Master 1 2560) may maintain service availability information for each D3 cluster 2540, 2570 and, in the event that a D3 cluster (e.g., D3 cluster 1 2540) becomes unavailable, the D3 Master 2560 may redirect traffic to another D3 cluster (e.g., D3 cluster 2 2570) or group of further D3 clusters (not shown) by responding to a connection request with a correct connection address for an active D3 cluster (e.g., D3 cluster 2 2570) and providing an instruction (e.g., to a DTP (such as DTP1 2510) or a DTPG (such as DTPG1 2516)) to direct connection requests to the different D3 cluster.
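  • A rough sketch of this redirect behavior, with the availability map and method name assumed purely for illustration:

      class D3Master:
          def __init__(self, cluster_addresses):
              # D3 cluster id -> connection address for that cluster.
              self.cluster_addresses = dict(cluster_addresses)
              self.available = {cid: True for cid in self.cluster_addresses}

          def connection_address_for(self, cluster_id):
              # Respond to a connection request with a correct connection
              # address for an active D3 cluster, redirecting if the
              # requested cluster is unavailable.
              if self.available.get(cluster_id):
                  return self.cluster_addresses[cluster_id]
              for cid, up in self.available.items():
                  if up:
                      return self.cluster_addresses[cid]
              raise RuntimeError("no active D3 cluster")

      master = D3Master({1: "cluster1.example.net", 2: "cluster2.example.net"})
      master.available[1] = False
      # master.connection_address_for(1) -> "cluster2.example.net"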
  • Preferably, more than one D3 Master is provided to enhance reliability in case one D3 Master should fail.
  • The illustrated embodiment provides even greater redundancy, since each cluster 2540, 2570 has two associated D3 Masters 2560, 2561, 2562, 2563. Redundant D3 Masters are preferably capable of handling the same traffic. When multiple D3 Masters 2560, 2561, 2562, 2563 are used, a "lead" D3 Master is selected. One method for making this selection is for the D3 Masters 2560, 2561, 2562, 2563 to perform a lexicographical election, implying a network connection between all D3 Masters 2560, 2561, 2562, 2563.
  • Preferably, this election includes a health check of all D3 Masters 2560, 2561, 2562, 2563.
  • The lead D3 Master (e.g., D3 Master 1 2560) may communicate with all other D3 Masters (e.g., D3 Masters 2, 3, and 4 2561, 2562, 2563) to ensure that the current assignment scheme is present at all times on all D3 Masters 2560, 2561, 2562, 2563. This promotes reliability, since if the lead D3 Master (e.g., D3 Master 1 2560) should fail, another D3 Master can seamlessly assume the role of lead D3 Master.
  • Each D3 Master 2560, 2561 may have an available connection to each D3X 2530, 2531 and D3S 2520, 2521, 2522, 2523 within the cluster 2540, preferably via a common network segment. These available connections permit hierarchical failover in case of unit failure, from a D3S (e.g., D3S1 2520) to a D3X (e.g., D3X1 2530), and from a D3X (e.g., D3X1 2530) to a D3 Master (e.g., D3 Master 1 2560). Moreover, an entire cluster (e.g., cluster 1 2540) may fail over to another cluster (e.g., cluster 2 2570). The result provides high reliability.
  • A virtual connection address may be employed for the multiple D3 Masters 2560, 2561. This permits either (or any) D3 Master 2560, 2561 to respond to connection requests, with the other(s) available as a backup.
  • This architecture allows devices to be addressed by the same connection address rather than separate connection addresses.
  • One example of virtual connection addressing is the use of a VIP on a TCP/IP network.
  • An alternative to using a virtual connection address for multiple D3 Masters (e.g., D3 Masters 1 and 2 2560, 2561) at a D3 cluster (e.g., D3 cluster 1 2540) is to provide a load balancing device (not shown) between the D3 Masters 2560, 2561 on the one hand and the D3Xs 2530, 2531 and D3Ss 2520, 2521, 2522, 2523 on the other.
  • A load balancing device used with the D3 Masters 2560, 2561 allows each D3 Master 2560, 2561 within a D3 cluster 2540 to handle a proportional load of the requests for connection addresses; in other words, it permits distributed loading between the D3 Masters 2560, 2561.
  • Such a load balancing device is preferably capable of checking the statuses of the D3 Masters 2560, 2561. There still remains the need for a single lead D3 Master. Use of a load balancing device does not require that a virtual connection address be employed.
  • Network connections are depicted between the D3 Masters 2560, 2561, 2562, 2563, and also between each D3 Master and the D3Xs (e.g., D3X1 and D3X2 2530, 2531) and D3Ss (e.g., D3S1, D3S2, D3SN, D3SN+1 2520, 2521, 2522, 2523) in an intra-cluster network.
  • Such connections may be made by a public network such as the Internet.
  • The D3 Masters, D3Xs, and D3Ss may additionally be connected by a virtual private network.
  • This virtual private network connection provides an avenue for system administration by an administering authority, such as visitalk.com.
  • While only two DTPs 2510, 2511 are illustrated in FIG. 25, it is contemplated that a very large number of users could be simultaneously connected to the distributed dynamic data system 2500.
  • A system 2500 may be scaled with additional D3 clusters (not shown) to support literally millions of simultaneous users. Importantly, this scalability may be achieved with the incremental addition of relatively inexpensive equipment. Rather than requiring a massively powerful and highly expensive central database server to attempt to maintain a single centralized database for a large number of users, the distributed nature of a distributed dynamic data system according to the present invention permits low-cost server hardware to be utilized for maintaining the dynamic portion of the system data.
  • A distributed dynamic data system 2500 may include hardware devices or D3 clusters that are distributed over a wide geographical area. By virtue of network connections, multiple D3 clusters may be organized into virtual sites that may or may not be contained at the same location.
  • For communication between DTPs (e.g., DTP1 2510, DTP2 2511), a D3S (e.g., D3S1 2520 at D3 cluster 1 2540) may either enable a direct point-to-point connection by providing DTP addressing information, or the D3S 2520 may act as a switch carrying information from one DTP to the other.
  • The latter approach is less preferred from a system scalability perspective, however, since it generates a far greater amount of network traffic and consumes system resources.
  • FIG. 25 further illustrates the possibility of providing multiple DRs 2515, 2517 and associated DTPGs 2516, 2518 on the same system 2500.
  • Multiple DTPGs 2516, 2518 may send dynamic information to the same DTP (e.g., DTP1 2510).
  • A DTP user may have two instances of a Web browser active, or a single browser that supports multiple simultaneous sources, and may receive streaming data or messages from multiple DTPGs 2516, 2518 over the same period of time.
  • A DTP as provided in any of the foregoing embodiments may include the ability to cache information so as to reduce load and traffic on a distributed dynamic data system.
  • Examples of information that might be cached by a DTP include: the D3S connection address for the D3S assigned to a DTP; D3S connection addresses for D3Ss mapped to other DTPs; any connection information provided by a DR to speed re-connection; security items, including passwords; and data, including historical data, such as may be stored by a D3S assigned to a DTP.
  • Messages may be transmitted with embedded security tokens.
  • Use of a security token appended to data or messages permits a sender to be authenticated.
  • A security token sent with a message may be compared to a token generated by a receiver to establish whether the receiver desires to receive the message.
  • Most any device such as a DTP, D3S, D3X, D3 Master, or DR/DTPG may constitute a sender or receiver for these purposes. Messages from unauthenticated senders may be rejected.
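  • The patent does not specify a token scheme; one plausible reading uses a keyed hash over the message, so that the receiver can regenerate the token and compare. A minimal sketch under that assumption (the shared-secret model and function names are assumptions, not the patent's method):

      import hashlib
      import hmac

      def make_token(shared_secret: bytes, message: bytes) -> bytes:
          # Token the sender embeds with (appends to) the message.
          return hmac.new(shared_secret, message, hashlib.sha256).digest()

      def accept_message(shared_secret: bytes, message: bytes, token: bytes) -> bool:
          # The receiver generates its own token and compares; messages
          # from unauthenticated senders (mismatched tokens) are rejected.
          expected = make_token(shared_secret, message)
          return hmac.compare_digest(expected, token)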
  • While the foregoing embodiments described in connection with FIGS. 21-23 and 25 each included a relatively centralized static data repository and associated DTPG, a distributed dynamic data system according to the present invention may be operated without a centralized static data repository. Reference is made to FIG. 26, which provides multiple DTPs 2610, 2611, multiple D3Ss 2620, 2621, and a D3X 2630.
  • In this embodiment, static data (including, at a minimum, a unique ID) is associated with the user of each DTP (e.g., DTP1 2610).
  • A sender (e.g., DTP1 2610) may have minimum contact information for a desired recipient (e.g., DTP2 2611), while the desired recipient may not know the connection address of the sender (e.g., DTP1 2610), though this is not necessarily the case.
  • Each DTP 2610, 2611 is generally available at the discretion of the application user, such that each DTP 2610, 2611 does not always need to be available to the network 2601. Availability may be affected by the inactivity of DTP applications, or if a device operating a DTP is turned off.
  • A first DTP (e.g., DTP1 2610) initiates contact with the D3X 2630 to determine which D3S 2620, 2621 will be assigned to the DTP 2610 and to which the DTP 2610 should connect.
  • The D3X 2630 responds to this request by providing the desired information to the DTP 2610.
  • A connection is established between the DTP 2610 and a D3S (e.g., D3S1 2620) to permit data transfer and provide the D3S 2620 with the connection address for the DTP 2610.
  • A second DTP (e.g., DTP2 2611) may similarly contact the D3X 2630 to determine its assigned D3S.
  • The D3X 2630 responds to this request from the second DTP 2611 by providing the desired information to the second DTP 2611.
  • A connection is then established between the second DTP 2611 and its assigned D3S (e.g., D3S2 2621) to permit data transfer and to provide the D3S 2621 with the connection address for the second DTP 2611.
  • Once D3Ss 2620, 2621 have been assigned to the DTPs 2610, 2611, the first DTP 2610 may initiate contact with the D3X 2630 to determine which D3S 2620, 2621 contains the connection address for the second DTP 2611. After this information is returned to the first DTP 2610, the first DTP 2610 may contact the D3S 2621 assigned to the second DTP 2611 to obtain the DTP connection address for the second DTP 2611.
  • The first DTP 2610 may then establish communication (direct or otherwise) or pass information to the second DTP 2611.
  • Initial logical connections between various components of a distributed dynamic data system 2600 according to the embodiment depicted in FIG. 26 are illustrated in FIG. 27.
  • If a recipient DTP (e.g., DTP2 2611) is not connected, information intended for receipt by the recipient DTP 2611 may be stored at the D3S 2621 assigned to that DTP 2611 for retrieval when the recipient DTP 2611 connects.
  • This functionality requires D3Ss 2620, 2621 to be available to DTPs 2610, 2611 - not necessarily in a persistent fashion, but at least available on demand. If a first DTP 2610 is connected to the network 2601, then a second DTP 2611, using minimum contact information for the first DTP 2610, can obtain the DTP connection address for the first DTP 2610 and thereafter establish communication with the first DTP 2610.
  • The second DTP 2611 can establish direct communication with the first DTP 2610, i.e., without routing the substance of the communications through the D3Ss 2620, 2621.
  • Alternatively, communications between the first and second DTPs 2610, 2611 may be routed through a D3S 2620, 2621.
  • So long as a DTP 2610, 2611 is connected to the network 2601, it may cache (or otherwise store) information of interest to that DTP 2610, 2611. This caching tends to reduce the load on the system 2600, since it avoids the need to reinitiate the entire log-on and initial connection procedure.
  • The D3S assignment scheme referred to in conjunction with the foregoing embodiments will now be discussed, without reference to any particular preceding figure. Even though the distributed dynamic data system of the present invention may be highly segmented and highly distributed, the consistent assignment scheme enables high refresh rates to be obtained without overloading any particular segment or device of the system.
  • Since the assignment scheme (i.e., the scheme of mapping unique IDs for DTP users to particular D3Ss) resolves unique IDs to D3Ss, and a D3S operates on hardware (such as a network server), the assignment scheme is hardware-dependent.
  • The assignment scheme preferably includes a D3 cluster array, a D3 Key array, and an incrementing assignment scheme identifier (discussed previously).
  • The D3 assignment scheme is generated by the top-level executive service (e.g., the lead D3X or, if present, the D3 Master), but may also be stored on all D3Xs and D3Ss. Generation and updating of the assignment scheme are preferably initiated by a script. A new assignment scheme identifier is generated with each change to the assignment scheme.
  • Load balancing refers to the process of distributing communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server.
  • a) Mapping to a particular D3S
  • A preferred D3 assignment scheme used with the present invention combines elements of both static and empirical load balancing to determine which particular D3S within a D3 cluster should be mapped to the unique ID corresponding to a DTP user.
  • The static load balancing is accomplished by hashing each unique ID into a numerical value according to a predetermined algorithm and distributing these values over the number of D3Ss available within a particular D3 cluster using a D3 cluster array.
  • If a unique ID is an email address such as "john.doe@visitalk.com", one way of hashing such an address into a numerical value is to extract the two characters on either side of the "@" symbol, replace each character with its numerical position in the alphabet, multiply these numbers by one another, eliminate all but the last four digits, divide the resulting product by a scaling factor (such as 1x10^4), and then multiply the resulting quotient by the number of D3 Services available at the particular D3 cluster. A sketch of this hash follows.
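  • The following Python sketch implements the hash exactly as described, assuming lowercase alphabetic characters surround the "@" symbol; the function name and the six-D3S cluster size are illustrative only.

      def hash_unique_id(email: str, num_d3s: int) -> int:
          at = email.index('@')
          chars = email[at - 2:at] + email[at + 1:at + 3]  # e.g. "oe" + "vi"
          product = 1
          for c in chars:
              product *= ord(c) - ord('a') + 1  # position in the alphabet
          last_four = product % 10_000          # keep only the last four digits
          scaled = last_four / 10_000           # divide by the scaling factor
          # Scale onto the number of D3Ss in the cluster; truncation selects
          # the positional entry in the D3 cluster array.
          return int(scaled * num_d3s)

      # hash_unique_id("john.doe@visitalk.com", 6)
      # "oe","vi" -> 15 * 5 * 22 * 9 = 14850 -> 4850 -> 0.485 -> 2.91 -> position 2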
  • A D3 cluster array is used in conjunction with the hash value obtained from a unique ID.
  • Each D3 cluster has at least two D3Ss available, and the D3 cluster array contains positions for each available D3S in a D3 cluster.
  • Each available position in the D3 cluster array is populated with an identifier for a particular D3S within a D3 cluster.
  • The D3 cluster array is preferably a two-dimensional array.
  • The hash value for a unique ID marks a position in the D3 cluster array for a particular D3 cluster where the entry corresponding to a specific D3S may be found.
  • The hash value for a unique ID more specifically marks a position in the D3 cluster array for a particular D3 cluster where an entry corresponding to a load balancing switch output group may be found. Assuming that a D3 cluster array has six positions, all hash values from 0 up to 1 may be assigned to the first positional entry, all hash values from 1 up to 2 may be assigned to the second positional entry, and so on.
  • The algorithm may use integer division to ensure that all hash values result in whole numbers. Again, each positional entry identifies a specific D3S within a D3 cluster. In this manner, a predetermined hashing algorithm provides a repeatable method of allocating a particular D3S to a particular unique ID within a given D3 cluster, while balancing the load relatively evenly between the available D3Ss in a given cluster.
  • The use of positional entries in the D3 cluster array permits additional D3Ss to be added to a particular D3 cluster without changing the entire assignment scheme. It is contemplated that the number of D3Ss available at a particular site will be controlled by an administrative authority such as visitalk.com. The ability to provide additional D3Ss at a particular site when needed permits a certain extent of empirical load balancing based on usage. The use of positional entries in the D3 cluster array provides the further benefit of allowing individual D3Ss to be replaced by spare D3Ss, when necessary, without changing the entire assignment scheme.
  • b) Multiple clusters and use of the D3 Key
  • If a distributed dynamic data system includes only a single D3 cluster, then the above-mentioned method is sufficient to distribute loading between D3Ss. In a preferred embodiment, however, multiple D3 clusters are provided. If a large number of DTPs connect to a D3 network, then the issue of balancing load between the multiple D3 clusters must be addressed to prevent any particular device or node in the system from being overloaded. To respond to this concern, a further load balancing technique is used.
  • The invention contemplates use of a D3 Key that is assigned to groups of unique IDs for DTP users. Each unique ID for a DTP user should be assigned a corresponding D3 Key, but a D3 Key will not be unique to each DTP user.
  • A D3 Key is used to determine which D3 cluster contains the D3S mapped to a given unique ID.
  • The D3 Key may comprise a 10-digit number. The more digits that the D3 Key contains, the greater the number of possible D3 clusters that may be contained in a D3 system.
  • Each D3 cluster should have a minimum of one D3 Key, and more than one D3 Key can map to the same D3 cluster; however, a single D3 Key should not map to more than one particular D3 cluster.
  • A central authority, such as visitalk.com, will determine permissible values for D3 Keys based on the presence of particular D3 clusters.
  • Permissible D3 Key values will be provided to each entity that maintains a static data repository used with the D3 system. However, it is contemplated that the actual assignment of D3 Key values to unique IDs will be at the discretion of each static data repository maintaining entity. Use of the D3 Key to determine a particular D3 cluster will now be explained.
• the D3 key array is preferably a one-dimensional array having multiple positions, each populated with an entry identifying a particular D3 cluster. It is contemplated that the D3 key array will be maintained, updated, and propagated to at least one component in the D3 system by a central administrative authority, such as visitalk.com.
• a D3 Key represents a position in the D3 key array where an entry identifying a D3 cluster may be found. That is, the permissible values for entries in the D3 key array are limited by the clusters actually present in the D3 system. If it is assumed that a D3 key array has 10 positions, but only two D3 clusters are present, then the 10 positions in the D3 key array may be populated with entries corresponding to only the first or second D3 clusters.
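• A minimal sketch of this lookup, with illustrative array contents:

      # Hypothetical 10-position D3 key array for a system in which only
      # two D3 clusters are present; every entry must name cluster 1 or 2.
      d3_key_array = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]

      d3_key = 5
      cluster = d3_key_array[d3_key]   # position 5, counting from zero -> cluster 2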
  • the assignment scheme may include the information provided in FIG. 28, to which attention is now directed.
  • FIG. 28 provides an exemplary representation of a D3S assignment scheme. As implemented, the actual assignment scheme may or may not appear in the form of four quadrants. In the exemplary representation, however, the upper left quadrant 3001 signifies the assignment scheme identifier.
  • An example of such an identifier may be a serial number that increments sequentially with each change to the assignment scheme, up to a maximum value of 255, then returning to a value of 1.
  • the upper right quadrant 3002 signifies the D3 Key array, which contains positional entries, each identifying a D3 cluster.
  • the D3 Key array is only used if multiple D3 clusters are present.
  • the D3 Key array provides mapping to a particular D3 cluster based on a D3 Key, and allows empirical load balancing between clusters.
  • the lower left quadrant 3003 signifies one portion of the D3 Cluster array, providing the number of D3Ss operating (not standby spares) at each respective D3 cluster.
  • the lower right quadrant 3004 signifies the other portion of the D3 Cluster array, which provides positional entries, each identifying a D3S at a particular D3 cluster.
• the result of applying the unique user ID and D3 Key to the D3 assignment scheme is either a D3 map (e.g., 201;2;004) or a D3 URL (e.g., D3_2.visitalk.com/004).
  • the upper left quadrant 3101 represents the assignment scheme identifier; the upper right quadrant 3102 represents the D3 Key array populated with entries corresponding to D3 clusters 1 and 2; the lower left quadrant 3103 represents a portion of the D3 cluster array, indicating the number of D3Ss operating at D3 clusters 1 and 2, and the lower right quadrant 3104 represents the other portion of the D3 cluster array, with entries each identifying particular available D3Ss.
  • the first step is to use the D3 Key in conjunction with the D3 Key array to determine which D3 cluster should be used.
  • the D3 Key value of "5" represents the fifth position, starting at zero, in the D3 key array (located in the upper-right quadrant 3102) where the D3 cluster identifier will be found.
  • the fifth position, starting from zero, is populated with the value of "2," as indicated by the upper arrow in FIG. 29. This means that D3 cluster 2 will be used.
  • D3 cluster 2 corresponds to the lower row, containing an entry of "6" in the lower left quadrant 3103 (meaning that 6 services are active at that cluster).
  • the next step is to apply a hashing technique and consistent formula to convert the unique ID to a number corresponding to a position in the D3 Cluster array.
• One example of a known hashing technique is to extract the two characters before and after the "@" symbol in the unique ID, convert each character to a number corresponding to its position in the English alphabet, multiply these numbers together, eliminate all but the last 4 digits of the product, and divide the result by a scaling factor (such as 1×10^4). For purposes of this example, assume that the hash value obtained is 0.25. The hash value is then multiplied by the number of D3 Services available at D3 cluster 2 (here, the value indicated in the lower left quadrant 3103 corresponding to D3 cluster 2, or "6") to yield "1.5" as the result.
• In this example, the assignment scheme identifier is 21 (extracted from the matrix used to generate the map), and a value of "D3_2.visitalk.com/003" is obtained.
• the "2" following "D3_" refers to D3 cluster number 2, and the "/003" at the end of the URL refers to the particular D3S (D3S 3) located in D3 cluster number 2.
• If only a single D3 cluster is present, a D3 Key is not necessary, since the sole function of a D3 Key is to provide quick mapping information to a particular D3 cluster. If multiple D3 clusters are present, then the direct mapping information for a specific D3S may comprise the unique ID coupled with a D3 Key. However, a search for a D3S mapped to a particular user may still be performed with only a unique ID (i.e., without a D3 Key), since the mapping scheme will direct a unique ID to a specific D3S in a cluster. Therefore, only one D3S per cluster must be searched.
• The growth of private subnetworks connected to the public Internet has been primarily driven by security concerns. Private subnetworks may be shielded from the public Internet by devices such as proxy servers or firewalls. While a client or DTP within a private subnetwork connected to the Internet is generally allowed to send communications to public clients or DTPs without difficulty, such a client / DTP is generally not allowed to receive unsolicited communications unless they are in direct reply to an outgoing communication sent by the private client / DTP.
  • the presence of the public Internet and private subnetworks means that five different possibilities exist for communications between clients: (1) public to public; (2) public to private; (3) private to public; (4) private to private on the same subnetwork; and (5) private to private on different subnetworks.
• For connection types public-to-public, public-to-private, private-to-public, and private-to-private on the same subnetwork, direct point-to-point communications may be established using known solutions.
  • a distributed dynamic data system according to the present invention may be utilized for exchanging communications according to all five of these connection types, including between two private clients / DTPs located on different subnetworks, as illustrated in the top-level diagram of a distributed dynamic data system provided in FIG. 30.
  • Two separate DTPs 3210, 3211 may connect to a public network 3201 through firewalls 3218, 3219 that shield private subnetworks 3208, 3209.
  • a first user on a first subnetwork 3208 may send a message through a first firewall 3218 to a D3S (e.g., D3S 2 3221) mapped to a second DTP 3211 on a second subnetwork 3209, and include in that message the connection address for a D3S (e.g., D3S 1 3220) mapped to the first DTP 3210.
• To retrieve the message, the second user utilizing the second DTP 3211 on the second subnetwork 3209 should query its own D3S (e.g., D3S 2 3221); the communication initiated by the first DTP 3210 may then be passed through the second firewall 3219 to the second user as part of the response to the query initiated by the second DTP 3211.
• Because this method of communicating between the first and second users flows through the respective D3Ss 3220, 3221, it consumes more resources than would direct point-to-point communications between the first and second DTPs 3210, 3211. Nonetheless, a system according to the present invention will permit communications such as instant-type communications (including text or any other real-time messages) to proceed through the D3Ss 3220, 3221 mapped to the respective DTPs 3210, 3211 on different subnetworks 3208, 3209.
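• The store-and-poll relay just described may be sketched as follows; the class and method names are illustrative assumptions, not from the specification:

      # Each private DTP reaches a D3S only over outbound connections, so
      # messages are deposited at the recipient's D3S and picked up when
      # that recipient polls its own D3S.
      class D3Service:
          def __init__(self):
              self.pending = []
          def deposit(self, message):          # reachable via outbound traffic
              self.pending.append(message)
          def poll(self):                      # returned as the reply to a query
              messages, self.pending = self.pending, []
              return messages

      d3s_2 = D3Service()                      # D3S 2 (3221), mapped to the second DTP
      d3s_2.deposit("reply to me via D3S 1 (3220)")   # sent out through firewall 3218
      d3s_2.poll()                             # second DTP's query; the reply passes
                                               # back in through firewall 3219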
  • FIG. 18 is a schematic overview of a distributed dynamic DNS system 4000.
• "Dynamic DNS" refers to a method used by client machines to determine to which server a particular unique identification number, such as a Permanent Communication Number or PCN℠, should be registered.
  • a unique identification number may be any type of number or identifier (e.g., a 12-digit number) which is unique to a particular user, such as a PCN.
  • “Directory cluster” 4040 refers to a cluster of servers where a client 4010 registers a PCN so that other clients (not shown) may determine if it is online.
  • the distributed dynamic DNS system 4000 according to the present invention and illustrated in FIG.
  • the dynamic DNS cluster 4030 provides an algorithm and a list of dynamic directory clusters to a client (or user) computer 4010.
  • a client 4010 uses the algorithm in conjunction with a PCN to calculate which dynamic directory cluster (e.g., 4040) it should use to determine if the user associated with that PCN is online, or to register that PCN with the dynamic directory cluster.
• in one embodiment, the algorithm will be the PCN modulo the number of dynamic directory clusters. Other algorithms may also be used.
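• A minimal sketch of this modulo selection, with illustrative cluster names:

      def directory_cluster_for(pcn, clusters):
          # The PCN modulo the number of dynamic directory clusters.
          return clusters[pcn % len(clusters)]

      clusters = ["dir0.example.com", "dir1.example.com", "dir2.example.com"]
      directory_cluster_for(212345678901, clusters)   # deterministic for a given PCN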
  • Each dynamic DNS cluster (e.g., 4030) preferably consists of multiple load balanced web servers (not shown) for redundancy. Every web server in a dynamic DNS cluster has access to the same list of IP addresses for the dynamic directory clusters.
  • the list of dynamic directory clusters is relatively static, so it can be replicated between dynamic DNS clusters. Once a dynamic DNS cluster (e.g., 4030) has the list of dynamic directory clusters, it will check to see which of the clusters are online. It will then use the list of online dynamic directory clusters to fulfill requests from client computers.
  • a dynamic DNS cluster e.g. 4030
• a DNS cluster 4030 could determine which directory clusters are online and able to register PCNs. If the DNS cluster 4030 itself determines the online status of the directory clusters, it is possible that it will not register some dynamic directory clusters because connectivity to a particular dynamic directory cluster is not available through the Internet. However, if a dynamic DNS cluster 4030 cannot locate a directory cluster (e.g., 4040), it can be presumed that a client 4010 requesting a list of dynamic directory clusters from the dynamic DNS cluster 4030 will likely not be able to locate that dynamic directory cluster either. Each DNS cluster (e.g., 4030) preferably requests a health check from each directory cluster (e.g., 4040) intermittently.
• When a directory cluster 4040 is removed from a group of servers, the ability to force a client to retrieve the current list of directory clusters from a DNS cluster 4030 may be desirable.
• the dynamic directory cluster 4040 can be taken offline by simply removing it from the list of dynamic directory clusters supplied to the dynamic DNS cluster 4030. Since the dynamic DNS cluster 4030 only checks whether the dynamic directory clusters in the supplied list are online, any dynamic directory cluster not included in the initial list will never be sent to a client computer as a possible online directory.
• updating the dynamic directory list should be segmented into two phases, as sketched below. First, a client 4010 is forced to update its online status in two dynamic directory clusters: the first online status update is in accordance with the original dynamic directory cluster list, and the second online status update is in accordance with the revised dynamic directory cluster list.
• a client 4010 updates its online status with a certain periodicity, so it can be assumed that every client has updated its online status in both dynamic directory clusters once this period of time has elapsed. After every client has updated its online status in both dynamic directory clusters, a process of sending the revised dynamic directory list to clients checking online status may be initiated. Thereafter follows a waiting period for the time it takes the list of dynamic directory clusters to expire. After this expiration, none of the clients checking an online status will attempt to use the dynamic directory cluster that has been removed from the list.
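• A sketch of phase one, in which each status update is written under both the original and the revised lists; in-memory sets stand in for the directory clusters, and all names are illustrative:

      def cluster_for(pcn, clusters):
          return clusters[pcn % len(clusters)]     # the modulo algorithm above

      original_list = [set(), set(), set()]
      revised_list = original_list[:2]             # one cluster has been removed

      def update_online_status(pcn):
          cluster_for(pcn, original_list).add(pcn)
          cluster_for(pcn, revised_list).add(pcn)

      update_online_status(212345678901)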
  • the advantage to this approach is that a directory cluster (e.g., 4040) may be taken offline without affecting a client's ability to determine if a particular PCN is online.
• a second approach is to have the directory cluster that is being taken offline redirect the request to the appropriate directory cluster. This approach may, however, result in the client ending up with an offline status when the correct status for a PCN is online, if the client requesting the PCN online status makes the request before the client setting the online status has had a chance to update it.
• the Directory cluster 4040 preferably has access to a database containing each of the online PCNs.

b) Client Requests
  • the first is a client request from a web page
  • the second is a request by an application.
• Although these two requests are almost identical, they are handled differently in that one is handled solely by the client application, while the other is an HTTP request embedded in the web page by the web server.
  • the client computer 4010 or the web server will first check to see if its list of directory clusters is current. If it is not, it will request the algorithm and list of dynamic directory clusters from the dynamic DNS cluster 4030 (determined by some form of global server load balancing). It will use the algorithm in conjunction with a PCN to determine which dynamic directory cluster contains the online status of the PCN, making a request of the dynamic directory cluster for that PCN. The dynamic directory cluster 4040 returns the correct online status of the PCN. To list itself in the dynamic directory cluster, a client 4010 checks to see if its list of dynamic directory clusters is current. Once again, if it is not, it will request the algorithm and list of dynamic directory clusters from the dynamic DNS cluster 4030. It will then use the algorithm and its own PCN to determine which dynamic directory cluster (e.g., 4040) to register with. It will then send a registration request to that dynamic directory cluster.
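• The client-side flow just described may be sketched as follows; the request to the dynamic DNS cluster is stubbed and all names are illustrative assumptions:

      _cache = {"current": False, "algorithm": None, "clusters": None}

      def fetch_from_dns_cluster():
          # Stand-in for the request to the dynamic DNS cluster 4030.
          clusters = ["dir0.example.com", "dir1.example.com", "dir2.example.com"]
          return (lambda pcn: clusters[pcn % len(clusters)]), clusters

      def directory_for(pcn):
          if not _cache["current"]:               # list of directory clusters stale?
              _cache["algorithm"], _cache["clusters"] = fetch_from_dns_cluster()
              _cache["current"] = True
          return _cache["algorithm"](pcn)         # cluster holding this PCN's status

      directory_for(212345678901)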
• FIG. 19 is a top-level diagram of an Internet-based communications system 21, illustrating various concepts as disclosed herein.
  • a global electronic network such as, for example, the Internet 25.
  • a variety of Internet devices 22 connect to the Internet 25, for the purposes of carrying out communication over the Internet 25.
  • Internet device 22 is any device that is directly accessible by or has the capability to directly access the Internet.
  • An Internet device 22 may be, for example, a computer or a personal communication device (such as an IP telephone or videophone), and may access the Internet by wireless or non- wireless type of connection.
  • the particular method and manner in which an Internet device 22 accesses the Internet 25 is not important to the operation of the invention as broadly described herein.
  • a web site 26 connects to the Internet 25.
  • the web site 26 comprises a web server 27 and a directory of Permanent Communication Numbers stored, for example, in a database 28.
  • An example of at least a portion of the contents of a suitable directory is illustrated in FIG. 9, and includes such things as a user name, an indicator of the user's online status, and a unique Permanent Communication Number assigned to the user.
  • the database 28 may comprise a dynamic portion subject to relatively frequent updating, and a permanent portion subject to relatively infrequent updating.
  • a user may be assigned a Permanent Communication Number by, e.g., registering with the web site 26. Further details about the use of Permanent Communication Numbers are set forth later herein.
• When a first user desires to communicate with another user, the first user connects to the web site 26 and enters the target user's Permanent Communication Number, which gets transmitted to the web site 26. If the target user is on-line, the requesting user receives the target user's current IP address from the database 28 based upon the target user's Permanent Communication Number, and instant or live communication then ensues between the requesting user and the target user. In the event that the target user is not on-line, the web site 26 allows the requesting user to store a message in the video or voice mail for the target user. When the target user eventually comes on-line, the target user can retrieve messages from his or her mailbox.
• FIG. 20 is an architectural diagram of one embodiment of the Internet-based communications system illustrated in FIG. 19. Similar to FIG. 19, and as illustrated in FIG. 20, at the core of the communications system 31 is a global electronic network such as, for example, the Internet 35. A variety of Internet devices 32 (which may be any of the types of devices as discussed with respect to Internet devices 22 in FIG. 19) are shown connected to the Internet 35, for the purposes of carrying out communication over the Internet 35. As further shown in FIG. 20, a web site 36 connected to the Internet 35 comprises a web server 37 and a directory of Permanent Communication Numbers stored, for example, in a static database 40. A dynamic database 41 stores the on-line status of users associated with the Permanent Communication Numbers stored in the static database 40.
  • Users seeking to communicate with other users may access the web site 36 and enter the Permanent Communication Number of the target user, and obtain the target user's most recent IP address and current on-line status thereby. If the target user is not on-line, then the requesting user may leave a message in a mailbox for the target user. The message may be retrieved when the target user comes on line.
  • FIG. 1 illustrates a system cluster configuration as may be utilized in connection with one or more embodiments as described herein.
  • the global computer network known as Internet 51 is represented as a cloud.
  • a co-location service 100 is also shown as a cloud in accordance with the convention of showing various network structures and functions as a cloud representation, where the specific details of the implementation of the particular structure or functionality are not particularly significant.
  • Co-location service 100 in the system of the illustrative embodiment is provided by GlobalCenter, but may be any similar entity that is in the business of co-locating web services. Information regarding GlobalCenter is available on the Internet at the Internet address www.globalcenter.com.
  • Co-location service 100 provides a large facility direct connection for continuous monitoring of the server site.
  • Co-location service 100 is linked to a router 101 via a link 113.
  • Router 101 may comprise any suitable router unit.
• Router 101 provides connections to the Internet 51.
  • Router 101 provides a single point of entry from the system of the invention into the Internet 51. From a user's perspective, router 101 provides a single point of contact for users.
  • router 101 is coupled to two director switches 103, 107 via links 115, 117, respectively.
  • Each director switch 103, 107 is a commercially available unit.
• director switches 103, 107 comprise the ACE director available from Alteon WebSystems and described in detail in data sheets provided on-line at Alteon's web site located at "http://www.alteonwebsystems.com/products".
  • the Alteon ACE directors are characterized as having an 8 gigabyte backplane.
  • the ACEdirector is a Layer 4+ switch that includes software capability for high performance server load- balancing.
  • director switches 103, 107 are configured in a redundant configuration such that one director 103 is redundant to the other director 107 and only one director switch 103 or 107 is active at any time.
  • a link 119 is provided between director switches 103, 107.
• Link 119 is preferably a high-capacity link which, in the illustrative embodiment, is a one gigabit link. This link is a high capacity link so that, in the event failures occur in either link 115 or link 117, or if failures occur in the links between one director switch 103 or 107 and one of the enterprise switches 105, 109, then all traffic may be routed over link 119.
  • Each director switch 103, 107 is a switching network that balances traffic across multiple servers or other devices. Director switches comprise software and/or hardware configured so that each sends three streams of traffic into each of the enterprise switches 105, 109.
• One director switch (e.g., director switch 107) serves as the primary director switch, while the other director switch (i.e., director switch 103) serves as the secondary. The secondary director switch 103 remains quiescent or dormant until a fault or failure associated with the primary director 107 occurs.
  • the redundancy factor provided by the directors 103, 107 includes coverage of more than failure of one of the director switches 103, 107.
• the failure could also include a failure of either one of the links 115, 117 coupling the director switches 103, 107 to router 101.
• Each director switch 103, 107 routes traffic to enterprise switches 105, 109 and to the server cluster 111 beyond the enterprise switches 105, 109.
  • Link redundancy capability is provided between each director switch 103, 107 and the enterprise switches 105, 109.
  • three links are provided between each director switch and each enterprise switch.
• Links 121, 123, 127 connect director switch 103 to enterprise switch 105.
• Links 122, 124, 126 connect director switch 103 to enterprise switch 109.
• Links 131, 133, 137 connect director switch 107 and enterprise switch 105.
• Links 132, 134, 136 connect director switch 107 and enterprise switch 109.
• the number of links between each director switch 103, 107 and each enterprise switch 105, 109 corresponds to the number of networks included in each enterprise switch 105, 109.
  • each director 103, 107 is coupled to three network portions of each enterprise switch 105, 109. Utilizing three different traffic paths between each director 103, 107 and each enterprise switch 105, 109 increases the ability to push more traffic through the system and to segregate that traffic into additional networks at the server level. There are three routes of traffic into each enterprise switch 105, 109 from each director switch 103, 107.
• Enterprise switches 105, 109 are, in the illustrated embodiment, very large capacity switches. Each enterprise switch 105, 109 is highly redundant, i.e., each includes three networks and each can accommodate a very large number of users. Each enterprise switch 105, 109 has a large switching fabric through which a high volume of data may be switched.
  • the enterprise switches in the illustrative embodiment are characterized by 24 gigabit backplanes.
  • a particularly well suited enterprise switch which may be used in the system of the illustrated embodiment is the Catalyst 4000 Series available from Cisco and described in various documentation available at Cisco's web site located at "http://www.cisco.com".
• Both enterprise switches 105, 109 are, in the illustrated embodiment, active all the time, but if a failure disables one of the enterprise switches 105, 109, only half the network will be lost. The loss affects capacity of the system, but it does not take down the network. If only one of the enterprise switches were active at a time, and the other enterprise switch ran in a hot standby mode, the performance of the system would be determined by only one enterprise network. By running both enterprise switches active, the overall system performance is doubled. Each enterprise switch 105, 109 independently communicates with the server cluster 111. If either one of enterprise switches 105, 109 goes down, the entire traffic load can be handled by the remaining enterprise switch. The functionality of the cluster may be maintained if one enterprise switch 105, 109 is lost, but not necessarily the load.
• various servers can communicate with each other through the enterprise switches 105, 109 via virtual local area networks (VLANs).
  • This arrangement serves to divide up the traffic.
  • One VLAN cannot see another. Traffic is segregated, bandwidth is improved, and contention among resources on the network is reduced.
  • each VLAN shows the most relevant portion of the network to provide an understanding of its structure and functionality. For purposes of clarity, not all elements of FIG. 1 are repeated in each of the VLAN drawing figures.
• Lightweight directory access protocol (LDAP) is used with a service that allows a user with a certain type of software to log on and connect to the server and see a directory of other people who are logged into that server.
  • VLAN2 shown in FIG. 2 is the VLAN that is used by Internet traffic users.
• each server LDAP1, LDAP2 has two Internet protocol (IP) addresses: Server LDAP1 has IP addresses 10.2.1.1 and 10.6.1.1, and server LDAP2 has IP addresses 10.2.1.2 and 10.6.1.2.
• traffic from router 101 and director switches 103, 107 is routed to enterprise switches 105, 109.
  • Director switches 103, 107 distribute that traffic to VLAN 2.
• Director switches 103, 107 determine what type of traffic it is and which server the traffic goes to; the director switches 103, 107 then forward the data packets to VLAN 2 and, ultimately, to the appropriate server.
  • Director switches 103, 107 determine that the traffic is LDAP traffic and determine which of the four LDAP IP addresses the traffic is going to.
• Although each server LDAP1, LDAP2 has two IP addresses, only one instance of LDAP is running on each server LDAP1, LDAP2; each server LDAP1, LDAP2 may, however, be served from either of its two IP addresses.
• each IP address is associated with a separate network interface card at the server LDAP1, LDAP2. Accordingly, each server LDAP1, LDAP2 includes two network interface cards for redundancy. If one network interface card fails, the traffic is routed to the other network interface card of that server.
  • the network interface cards used in the servers of the illustrated embodiment are commercially available units. Each network interface card includes dual ports and therefore supports two link connections and can therefore have two IP addresses, one for each of the dual ports.
• Once the director switches 103, 107 determine which server LDAP1, LDAP2 has the least traffic, traffic is forwarded to the appropriate server IP address.
• the selected server LDAP1 or LDAP2 processes the user traffic and provides a response back through the VLAN.
• To the selected server, the operation is like a typical server request in which the server is plugged into a network and is working.
• Server LDAP1, LDAP2 simply sends a response back to the appropriate address, which is carried in the packet.
  • the system of the invention provides balancing at the network interface card level as contrasted with balancing at the server level.
• the system of the invention includes a VLAN having three Internet Information Servers (IIS) IISMTS1, IISMTS2, IISMTS3.
• the VLAN including servers IISMTS1, IISMTS2, IISMTS3 operates in exactly the same manner as the VLAN 2 servers LDAP1, LDAP2.
• Each server IISMTS1, IISMTS2, IISMTS3 utilizes IIS software, which is commercially available from Microsoft.
• Each one of servers IISMTS1, IISMTS2, IISMTS3 supports two functions: each provides service via IIS and provides back-room software functionality with Microsoft Transaction Server (MTS).
  • MTS objects perform certain functionality on the network.
• the servers IISMTS1, IISMTS2, IISMTS3 are physically separate servers from the LDAP servers LDAP1, LDAP2, but each works in the same way.
• Server IISMTS1 has two IP addresses, 10.2.1.3 and 10.6.1.3, and is linked to enterprise switch 105 via link 301 and to enterprise switch 109 via link 303.
• Server IISMTS2 has IP addresses 10.2.1.4 and 10.6.1.4, and is linked to enterprise switch 105 via link 305 and to enterprise switch 109 via link 307.
• Server IISMTS3 has IP addresses 10.2.1.5 and 10.6.1.5, and is linked to enterprise switch 105 via link 309 and to enterprise switch 109 via link 311.
• director switches 103, 107 sense when a user is utilizing a browser such as Netscape or Internet Explorer and the user requests a page by sending a URL. Director switches 103, 107 determine that the URL request is to be routed through IIS for page service.
• Server IISMTS1 executes one or more objects that cause something else to occur. For example, another object may be displayed, an entry may be added to a database, or an order may be processed.
• Two network interface cards, each corresponding to one IP address in each server IISMTS1, IISMTS2, IISMTS3, provide redundancy, so if any one interface card fails, the server switches activity to the second network interface card in the same server. If a server IISMTS1, IISMTS2, IISMTS3 fails, it fails over to the other two servers.
  • the system provides triple redundancy at the server level and single redundancy within a server for IIS and MTS.
• In this VLAN there is redundancy to each server IISMTS1, IISMTS2, IISMTS3.
  • the VLAN management network of the system of the invention is shown as VLAN3.
  • VLAN3 includes, inter alia, a management server MGT.
  • This VLAN management network provides server management as well as switching infrastructure management. Remote management capability is provided by connection through a Point-to-Point Tunneling Protocol ("PPTP") link 400 from the Internet 51.
  • VLAN management network VLAN3 is also used for Sequel Server connectivity as well as LDAP replication.
• each of the network servers MGT, IISMTS1, IISMTS2, IISMTS3, LDAP1, LDAP2, SQL1 and SQL2 has connections to both enterprise switches 105, 109 via network interface cards located at the respective servers.
  • the network interface cards are not shown in the drawing Figures to reduce drawing clutter, but those skilled in the art understand that each link connection to a server as shown in the various figures has a network interface card connection at the server.
  • Management server MGT has link 401 to enterprise switch 105 and link 403 to enterprise switch 109.
• IISMTS servers IISMTS1, IISMTS2, IISMTS3 have links 405, 409, 413 to enterprise switch 105 and links 407, 411, 415 to enterprise switch 109.
• LDAP servers LDAP1, LDAP2 have links 417, 421 to enterprise switch 105 and links 419, 423 to enterprise switch 109.
• Sequel servers SQL1, SQL2 have links 425, 429 to enterprise switch 105 and links 427, 431 to enterprise switch 109.
• One IP address is assigned per server. The servers will fail over from one link to the other in the event of a network interface card failure. Upon occurrence of a network interface card failure, the IP address is automatically transferred to the active network interface card connection.
• In VLAN3, the two LDAP servers LDAP1, LDAP2 are the same as shown in VLAN2 of FIG. 2, but their connections are different.
  • hardware is managed from the server MGT.
• the management server MGT and each sequel server SQL1, SQL2 each have two physical network interface cards, both dual port. Whenever an IISMTS server needs to talk directly to a sequel server, it will go through network VLAN3.
• the sequel servers SQL1, SQL2 are the database depository for any data collected. Searches are conducted against the sequel server databases.
• An Internet user will connect to one of the IIS servers IISMTS1, IISMTS2, IISMTS3, but because director switches 103, 107 perform load balancing, the user cannot predict through which server he enters the system via the URL address.
• one of the director switches 103, 107 passes off the request to one of the IIS servers IISMTS1, IISMTS2, IISMTS3.
  • the IISMTS server to which the user is connected passes a request within VLAN 3.
• the request is routed to a sequel server SQL1 or SQL2, as the user request is for a database operation.
  • a remote management facility can connect to management server MGT via the Internet 51 and link 400, and perform any management needed with the servers, such as reconfiguring software and monitoring resources to identify loading.
  • a primary purpose of this network VLAN3 is to support communication between servers and to facilitate control of the servers via a remote management station.
• Management server MGT can access any of the servers IISMTS1, IISMTS2, IISMTS3, LDAP1, LDAP2, SQL1, SQL2, and it can access enterprise switches 105, 109 and perform configuration tasks.
  • VLAN3 functions as an internal "housekeeping" network that maintains all database data and LDAP traffic.
  • the remote management station accesses the management server MGT via a point-to-point tunneling protocol, which is a way of accessing server MGT using encryption.
• a further VLAN is provided in the system of the invention as shown in FIG. 5.
• VLAN network VLAN4 includes LDAP servers LDAP1, LDAP2. Enterprise switches 105, 109 each have access to both LDAP servers LDAP1, LDAP2.
• LDAP server LDAP1 has, in the illustrated embodiment, IP addresses 10.4.1.1 and 10.7.1.1 and is linked to enterprise switch 105 via link 501 and to enterprise switch 109 via link 503.
• LDAP server LDAP2 has IP addresses 10.4.1.2 and 10.7.1.2 and is linked to enterprise switch 105 via link 505 and to enterprise switch 109 via link 507.
  • LDAP server LDAP1 has two IP addresses.
• VLAN4 provides a pool of the LDAP servers for internal system access only by the transaction servers IISMTS1, IISMTS2, IISMTS3.
  • VLAN2 is for Internet users whereas VLAN4 is for transaction servers in the server cluster 111.
• The system provides an LDAP directory, which may be supported almost entirely out of the box by any appropriate LDAP application (e.g., Microsoft LDAP), and a permanent directory, which is a directory of all members of the service provided by the system.
  • the members identified in the permanent directory may or may not be currently on-line on the Internet.
• This permanent directory database is maintained by the sequel servers SQL1, SQL2.
  • a second directory provides a list of all the permanent directory members who are on-line at substantially the time a request is made.
  • One of the servers IISMTS1 , IISMTS2, IISMTS3 executes an MTS object to do a look up against active members and will indicate whether or not a member is on line. As will be explained elsewhere, if a member is on line, a call can be made to the active member and real-time communication can occur.
  • VLAN4 supports that kind of traffic so that IISMTS servers IISMTS1 , IISMTS2, IISMTS3 can fire MTS objects that perform certain operations against the LDAP directory. This traffic is segregated from all other traffic.
• VLAN 5 is the network used for traffic destined for LDAP servers LDAP1, LDAP2.
• VLAN5 has one primary side and a standby side. The destination is a virtual IP address that is provided by a director switch 103 or 107. Once the virtual IP address is utilized, traffic will be load-balanced to the two LDAP servers LDAP1, LDAP2.
• Each IISMTS server IISMTS1, IISMTS2, IISMTS3 has one IP address.
  • server IISMTS1 has address 10.5.1.3 and is linked to enterprise switch 105 via link 601 and is linked to enterprise switch 109 via link 603.
  • Server IISMTS2 has address 10.5.1.4 and is linked to enterprise switch 105 via link 605 and is linked to enterprise switch 109 via link 607.
  • Server IISMTS3 has address 10.5.1.5 and is linked to enterprise switch 105 via link 609 and is linked to enterprise switch 109 via link 611.
• the director switch determines the loading of LDAP servers LDAP1, LDAP2 and transmits the request to the more lightly loaded server.
  • the LDAP result is sent back to the director switch 103 which presents the results back to the requesting object.
  • a request will cause an MTS object on VLAN 5 to fire.
  • the system would then route the request back up to director switch 103 to VLAN 3.
  • one server talks to another server across virtual networks where the resource that it needs, such as the LDAP directory, is not on the same virtual network.
• a significant advantage of the system of the present invention as illustrated is that it is an Ethernet type of network in which contention is reduced significantly. Contention is reduced by creating artificial separate networks so that, for example, whenever a sequel server SQL1 is talking to an LDAP server LDAP1, that communication takes place over a particular VLAN. None of the other VLANs hears the communication. When MTS server IISMTS1 is talking to LDAP server LDAP1, that happens over a particular VLAN and therefore does not interfere with other traffic. Thus contention is greatly reduced.
  • a very complex Ethernet type network is formed into multiple simpler Ethernet type networks, each of which is still contention-based but which has a reduced volume of traffic.
  • the system of the present invention provides a high level of security. Traffic cannot pass from one VLAN to the next without authority of either a director switch 103, 107 or an enterprise switch 105, 109.
• the VLAN networks are effectively hidden from the Internet. In other systems, if a "hacker" hits a switch he will either get through the switch or not. In the present system, even if the hacker were to get through the switch, he could still not get into any VLAN. Depending on what kind of traffic the hacker is sending, not only would he have to spoof or fool his way through the switch, but the hacker would also have to know how to get from the switch into the particular VLAN that he wanted access to. However, the VLANs are hidden from the entire Internet via the director switches.
• Each VLAN, though called a virtual local area network, is separate from the others.
• If servers and switches are on the same VLAN and on the same network, then they can talk to each other.
• FIG. 3 shows IISMTS servers IISMTS1, IISMTS2, IISMTS3 all on the same VLAN2.
  • Exemplary IP addresses for each server port are indicated.
  • the address includes a network number portion and a host address portion.
• in the exemplary IP address 10.2.1.3, the network number portion is 10.2.1 and 3 is the actual host address portion.
• the only other switches and servers that can communicate with IP address 10.2.1.3 are ones that have an IP address beginning with 10.2.1, i.e., IP address 10.2.1 defines a network.
• IISMTS server IISMTS1 has a second link connection to enterprise switch 109 which carries IP address 10.6.1.3. That IP address is on a completely different network, which may be identified as network 10.6.1. So the only communications that can occur with IP address 10.6.1.3 are with other servers or switches having addresses beginning with 10.6.1, which in VLAN 2 shown in FIG. 3 are servers IISMTS2, IISMTS3 with IP addresses 10.6.1.4 and 10.6.1.5. The VLANs in the system of the invention actually separate traffic. The only way to make two networks talk to each other is by a router. Each director switch 103, 107 is, among other things, a router. So each director switch 103, 107 can communicate with the different networks.
  • a director switch 103, 107 can communicate with either the 10.2 or the 10.6 side of the servers IISMTS1 , IISMTS2, IISMTS3 of VLAN2.
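• The addressing rule just described may be sketched as follows, assuming the network number portion is the first three octets as in the examples above:

      def same_network(ip_a, ip_b):
          # Two addresses can talk directly only when their network number
          # portions (the first three octets here, e.g., 10.2.1) match.
          return ip_a.split(".")[:3] == ip_b.split(".")[:3]

      same_network("10.2.1.3", "10.2.1.4")   # True: same VLAN network
      same_network("10.2.1.3", "10.6.1.3")   # False: requires a router (director switch)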
  • the system of the present invention provides both a dynamic and static directory.
• the dynamic directory provides a list of users currently on-line, while the static or permanent directory provides a list of members of the services supplied by the server or related servers and system.
  • FIGS. 7 and 8 are useful for understanding the dynamic directory.
• When an Internet user initially turns on his or her computer, loads the appropriate communications software (e.g., a client program conforming to the H.323 specification), and enters the name or address of a site that he or she desires to access, the client software automatically connects to the server associated with that site.
  • the server obtains user information via the client software.
  • the user information is stored in an LDAP dynamic directory and for as long as the user is connected to the LDAP server, the user information is maintained. The information is not stored to a permanent directory and when the user drops his or her connection to the server, the user information is dropped.
  • a permanent directory which may be on a different server, or may be part of the same logical database.
  • the permanent directory includes all users which have chosen to register with the service provided.
  • an interaction is provided between the dynamic and permanent directories.
  • Users stored in the permanent directory are offered an opportunity to register at the site in return for various service and/or product offerings that are made available. Users register with their name, address, and all other relevant information.
• the users become part of the permanent list whether they are connected to the server or not, and they are always on that list. Because it is desirable to develop an increasingly large permanent directory, the system of the present invention is unique in that it actively solicits membership, as shown in the flow diagram of FIG. 8.
• the user's email address is extracted from the user's client software at step 803 and is added to the dynamic directory as indicated at step 805.
  • the permanent directory is accessed and the user's address is looked up at step 806. If the user is not already listed in the permanent directory, the user is listed in the permanent directory and flagged to indicate that the user has been sent an invitation to register at step 807.
• An instant email is sent to the user, based upon the email address provided to the server from the user's client software, at step 808. The email will provide an invitation to join the permanent directory. If the user has previously registered, an email message may be automatically sent to him to provide specific information, as indicated at step 813.
  • the information stored in the dynamic directory is relinquished.
  • the identifying information is not consciously provided for collection at the time it is collected.
• when the user installs the client software (i.e., the H.323 software), the user enters an email address and other information so that any server to which the user subsequently connects is provided that information.
• In some cases, the information provided is intentionally deceptive or inaccurate because the user does not want to have his or her real identity known.
  • the present system monitors email returns at step 819.
• If an email is returned, the presumption is that the email address is incorrect, and the user will be dumped from the dynamic directory as indicated at step 821. This is done, for example, to eliminate pornographic, foul, or obscene bogus email addresses, which are frequently used where directory listings of users are accessible on the Internet. If no email is returned within the period of time for an auto return, the registration process may be initiated at step 823.
  • users may become listed in the permanent directory in one of two ways. Either they visit the web site and register on their own, or else a trigger was fired which caused an entry to be made in the permanent directory without the user knowing that the entry was being made. Thus the user shows up in the permanent directory whether or not the user is currently on-line.
  • the permanent and dynamic directories are merged as shown in FIG. 9.
  • one tabular column of the display will include an indication that indicates which registrants are presently on-line.
  • a flashing spot giving the appearance of a flashing green light indicates that a registrant is on-line.
• When a user logs into the permanent directory, as the web site is downloading the permanent directory list to the user, it cross-references with the dynamic directory to see if any of the permanent directory entrants are on-line. As each registrant is listed, the dynamic directory is checked to see if the registrant is on-line, and a visual indication is provided on the displayed list.
  • the result of the utilization of the dynamic directory, the permanent directory and the merge is that users know if other registrants are on-line. This provides the capability of establishing real time communications via audio and/or audio-video communication.
• the merged list as displayed includes a connect button or icon which permits establishing communication in real time. In the event that a desired registrant is not currently on-line, other services may be utilized by the user.
  • each computer includes a connector object program which is loaded into the computer.
  • a connector object program may, for example, be downloaded into the computer from the visitalk.com website.
  • the connector object preferably runs in the background as a non-visual program.
  • the connector object maintains a connection to the non-visual directory. It does not return a list to the computer, but instead polls the system director 103, 107 (e.g., using a Ping command) to let the system director 103, 107 know that it is still on-line.
  • the permanent directory will know whether a user is on-line because the connector object maintains a connection to the system.
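• A background connector object of this kind might be sketched as follows; the transport is stubbed, and the polling interval and function names are assumptions:

      import time

      def ping_system_director(pcn):
          # Stand-in for the Ping-style poll to the system director 103, 107.
          print("still on-line:", pcn)

      def connector_object(pcn, interval_s=60, polls=3):
          # A real connector would loop for as long as the device is powered up.
          for _ in range(polls):
              ping_system_director(pcn)
              time.sleep(interval_s)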
• FIG. 10 illustrates a system architecture in accordance with one embodiment as disclosed herein. As illustrated in FIG. 10, a plurality of system clusters 111 are connected to the Internet 51. Each system cluster 111, which may be located in a different geographic area, serves as a communications portal to the Internet 51.
  • Each system cluster 111 is substantially the same from a functionality standpoint, but the various system clusters 111 may have different numbers of servers connected. Each system cluster of the invention is readily scaleable up in number of servers connected to the enterprise switches 105, 109. One reason for providing geographically separate clusters is so that long distance telephone access charges for users to access the system clusters 111 may be minimized.
  • Each system cluster 111 provides communication services for its geographic area via the Internet and between other geographic areas also via the Internet 51.
  • Each system cluster 111 may be accessed by users having a variety of Internet devices 71 which include, by way of example (not limitation), computer terminals and personal communication devices such as pagers, phones, video devices and the like.
  • management centers 81 provide the system cluster management functions described above in conjunction with the management server MGT.
  • the permanent directory provides information for users who have registered to use services provided by the system.
  • the dynamic directory provides information on users who are logged on to the web site serviced by a cluster in one embodiment and have their Internet device activated or turned on in another embodiment.
• each system cluster 111 maintains its own directories. To assure that each system cluster directory contains up-to-date information regarding who is currently logged on to the system, communications paths are established between the system clusters 111 to exchange directory update information between the system clusters. Each system cluster 111 will thus maintain a substantially complete and updated directory.
  • each system cluster will periodically broadcast directory changes that have occurred during the immediately prior predetermined time period to all other system clusters 111.
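• The periodic broadcast of directory deltas may be sketched as follows; in-memory dictionaries stand in for the remote clusters, and all names are illustrative:

      # One directory copy per peer system cluster.
      peer_directories = [dict(), dict(), dict()]

      def broadcast_changes(changes):
          # Apply the immediately prior period's deltas at every peer cluster.
          for directory in peer_directories:
              directory.update(changes)

      broadcast_changes({"2123 4567 8901": "on-line at 10.2.1.50"})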
• FIG. 11 illustrates the Internet 51 connections between several of the system clusters for broadcasting directory updates to other system clusters 111.
  • an Internet device is any device which can access or be accessed by the Internet and includes all manner of devices such as computers, communication devices such as telephones, videophones, cameras, keyboards and any other input/output device which is connectable to the Internet either directly or indirectly.
  • a personal communications device may be used as an Internet device.
  • each user of the system in one embodiment of the invention has a personal identification code or Permanent Communication Number.
  • the Permanent Communication Number is a permanent personal identification code that is pre-assigned to the registered user of the communication services provided by the system of the invention. Whenever a registered user enters his or her personal identification code into an Internet device, that Internet device becomes identified in the system directories as the Internet device at which the registered user is active. With this type of an arrangement, a registered user can receive communications at any Internet device so long as the user has entered his or her Permanent Communication Number on the Internet device.
  • the system of the invention will update the directory listing for each user to overwrite any entries for prior Internet devices at which the user has registered his or her Permanent Communication Number.
  • the interactive operation of the registration of a user using a Permanent Communication Number with the system of FIG. 10 is shown in the flow diagram of FIG. 12.
• a user registers with the system at step 1201, providing identification information including a name, a billing address and credit card information.
• the system assigns to the user, at step 1203, a Permanent Communication Number which is unique to the user.
  • a permanent directory entry is made for the registered user at step 1204.
• the user may, as indicated at step 1205, enter the personal identification code at any Internet device such as device 1007 shown in FIG. 10.
• upon entry of the Permanent Communication Number at Internet device 1007, the Internet device 1007, at step 1207, utilizing a connector object as described above, accesses one of the system clusters 111.
  • the system cluster 111 verifies that the Permanent Communication Number is a valid code at step 1209. If the Permanent Communication Number received at the system cluster 111 is not a valid Permanent Communication Number, service to the Internet device 1007 is denied as indicated at step 1211. If the personal identification code is a valid Permanent Communication Number, the permanent directory is updated to indicate that the user is accessible on the system at step 1213.
• the system cluster 111 in one embodiment of the invention will return information to the Internet device 1007 to indicate, at step 1215, whether the Internet device 1007 is accessible via the system or whether service is denied.
• Once activated, the Internet device may receive incoming calls via the system. Subsequently, if the user activates a second Internet device 1009 using the same Permanent Communication Number, the process is repeated and the directory is updated with the IP address of the Internet device 1009. The prior directory entry is overwritten, and all incoming calls to the user will now be routed to the Internet device 1009. In any instance in which the directory is updated at the system cluster 111 at which the user activates an Internet device 1007 or 1009, the directories at all the system clusters 111 will be updated to reflect the status of the user as being accessible on the system, as described above.
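• The overwrite semantics may be sketched as follows; the data structures and values are illustrative stand-ins:

      # The directory maps each PCN to the most recently activated device.
      directory = {}

      def register_device(pcn, device_id, ip_address):
          directory[pcn] = (device_id, ip_address)   # prior entry is overwritten

      register_device("2123 4567 8901", "device-1007", "10.2.1.50")
      register_device("2123 4567 8901", "device-1009", "10.6.1.77")  # calls now route here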
  • the system of the invention may also be used to establish communications between Internet devices and non-Internet devices.
  • a user at Internet device 1009 desires to establish a communication with a conventional telephone type device
  • the registered user at Internet device 1009 can also access a telephone directory listing and launch a call to the conventional telephone device via the system of the invention.
• the system cluster 111 which Internet device 1009 accesses receives the telephone number and, through a directory lookup at step 1403, identifies the system cluster 111 in geographic proximity to the telephone switching center 1421 with which the telephone number is associated, in order to minimize telephone costs associated with placing such a call.
• the Permanent Communication Number is an identifier which is preferably uniquely assigned to an individual. The assignment of each Permanent Communication Number is made by a controlling entity that has responsibility for assigning the Permanent Communication Number upon request.
• the assignee of the present invention, for example, generates and assigns Permanent Communication Numbers.
• Each Permanent Communication Number is preferably a 12-digit numeric code arranged in a format of "xyyy yyyy yyyy", where "x" is any number from 2 to 9 and "y" is any number from 0 to 9, although of course the Permanent Communication Number may be chosen to be any size, depending mainly upon the number of expected users.
• In assigning each Permanent Communication Number, the assignment is generally made in a sequential fashion. As each Permanent Communication Number is assigned, a permanent directory entry is made for that Permanent Communication Number.
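• A minimal validation sketch for the "xyyy yyyy yyyy" format; the grouping into blocks of four separated by spaces is an assumption based on the example format shown above:

      import re

      # x is any number from 2 to 9; y is any number from 0 to 9.
      PCN_PATTERN = re.compile(r"[2-9]\d{3} \d{4} \d{4}")

      def is_valid_pcn(pcn):
          return PCN_PATTERN.fullmatch(pcn) is not None

      is_valid_pcn("2123 4567 8901")   # True
      is_valid_pcn("1123 4567 8901")   # False: first digit must be 2 to 9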
  • the Internet device When an Internet device user enters his or her Permanent Communication Number at the Internet device, the Internet device is uniquely associated with that individual until such time as he or she enters the unique Permanent Communication Number at another Internet device.
  • Each Internet device includes a unique device identifying code such that when an Internet device logs onto a system, the Internet device is specifically identified.
• the Permanent Communication Number directory is updated to indicate the number of the Internet device at which the user has entered his or her Permanent Communication Number.
  • the specific Internet device identity and the Permanent Communication Number are forwarded to the system directory.
  • the assignment of Internet device numbers is similar to or the same as the present assignment to each computer and each cellular phone presently manufactured of a unique equipment identification number.
  • an individual can receive communications directed directly to him or her at any Internet device located anywhere in the world thereby providing unparalleled communications capability and access.
  • a process for assigning Permanent Communication Numbers is illustrated in FIG. 13.
  • a request is received from a user for a Permanent Communication Number.
• a determination is made as to whether or not the request is for a vanity number. If the request is not for a vanity number, the next available Permanent Communication Number is identified at step 1304. The available number is assigned to the user at step 1305. The permanent directory is updated at step 1307 to reflect the assigned Permanent Communication Number and the user information. The user is notified of his or her Permanent Communication Number at step 1309.
  • an Internet device is any device which is directly accessible by or has the capability to directly access the Internet and which receives a Permanent Communication Number.
• the Internet device may be a computer or a personal communication device such as a telephone or videophone, and may access the Internet by a wireless or hardline type of connection. The method and manner in which the Internet device accesses the Internet is not important to an understanding of the present invention.
  • Internet device 1401 is a personal communication device.
• the device 1401 includes one or more data input devices, such as a keypad 1403, microphone 1404, touch screen 1405, sensors 1406, or any other device or element for the inputting of personal identification information.
  • the keypad is used to enter the Permanent Communication Number of a user.
  • a display included in the Internet device 1401 may prompt the user to enter his or her Permanent Communication Number when the device 1401 is powered-up.
• A block diagram of the Internet device 1401 is shown in FIG. 15. The Internet device 1401 includes a processor 1501 and associated memory 1503, a receiver 1505, a transmitter 1507 and an antenna 1509.
• The operation of the device 1401 is substantially the same as that of commercially available digital cellular phones and commercially available digital personal communication devices. Reference may be made to any number of prior art documents that describe the general operation and architecture of such devices.
• One significant difference between the Internet device 1401 and various prior art personal communication devices, cellular phones, and the like is that the Internet device 1401 is preferably compatible with the International Telecommunication Union (ITU) recommendations for implementing the H.323 protocol.
• The ITU H.323 recommendation is a mutually agreed-upon specification which defines how personal computers can interoperate to share audio and video streams over computer networks, including intranets and the public Internet.
• The processor 1501 operates to display a prompt to the user of the Internet device 1401 to enter his or her Permanent Communication Number, as indicated at step 1603.
• The user then enters the Permanent Communication Number at step 1605.
• The Internet device 1401, by use of a connector object, transmits the received Permanent Communication Number to the Internet server at step 1607.
• The Internet device 1401 also transmits a unique equipment code identifying the particular Internet device 1401 to the server.
• The server updates its directory to reflect the association between the Permanent Communication Number and the specific Internet device 1401.
• The server returns information to the Internet device 1401 at step 1609 indicating either that the Internet device has been denied service or that it is active, as indicated at step 1611.
  • Internet Protocol communications may be received at Internet device 1401 from other users connected to the Internet.
• As long as it is powered up, the Internet device 1401 will periodically provide its equipment code and the entered Permanent Communication Number to the Internet server via a connector object, as indicated at step 1613, to indicate that the user's Internet device is available for receiving incoming calls. (A device-side sketch of this registration loop appears after this list.)
• The Internet device includes memory 1507 for storing more than one Permanent Communication Number, thereby permitting an Internet device 1401 to be simultaneously accessible for calls for more than one individual, for more than one purpose (such as for business and personal use), or so that all members of a group may register for use of a common Internet device.
• Site cluster 111 functions as an IP switch for IP communications.
• Each site cluster is part of the Internet as viewed by Internet devices, and anyone on the Internet can access the system cluster.
• The system cluster provides a directory of users, a listing of the Permanent Communication Numbers, voice mail, video mail, and conferencing service - all the services that one would expect from a traditional public switched digital telephone switching center.
• Just as prior art digital telephone switching systems provide dial tone, access, listings, and directory services for traditional telephones coupled through analog circuits, the system cluster 111 provides the same functionality for Internet devices connected utilizing IP to the system cluster directory. The services are provided by the servers shown in the various figures.
• Each server cluster 111 may be viewed as operating as a PBX/Centrex service for Internet devices which access the cluster via the Internet.
• The Internet in combination with a server cluster provides switching functionality for Internet devices, allowing incoming calls to be directed to specific Internet devices at a common geographic location, area, or areas.
• The architecture of certain embodiments of the system as described herein is readily expandable to permit additional servers to be added to provide additional features.
• Sequel servers SQL1, SQL2 permit directories or memories to be provided for the storage of voice and/or video mail for registered users who are not logged on to the Internet.
• Mass memories 1701, 1702 are provided for the storage of voice and video messages. Operation of the mass memories 1701, 1702 for storage of messages is under control of the sequel servers SQL1, SQL2.
• The servers SQL1, SQL2 are also utilized to provide for video conferencing by directing conference calls to existing video conference providers. This arrangement is utilized in conjunction with the dynamic/permanent directory aspect of the system described above.
• A voice mail or video mail message may be left for the called registrant.
• The message is stored by the sequel servers SQL1 or SQL2 at a voice mail / video mail messaging site.
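To make the FIG. 13 assignment flow concrete, the following Python sketch renders the non-vanity path (steps 1304-1309) under stated assumptions: the sequential generator, the in-memory directory, and the function names are hypothetical illustrations rather than the disclosed implementation.

```python
from itertools import count

def pcn_generator():
    """Sequentially yield 12-digit Permanent Communication Numbers whose
    leading digit is 2-9 (the "xyyy yyyy yyyy" format, spaces omitted)."""
    return (f"{n:012d}" for n in count(200_000_000_000))

def assign_pcn(user_info, directory, next_pcn, vanity=False):
    """Non-vanity path of FIG. 13: identify the next available number
    (step 1304), assign and record it (steps 1305/1307), and return it
    so the caller can notify the user (step 1309)."""
    if vanity:
        raise NotImplementedError("vanity-number path not sketched here")
    pcn = next(next_pcn)        # step 1304: next available number
    directory[pcn] = user_info  # steps 1305/1307: assign and record
    return pcn                  # step 1309: caller notifies the user

# usage
numbers = pcn_generator()
directory = {}
assign_pcn({"name": "John Doe"}, directory, numbers)   # -> "200000000000"
```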
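Similarly, the device-side registration loop of FIG. 16 (steps 1603-1613) can be sketched as follows; the `device` and `server` objects are assumed stand-ins for the Internet device 1401 and the Internet server, and the heartbeat interval is an arbitrary choice.

```python
import time

def register_and_announce(device, server, interval_s=60):
    """Prompt for a Permanent Communication Number (steps 1603-1605),
    send it with the device's unique equipment code (step 1607), and
    periodically re-announce availability (step 1613) while powered up."""
    pcn = device.prompt("Enter your Permanent Communication Number")
    while device.powered_up:
        reply = server.register(pcn=pcn, equipment_code=device.equipment_code)
        if reply == "denied":            # step 1611: service denied
            device.display("Service denied")
            return
        device.display("Active")         # step 1611: device active
        time.sleep(interval_s)           # heartbeat keeps the directory current
```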

Abstract

A system facilitating communication over a distributed electronic network (2101) is provided. A dynamic data system (2100) comprises a distributed network (2101), data services, an executive service, and users connected to the network (2101). A user having a unique identifier is assigned to a data service used to store dynamic information. Dynamic information, such as a user's connection address, is retrieved from the distributed data system to provide point-to-point communication. A mapping scheme for remote data services allows a particular service assigned to a user to be found with a unique identifier. In certain embodiments, a relatively centralized static data repository (2115) may be maintained separately from a dynamic data portion of the system. In other embodiments, static data is not centralized in a repository. Reliability of a distributed data system is furthered by detecting faults in a service and bringing a backup service online as a replacement.

Description

DESCRIPTION
Distributed Dynamic Data System And Method
Background Of The Invention
1) Field of the Invention
The field of the present invention relates generally to services provided via a distributed electronic network and, in particular, to systems and methods for facilitating communication over a distributed electronic network such as the Internet.
2) Background
Computer networks generally, and particularly the worldwide computer information network now commonly referred to as the Internet, are increasingly being used to provide access to sources of information and data on a widespread and even worldwide basis. One of the bases for the universal success of the Internet as a tool for information exchange and electronic commerce has been the standardization of the Internet Protocol ("IP"). Many new uses are being found for the Internet and computer networks. For example, the Internet is presently being used for communication applications. Applications such as email and instant messaging have already become ubiquitous, while Internet telephony and Internet video communication are becoming increasingly available. As a further example, performing remote control or monitoring of various devices on a real-time basis via the Internet or other computer networks is also possible.
Peer-to-peer networking has been applied in the context of real-time (or nearly real-time) communications. An early system to facilitate peer-to-peer connections for desktop video conferencing used a tool called the Internet Locator Service (ILS) to allow users to find other users presently logged on to a Web site. Once one user found another, peer-to-peer communication could be established using software such as Microsoft NetMeeting®. A newer peer-to-peer communication software package is Microsoft Messenger®.
The presence of these existing technologies demonstrates the desirability of enabling direct exchange of services or data between computers or digital devices. A technology that enables such direct exchange may be termed a peer-to-peer directory infrastructure. When discussing a peer-to-peer directory infrastructure in the context of the Internet or a network, a general distinction may be drawn between static information and dynamic information. In this context, static information includes information that uniquely identifies a particular computer or person connected to the Internet or network. Such information may include a user's name, a fixed location for the user, and/or the user's email address. Dynamic information in this context is typically characterized as information that is subject to change. One type of dynamic information is dynamic addressing information, which is addressing information that may change according to a user's connection to the Internet or a network. For example, the IP address or other connection address for a device connected to the Internet or a network may change each time that the user re-connects, or further may even change while the user remains connected.
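A minimal Python sketch of this static/dynamic split follows; the specific field names are assumptions introduced for illustration, not drawn from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class StaticInfo:
    """Identifying information that rarely changes (a static directory entry)."""
    unique_id: str        # e.g. an email address
    name: str
    fixed_location: str

@dataclass
class DynamicInfo:
    """Connection state that may change on every re-connect."""
    unique_id: str
    connection_address: str   # e.g. a dynamic IP address
    connected: bool
```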
One factor that complicates direct information exchange using a peer-to-peer directory infrastructure is directory load. The vast majority of the load for a peer-to-peer directory is related to the maintenance of a customer's dynamic information, since most users sporadically connect to the Internet with a dynamic IP address.
Another factor that complicates information exchange using a peer-to-peer directory infrastructure is that an intended recipient of information may not always be connected to the particular network. While intermittent connectivity does not present a problem for one-way communication such as email, it presents a major impediment to two-way communication such as telephony, video communication, and feedback control. When an intended recipient disconnects or logs off, the user essentially disappears and becomes difficult to reach for purposes of carrying out dynamic communications.
A further complicating factor, as mentioned above, is that a user's IP address or other address information may change with time. For example, most Internet users who connect via an Internet Service Provider ("ISP") do not use devices with static IP addresses. Instead, the user's IP address is usually different each time that the user connects to the Internet through an ISP, even if the connection is initiated each time from one device in a fixed location. Furthermore, a mobile user's IP address can change while the user is connected to the Internet. For example, when a user of an Internet-enabled cellular phone physically moves from one service region (or "cell") to another, the user's IP address may change without disrupting an established Internet connection.
Deploying peer-to-peer directories on a very large scale tends to involve extremely large databases utilizing transaction servers. Once such a system reaches a critical size, a transaction server may become inundated with transaction requests and overloaded.
It has been demonstrated that a need exists for facilitating peer-to-peer communication over the Internet or a distributed computer network. A need further exists for allowing communication with Internet users who are not connected or logged on to the Internet or network when communication is attempted. Still a further need exists to allow communication with Internet or network users who have IP or network connection addresses that are subject to change with time. Another need exists to provide a peer-to-peer directory infrastructure that permits scalability to an extremely large number of simultaneous users, and yet permits search queries to locate other users based on various identifying criteria.
Summary Of The Invention
In one aspect of the invention, a data system includes a dynamic directory that is completely independent from static data, such as may be maintained in a static directory. The dynamic directory is used as a dynamic caching mechanism by the application that controls the static database. That is, since the dynamic database is actually a subset of information of the static directory, each partition of the dynamic database actually caches a small portion of the static database. The static portion is responsible for keeping track of persistent user data, while the dynamic portion is responsible for keeping track of which members (more specifically, devices accessed by users) are connected to the system and each user's current connection address. In another aspect of the invention, each user having a unique identifier (ID) is assigned a remote "service," preferably operating on a server. A single service may serve many users, and multiple services, each serving many users, are contemplated. The service assigned to a user handles dynamic data for that user. For instance, the service may keep track of a user's present connection address, and whether the user is presently connected to a network to receive immediate communication. The service may also be used to store information (such as, for example, voice, text, or other messages) intended to be communicated to a user while the user is unable to receive immediate communications.
Another aspect of the invention employs a consistent mapping scheme that permits high refresh rates to be obtained without overloading any particular segment or device of the system, even though the distributed dynamic data system is highly segmented and highly distributed. The consistent mapping scheme allows a particular service assigned to a user to be found rapidly, even if the number of users grows into the millions. Since a service is used essentially as an intermediary for enabling communications to or from a specific user, the consistent mapping scheme provides greatly enhanced connectivity between users. Connectivity is even further enhanced by the non-hardware-specific nature of a user's access to its particular service: since access is based on a user's unique ID, a user may access its particular service at different times with a variety of different devices. In another aspect of the invention, a static data repository is somewhat centralized, generally stored at one or more servers. Another aspect of the invention provides for the decentralized storage of static data at many distributed transaction processors and/or sites.
In yet another aspect of the invention, many of the management processes involved in maintaining the directory are automated. To enhance reliability, the system is self-correcting; in other words, it is capable of detecting faults in a device operating a service and bringing a backup service device online as a replacement.
Uses for the invention include presence detection applications, Internet video gaming, voice communications, video communications, remote monitoring, remote feedback control, delivery of customized dynamic data, storing and forwarding data to a user, and instant messaging (including text or other message types). Other uses for the invention will be apparent to those skilled in the art upon review of the specification.
Brief Description Of The Drawings
The invention will be better understood from a reading of the detailed description of preferred embodiments in conjunction with the drawings, in which like reference designations are used for like elements, and wherein:
FIG. 1 illustrates a system cluster in accordance with the principles of one embodiment;
FIG. 2 illustrates a first embodiment of a virtual local area network configuration as may be used in connection with the system of FIG. 1;
FIG. 3 illustrates a second embodiment of a virtual local area network configuration as may be used in connection with the system of FIG. 1;
FIG. 4 illustrates a management architecture of the system of FIG. 1;
FIG. 5 illustrates a third embodiment of a virtual local area network configuration as may be used in connection with the system of FIG. 1;
FIG. 6 illustrates a fourth embodiment of a virtual local area network configuration as may be used in connection with the system of FIG. 1;
FIG. 7 is a block diagram of the directory architecture in a system cluster;
FIG. 8 is a flow diagram illustrating an example of the operation of the directories of FIG. 7;
FIG. 9 is a representation of a visual display of merged dynamic and static directories;
FIG. 10 illustrates a system architecture in accordance with an embodiment as disclosed herein;
FIG. 11 illustrates directory information flow between system clusters in the system architecture of FIG. 10;
FIG. 12 is a flow diagram illustrating an example of a process for registering a user with a personal identification code;
FIG. 13 is a flow diagram illustrating an example of a process for assigning a user a personal identification code;
FIG. 14 is an illustration of an Internet device;
FIG. 15 is a block diagram of the Internet device of FIG. 14;
FIG. 16 is a flow diagram of an example of operation of the Internet device of FIG. 15;
FIG. 17 is a functional block diagram illustrating voice/video mail features as may be used in connection with the system of FIG. 1;
FIG. 18 is a schematic overview of a distributed dynamic DNS system;
FIG. 19 is a top-level diagram of an Internet-based communications system having a directory of Permanent Communication Numbers;
FIG. 20 is an architectural diagram of one embodiment of the Internet-based communications system illustrated in FIG. 19;
FIG. 21 is a top-level diagram of a network-based distributed dynamic data system according to one embodiment having a centralized static data repository;
FIG. 22 is a logical diagram of the distributed dynamic data system illustrated in FIG. 21;
FIG. 23 is a top-level diagram of a network-based distributed dynamic data system according to another embodiment;
FIG. 24 is a top-level diagram of a portion of a network-based distributed dynamic data system according to a further embodiment;
FIG. 25 is a top-level diagram of a network-based distributed dynamic data system according to yet another embodiment including multiple D3 clusters;
FIG. 26 is a top-level diagram of a network-based distributed dynamic data system according to one embodiment lacking a centralized static data repository;
FIG. 27 is a logical diagram of the distributed dynamic data system illustrated in FIG. 26;
FIG. 28 is an exemplary representation of a distributed dynamic data service assignment scheme in the form of arrays;
FIG. 29 is an illustration of a distributed dynamic data service assignment scheme corresponding to the arrays illustrated in FIG. 28; and
FIG. 30 is a top-level diagram of a network-based distributed dynamic data system according to a further embodiment.
Detailed Description
FIG. 21 is a top-level diagram of a network-based distributed communication system, illustrating various concepts as disclosed herein. As illustrated in FIG. 21, at the core of a distributed dynamic data system 2100 according to one aspect of the invention is a network 2101 such as, for example, the Internet. Alternatively, a private network may be used. A variety of distributed transaction processors ("DTPs") 2110, 2111 connect to the network 2101 for the purpose of carrying out communication over the network 2101. As used herein, a DTP 2110, 2111 is an application residing on an electronic device capable of transmitting and receiving information over a network 2101. A DTP 2110, 2111 is typically part of, or associated with, another application that, in order to provide some functionality, uses the DTP 2110, 2111 in order to connect to the network 2101. Examples of devices on which a DTP 2110, 2111 might be installed include, but are not limited to, a computer, a wireless IP device, and a network-capable cellular phone. A DTP 2110, 2111 has a protocol-specific connection address, or DTP address. Examples of potential DTP connection addresses include, but are not limited to, a TCP/IP address, an IPX/SPX address, or a NETBEUI computer name. One characteristic of a distributed dynamic data system according to the present invention is that a user desiring to send information to a DTP 2110, 2111 may not - and usually does not - know the DTP address for the desired recipient. A further characteristic of such a system is that a DTP 2110, 2111 need not always be available to the network 2101. That is, a DTP application need not always be running, due to circumstances such as the device associated with the DTP 2110, 2111 being turned off, or the DTP 2110, 2111 being disconnected from the network 2101. The availability of the DTP 2110, 2111 to the network 2101 may be controlled at the discretion of the application user.
In one embodiment, a distributed dynamic data system 2100 includes a relatively centralized static data repository ("DR") 2115. As shown in FIG. 21, the DR 2115 connects to the network 2101 by way of a distributed transaction processing gateway ("DTPG") 2116. As used herein, the DR 2115 is typically, but not necessarily, a member database; typically this database would reside on a data server or linked collection of data servers. The DR 2115 may also be present at a website. The DR 2115 may store information regarding a user - at a minimum, a unique identifier ("unique ID") for the user of a DTP 2110, 2111. The user may be a person or an autonomous application. While a unique ID such as an email address may not be fixed for the whole duration of a user's existence, the identifying information remains relatively static as compared to other information that may be elsewhere associated with a user, such as a user's IP address or other network connection address. A primary purpose of the DR 2115 is to identify a user so as to enable the user to receive other information. The DR 2115 may optionally also include a distributed dynamic data key ("D3 Key"). Use of a D3 Key (as explained in more detail hereinafter) allows static load balancing to be performed based upon empirical data (e.g., by geography, by privacy requirements such as may be dictated by a corporate internal network, by particular Internet service provider, or by level of Internet quality of service).
The DTPG 2116 provides an interface between the DR 2115 and the network 2101. In one embodiment, the DTPG 2116 may be a login server. A primary function of the DTPG 2116 is to provide the DR 2115 the ability to send and receive data over the network 2101.
FIG. 21 also illustrates multiple distributed dynamic data services ("D3Ss") 2120, 2121. As used herein, a D3S 2120, 2121 is a service, operating on one or more devices (e.g., servers), that stores dynamic addressing information (e.g., a connection address) for at least one DTP 2110, 2111. As will be described hereinafter, each unique ID (corresponding to a user) is assigned a particular D3S 2120, 2121 according to a D3S assignment scheme. In addition to storing dynamic addressing information, a D3S 2120, 2121 may further store, process, and/or send other information. For example, information intended for a DTP may be posted to a D3S by a DTPG 2116 or other DTPs. As a further example, a D3S 2120, 2121 may execute logic against data. Preferably, if a connection address for a DTP 2110, 2111 changes - such as frequently occurs with dynamic IP addressing - then the DTP 2110, 2111 will post its new connection address to its corresponding D3S 2120, 2121. As compared to one another, each D3S 2120, 2121 usually provides equivalent functionality, but for a different group of unique IDs.
Also illustrated in FIG. 21 is a distributed dynamic data executive service ("D3X") 2130. Where a D3X 2130 is the highest-level executive present, such as in the embodiment shown in FIG. 21, the D3X 2130 is responsible for providing a D3S assignment scheme, which provides consistent (i.e., predictable and repeatable) mapping of a unique ID to a particular D3S 2120, 2121. Upon initial connection of a DTP 2110, 2111 to the network 2101, the D3X 2130 derives the particular D3S 2120, 2121 to which the DTP 2110, 2111 should connect, based on the unique ID of the user utilizing the DTP 2110, 2111. Preferably, the D3X 2130 also checks the statuses of the D3Ss 2120, 2121 and, based on the result of this status check, updates the assignment scheme. The assignment scheme, which dictates the assignment of unique IDs to particular D3Ss 2120, 2121, indicates the availability of particular D3Ss 2120, 2121. It is preferred for the D3X 2130 to run on an independent physical server to promote reliability; alternatively, however, the D3X 2130 can run as a service on the same device as any D3S 2120, 2121. A D3S 2120, 2121 is preferably capable of performing most or all of the functions of the D3X 2130 if necessary. The presence of this capability in a D3S 2120, 2121 allows potential reduction in traffic to the D3X 2130, and also enhances reliability in case the D3X application or the device on which the D3X 2130 operates should fail. Moreover, the D3X 2130 is preferably capable of performing the functions of a D3S 2120, 2121 so that if a D3S (such as D3S1 2120) should fail, then its functionality may be temporarily provided by the D3X 2130 until the assignment scheme is modified to permit another D3S to be reassigned in its place.
As used herein, a distributed dynamic data cluster ("D3 cluster") 2140 includes the combination of a D3X 2130 and multiple associated D3Ss (e.g., D3S1 2120 and D3S2 2121). While only a single D3 cluster 2140 is provided in FIG. 21, multiple D3 clusters are contemplated in other embodiments, as will be discussed hereinafter.
It is contemplated that an embodiment having a DR 2115, similar to the embodiment shown in FIG. 21, might have only a single DTP (such as DTP1 2110) present. In such an instance, it is contemplated that most communication or information traffic flows from the DTPG 2116 to the DTP 2110. For example, a website may send custom dynamic data from the DTPG 2116 to a DTP 2110 on a streaming basis. If further DTPs are added, then information may also be sent from one DTP (e.g., DTP1 2110) and received by another DTP (e.g., DTP2 2111).
When a DTP (e.g., DTP1 2110) connects for the first time to a data system 2100 according to an embodiment illustrated in FIG. 21, a few procedural steps are involved for a D3S 2120, 2121 to be assigned to the DTP 2110. First, the DTP 2110 initiates contact with the DTPG 2116 to register a unique ID with the DR 2115. As indicated previously, this unique ID identifies the user of a DTP 2110. The registration procedure preferably includes a suggestion by the user of a DTP 2110 for a particular unique ID, such as a user's email address or Permanent Communication Number or PCNSM. An alternative registration procedure may include a suggestion - initiated not by the user, but instead by the DR 2115, the DTPG 2116, or an associated application - for a suitable unique ID, which suggestion may be accepted or rejected by a DTP user. A query may be performed of the DR 2115 to verify that the suggested unique ID is, in fact, unique to the DR 2115. Once a unique ID is registered with the DR 2115, the DTPG 2116 typically provides verification and further responds to the DTP 2110 with sufficient information, such as a connection address, for the DTP 2110 to connect to a D3X 2130. The next procedural step includes the DTP 2110 contacting a D3X 2130 to determine which D3S 2120, 2121 the DTP 2110 should connect with, e.g., D3S1 2120. The D3X 2130 provides this information to the DTP 2110. Finally, the DTP 2110 connects to its assigned D3S 2120 to communicate its DTP connection address to the D3S 2120 and thereby enable further communications. Preferably, when a D3S 2120 is assigned to a DTP 2110, the DTP connection address is stored or cached at the D3S 2120. Likewise, following the initial connection procedure, the D3S address for the particular D3S 2120 assigned to a DTP 2110 is preferably stored or cached at the DTP 2110. After a DTP 2110 has connected to the data system 2100 for the first time, subsequent connections may be made according to a different, less cumbersome procedure.
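A minimal Python sketch of this first-time connection procedure follows. The `dtp`, `dtpg`, and `d3s` interfaces and their method names are assumptions introduced for illustration; the flow simply mirrors the three steps described above.

```python
def first_time_connect(dtp, suggested_id, dtpg):
    """First-time log-on: register a unique ID with the DR via the DTPG,
    resolve the assigned D3S through a D3X, then post our own address."""
    unique_id = dtpg.register(suggested_id)    # DR verifies the ID is unique
    d3x = dtp.connect(dtpg.d3x_address())      # DTPG supplies a D3X address
    d3s_addr = d3x.resolve(unique_id)          # consistent mapping to a D3S
    d3s = dtp.connect(d3s_addr)
    d3s.post_address(unique_id, dtp.connection_address)
    dtp.cache["d3s_addr"] = d3s_addr           # cached to streamline reconnects
    return d3s
```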
Logical connections between various components of a distributed dynamic data system 2100 according to the embodiment described in FIG. 21 are illustrated in FIG. 22. The procedural steps discussed above for assigning a particular D3S (e.g., D3S1 2120) to a DTP (e.g., DTP1 2110) may be accomplished with the connection types provided in FIG. 22, assuming that D3S1 2120 is assigned to a unique ID operating DTP1 2110 and D3S2 2121 is assigned to a unique ID operating DTP2 2111. It is worth noting that in this first embodiment, the DR 2115 and associated DTPG 2116 are relatively isolated from the D3Ss 2120, 2121 in that the D3Ss 2120, 2121 cannot initiate direct contact with the DTPG 2116 or DR 2115. This separation between the static portion (e.g., DR 2115 and DTPG 2116) and dynamic portion (e.g., D3S1 2120 and D3S2 2121) is desirable to minimize traffic on the DR 2115. Minimizing traffic on the DR 2115 is consistent with the design of this embodiment to store dynamic addressing information in a dynamic data portion (e.g., at D3Ss 2120, 2121) and store static information in a static data portion (e.g., the DR 2115).
One aspect of the present invention includes a procedure for consistently and repeatably resolving a user's unique ID to a particular D3S 2120, 2121. This is done not only to initially assign a D3S 2120, 2121 to a unique ID, but also to permit a sender desiring to send information to a recipient user operating a DTP 2110, 2111 to locate the D3S 2120, 2121 assigned to that user by way of the recipient user's unique ID. As described in more detail hereinafter, this procedure is done in such a manner as to minimize traffic on the static portion of the data system 2100.
Procedural steps for communicating information to a DTP follow. After a D3S 2120, 2121 has already been assigned to a unique user operating a DTP, a sender (e.g., a DTP or DTPG) desiring to send information to a recipient DTP (e.g., DTP2 2111) first obtains the unique ID for the user associated with the DTP 2111. The sender may already have the recipient's unique ID, but if the sender does not, then the sender may connect to the DR 2115 by way of the DTPG 2116 to query the DR 2115 for the desired unique ID using whatever identifying information the sender might have for the recipient (e.g., name, email address, or telephone number). Once the unique ID for the recipient user is obtained, the sender connects to the D3X 2130 with the unique ID to learn which D3S 2120, 2121 is assigned to the recipient user. After the D3X 2130 indicates which D3S (e.g., D3S2 2121) is assigned to the recipient user, one of several events might occur. For example, the sender might send information to the D3S 2121 for storage and subsequent retrieval by the recipient user. Alternatively, the sender might obtain the current DTP connection address associated with the recipient user and attempt to contact the recipient user directly - possibly to engage in two-way communication - preferably subject to consent of the recipient user for a connection to be established. Yet another alternative would be for the sender to post the sender's connection address to the D3S 2121 for the recipient user with a request for the recipient user to initiate a return contact using the sender's connection address.
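The sender-side resolution just described might be sketched as follows; the `sender`, `d3x`, `dtpg`, and `d3s` objects and their method names are hypothetical, and the consent check and store-and-forward behavior follow the alternatives listed above.

```python
def send_to_user(sender, message, d3x, dtpg=None, recipient_id=None, search=None):
    """Resolve the recipient's assigned D3S, then deliver directly if the
    recipient is connected, or store-and-forward at the D3S if not."""
    if recipient_id is None:
        recipient_id = dtpg.query_dr(search)         # e.g. by name or email address
    d3s = sender.connect(d3x.resolve(recipient_id))  # D3S assigned to the recipient
    address = d3s.current_address(recipient_id)
    if address is not None:                          # recipient currently connected
        return sender.contact_directly(address, message)  # subject to consent
    d3s.store(recipient_id, message)                 # held for later retrieval
```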
After a D3S (e.g., D3S1 2120) is assigned to a unique ID, then a DR 2115 may store the D3S connection address for the D3S 2120 mapped to the unique ID. That is, the D3S connection address for the D3S 2120 assigned to a unique ID (the D3S to which a DTP 2110 associated with the unique ID connects) may be stored at the DR 2115 in addition to the user's unique ID. If this storage step is employed, then it avoids the need for a DTPG 2116 to connect with a D3X 2130 to find a D3S connection address for the specific D3S 2120 mapped to a unique ID before each subsequent contact with the D3S 2120 is initiated. If stored at the DR 2115, then the D3S connection address should not be updated on a constant basis; instead, it should only be updated when there is a change to the D3S connection address for the D3S 2120 assigned to a user - a change that should occur only when the D3S assignment scheme is modified, and such modification happens to affect the particular D3S 2120 assigned to the user. Even though a DTP connection address may change frequently, the association of a particular D3S with a unique ID (and therefore a DTP associated with that unique ID) should not change so frequently.
FIG. 23 is a top-level diagram of a distributed dynamic data system 2200 according to another embodiment. Within a D3 cluster 2240 are spare D3Ss 2222, 2223 provided as backup to D3S1 2220 and D3S2 2221. In the event that a D3S in service (e.g., D3S1 2220) should fail, having a spare D3S (such as D3SN 2222) available in the same D3 cluster 2240 as the failed D3S 2220 allows a spare D3S 2222 to take over duty for the failed D3S 2220. Assuming that failure of a D3S is detected during a status check performed by a D3X (such as D3X1 2230), the D3X 2230 will change the assignment scheme to swap a spare D3S 2222 for the failed D3S 2220. The mapping information, indicating which D3S should be accessed (e.g., D3SN 2222), is then communicated to the DTP (e.g., DTP1 2210) mapped to the particular D3S 2222. Such communication may be accomplished, for example, by automatically failing over a D3S to a D3X (e.g., D3X1 2230), and then by having the D3X 2230 communicate the new D3S connection address to the DTP 2210. Inserting a spare D3S 2222 into service for the failed D3S 2220 limits the impact of the failure only to the traffic intended for the failed D3S 2220, rather than having the failure affect the entire system 2200. Rather than changing the entire D3 assignment scheme to reflect the update, preferably only the portion of the scheme corresponding to the failed D3S 2220 is changed. When a failed D3S 2220 becomes ready for service again, it may assume the status of an available spare, ready to be swapped into service in case another D3S (e.g., D3S2 2221 or D3SN 2222) should fail.
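A sketch of the swap operation, assuming the cluster array holds one positional entry per in-service D3S; the names and the incrementing identifier are illustrative.

```python
def swap_in_spare(cluster_array, failed_d3s, spares, scheme_id):
    """Replace only the failed D3S's positional entry in the D3 cluster
    array, leaving the rest of the assignment scheme untouched."""
    spare = spares.pop(0)                        # next available spare D3S
    position = cluster_array.index(failed_d3s)
    cluster_array[position] = spare              # swap the spare into service
    return scheme_id + 1                         # new assignment scheme identifier

# e.g. swap_in_spare(["D3S1", "D3S2"], "D3S1", ["D3SN", "D3SN+1"], 7) -> 8;
# the failed unit may later rejoin the spares pool once repaired
```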
In the embodiment illustrated in FIG. 23, multiple D3Xs 2230, 2231 are provided in the D3 cluster 2240. The D3Xs 2230, 2231 are preferably redundant and capable of handling the same traffic, so as to enhance reliability in case one D3X should fail. Where multiple D3Xs 2230, 2231 are present and are the top-level executive services in the system 2200, one D3X 2230, 2231 is selected as the "lead" D3X to assume responsibilities including generation of the D3S assignment scheme. One way of selecting the lead D3X is for all D3Xs 2230, 2231 to perform a lexicographical election. Such an election process implies a connection between the D3Xs 2230, 2231 within the D3 cluster 2240. Preferably, the process of selecting the lead D3X includes a health check of all D3Xs 2230, 2231. Further preferably, the lead D3X may communicate with all D3Xs 2230, 2231 to ensure that the current D3S assignment scheme is present at all times on all D3Xs 2230, 2231. This enhancement provides additional reliability. By ensuring the currency of the assignment scheme at all D3Xs 2230, 2231, if the lead D3X (e.g., D3X1 2230) should fail, then another D3X (e.g., D3X2 2231) can seamlessly assume the role of lead D3X.
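One plausible reading of a lexicographical election is that every healthy D3X sorts the same candidate list and independently arrives at the same (e.g., lexicographically first) peer as lead; the sketch below renders that reading, with `is_healthy` standing in for the health check.

```python
def elect_lead(d3x_names, is_healthy):
    """Lexicographical election: each D3X evaluates the same list of healthy
    peers and deterministically selects the same lead."""
    healthy = sorted(name for name in d3x_names if is_healthy(name))
    return healthy[0] if healthy else None

# e.g. elect_lead(["D3X2", "D3X1"], lambda name: True) -> "D3X1"
```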
Preferably, ensuring currency of the D3S assignment scheme includes the use of an assignment scheme identifier that is generated by the lead D3X (e.g., D3X1 2230) (or top-level executive service, as will be described hereinafter) each time that the assignment scheme is changed. Upon generation, the identifier preferably is further stored, passed with each communication, and compared. Each time there is a change to a D3S connection address or to a group of unique IDs / DTPs, thus causing a change to the assignment scheme, an assignment scheme identifier is generated by the D3X (e.g., D3X1 2230) to differentiate the current assignment scheme from a previous assignment scheme, so as to identify whether a D3S connection address was generated under the current assignment scheme. Possible examples for the assignment scheme identifier include an incrementing serial number or a time stamp. Because the D3S (e.g., D3S1 2220) associated with a particular DTP (e.g., DTP1 2210) may change due to failure of a D3S (e.g., 2220), the connection address previously stored by the DTP (e.g., DTP1 2210) may no longer be correct. Use of the assignment scheme identifier allows a quick determination whether a particular map (providing the connection address to a D3S) is consistent with the current assignment scheme. Once the assignment scheme identifier is generated, it is preferably stored at an affected DTP (e.g., DTP1 2210), the DR 2215, and all D3Ss 2220, 2221, 2222, 2223. The assignment scheme identifier is preferably stored along with the D3S connection address for a DTP 2210, 2211. Storage of the assignment scheme identifier along with the connection address enables the identifier to be passed with all communications to the D3S (e.g., D3S1 2220), so as to permit the D3S 2220 to check whether the identifier has aged. This avoids the need for the connection address to be validated (i.e., by checking with a D3X, e.g., D3X1 2230) before each attempted contact between a DTP (e.g., DTP1 2210) and the D3S (e.g., D3S1 2220). A D3S 2220 may compare the communicated assignment scheme identifier to the "current" identifier stored at the D3S 2220. If the communicated identifier has aged, then the request directed to the D3S 2220 can be returned to the contactor with an instruction to contact the D3X (e.g., D3X1 2230). By virtue of this comparison, the assignment scheme identifier is used to prevent a DTP (e.g., DTP1 2210) from using what it thinks is a correct D3S connection address to connect to the "wrong" D3S (due to a change in assignment). The assignment scheme identifier thus provides a rapid ability to verify that a connection request is based on the current assignment scheme.
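The identifier comparison a D3S might perform on each incoming request could look like the following sketch; the request and response shapes are assumptions introduced for illustration.

```python
def handle_contact(d3s_scheme_id, request, serve):
    """A D3S compares the scheme identifier passed with each request
    against the identifier of the current assignment scheme."""
    if request["scheme_id"] != d3s_scheme_id:
        # the caller's cached map has aged: redirect it to a D3X
        return {"status": "stale", "action": "contact D3X for current scheme"}
    return {"status": "ok", "data": serve(request)}   # serve: the real handler

# e.g. handle_contact(8, {"scheme_id": 7}, serve=print)
#      -> {"status": "stale", "action": "contact D3X for current scheme"}
```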
In the embodiment shown in FIG. 23, a virtual connection address may be employed for the multiple D3Xs 2230, 2231 provided in the D3 cluster 2240. Use of a virtual connection address permits either (or any) D3X (e.g., D3X1 2230) to respond to connection requests, with the other D3X(s) (e.g., D3X2 2231) available as a backup. This allows devices to be addressed by the same connection address rather than separate connection addresses. One example is the use of a virtual IP address ("VIP") on a TCP/IP network.
FIG. 24 is a top-level diagram for a portion of a distributed dynamic data system according to a further embodiment. A load balancing device 2350 associated with a D3 cluster 2340 allows each D3X 2330, 2331 to handle a proportional load of the requests for connection addresses. In other words, using a load-balancing device 2350 permits distributed loading between D3Xs 2330, 2331. Preferably, the load balancing device 2350 is capable of checking the status of the D3Xs 2330, 2331. There still remains the need for a single lead D3X. Using a load balancing device 2350, however, does not require that a virtual connection address be employed for the multiple D3Xs 2330, 2331. The load balancing device 2350 may be a packet-based Layer 4 load balancing switch capable of switching on IP addresses, such as a load balancing switch manufactured by Alteon WebSystems. Product information regarding Alteon WebSystems' switches is available at "http://www.alteonwebsystems.com". As a further enhancement, the D3Ss 2320, 2321, 2322, 2323 in a D3 cluster 2340 may connect to the network 2301 behind a switch that uses a D3S address to direct traffic to a particular D3S (e.g., D3S1 2320). As provided in FIG. 24, the functions of load balancing between multiple D3Xs 2330, 2331 and switching for the multiple D3Ss 2320, 2321, 2322, 2323 may be performed by the same load balancing switch 2350, or group of parallel load balancing switches, so long as the switching device(s) has sufficient capacity to do both load balancing and switching. Regarding the switching function, if IP addresses are used with the network 2301, then the switch 2350 may respond to an IP address and further information contained in the URL string. Information contained in the URL string may be read by the switch and used to direct traffic to the proper D3S. Preferably, multiple (redundant) load-balancing devices (not shown) are interposed between a D3 cluster 2340 and the network 2301 to enhance reliability. In case one load balancing device should fail, the other should be available to handle all load balancing responsibilities. To further promote reliability when multiple load balancing devices are used, each load balancing device may have an available network connection to each D3X 2330, 2331 to permit each load balancing device to communicate with any D3X 2330, 2331 in case a single load balancing device should fail. Preferably, the network connection is a common network segment. A virtual connection address (such as, for example, a VIP on a TCP/IP network) may be employed for each group of multiple load balancing devices to permit either (any) load balancing device to respond to connection requests to a D3X.
FIG. 24 further illustrates an available network connection between each D3X 2330, 2331 and each D3S 2320, 2321, 2322, 2323 within a D3 cluster 2340. This promotes reliability, since each D3S 2320, 2321, 2322, 2323 is capable of communicating with any D3X 2330, 2331 in case a device should fail. Preferably, the network connection is a common network segment. FIG. 24 further illustrates a distributed dynamic data Master service ("D3 Master") 2360 associated with D3 clusters 2340, 2341. A D3 Master service is a higher-level executive service compared to a D3X (e.g., D3X1, D3X2 2330, 2331); accordingly, where a D3 Master 2360 is present, it assumes some responsibilities formerly borne by the lead D3X. Namely, a D3 Master 2360 is responsible for generating the D3S assignment scheme and the assignment scheme identifier. Though depicted as separate services in FIG. 24, a D3 Master service may run as an extension of a D3X service. A D3 Master does not necessarily operate on a different physical machine than a D3X, although the D3 Master and D3X operations may be segregated on different machines to enhance reliability. Each D3X 2330, 2331 preferably checks the status of its subordinate D3Ss 2320, 2321, 2322, 2323, and passes D3S status information to the D3 Master 2360. The D3 Master 2360 may generate a new D3S assignment scheme based on any change in D3S status presented to it by the D3Xs 2330, 2331. Upon generation, the new assignment scheme and related identifier may be communicated by the D3 Master 2360 to the D3Xs 2330, 2331, which in turn may communicate these items to the subordinate D3Ss 2320, 2321, 2322, 2323. An additional function of a D3 Master 2360 generally is to coordinate multiple D3 clusters 2340, 2370 (also as shown in FIG. 25). As a result, a D3 Master 2360 is not truly needed until multiple D3 clusters (e.g., 2340, 2370) are present.
FIG. 25 illustrates a distributed dynamic data system 2500 according to another embodiment in which multiple D3 clusters 2540, 2570, each connecting to a network 2501 with a load balancing device 2550, 2551, are provided. In such a system, at least one D3 Master (e.g., D3 Master 1 2560) is necessary. To promote reliability, a D3 Master (e.g., D3 Master 1 2560) may redirect traffic between D3 clusters 2540, 2570 if necessary. To do so, a D3 Master (e.g., D3 Master 1 2560) may maintain service availability information for each D3 cluster 2540, 2570 and, in the event that a D3 cluster (e.g., D3 cluster 1 2540) becomes unavailable, the D3 Master 2560 may redirect traffic to another D3 cluster (e.g., D3 cluster 2 2570) or group of further D3 clusters (not shown) by responding to a connection request with a correct connection address for an active D3 cluster (e.g., D3 cluster 2 2570) and providing an instruction (e.g., to a DTP (such as DTP1 2510) or a DTPG (such as DTPG1 2516)) to direct connection requests to the different D3 cluster. Preferably, more than one D3 Master is provided to enhance reliability in case one D3 Master should fail. The illustrated embodiment provides even greater redundancy, since each cluster 2540, 2570 has two associated D3 Masters 2560, 2561, 2562, 2563. Redundant D3 Masters are preferably capable of handling the same traffic. When multiple D3 Masters 2560, 2561, 2562, 2563 are used, a "lead" D3 Master is selected. One method for making this selection is for the D3 Masters 2560, 2561, 2562, 2563 to perform a lexicographical election, implying a network connection between all D3 Masters 2560, 2561, 2562, 2563.
Preferably, this election includes a health check of all D3 Masters 2560, 2561, 2562, 2563. Only the lead D3 Master updates the D3S assignment scheme and generates the assignment scheme identifier. As a further enhancement, the lead D3 Master (e.g., D3 Master 1 2560) may communicate with all other D3 Masters (e.g., D3 Masters 2, 3, and 4 2561, 2562, 2563) to ensure that the current assignment scheme is present at all times on all D3 Masters 2560, 2561, 2562, 2563. This promotes reliability, since if the lead D3 Master (e.g., D3 Master 1 2560) should fail, then another D3 Master can seamlessly assume the role of lead D3 Master. To further promote reliability, in each D3 cluster (e.g., D3 cluster 1 2540), each D3 Master 2560, 2561 may have an available connection to each D3X 2530, 2531 and D3S 2520, 2521, 2522, 2523 within the cluster 2540, preferably via a common network segment. These available connections permit hierarchical failover in case of unit failure, from a D3S (e.g., D3S1 2520) to a D3X (e.g., D3X1 2530), and from a D3X (e.g., D3X1 2530) to a D3 Master (e.g., D3 Master 1 2560). Moreover, an entire cluster (e.g., cluster 1 2540) may fail over to another cluster (e.g., cluster 2 2570). The result provides high reliability.
Where multiple D3 Masters are provided at a cluster (e.g., cluster 1 2540), such as in FIG. 25, a virtual connection address may be employed for the multiple D3 Masters 2560, 2561. This permits either (any) D3 Master 2560, 2561 to respond to connection requests, with the other(s) available as a backup. This architecture allows devices to be addressed by the same connection address rather than separate connection addresses. As noted previously, one example of virtual connection addressing is the use of a VIP on a TCP/IP network. An alternative to using a virtual connection address for multiple D3 Masters (e.g., D3 Masters 1 and 2 2560, 2561) at a D3 cluster (e.g., D3 cluster 1 2540) is to provide a load balancing device (not shown) between the D3 Masters 2560, 2561 on the one hand and the D3Xs 2530, 2531 and D3Ss 2520, 2521, 2522, 2523 on the other. Similar to the situation with multiple D3Xs described previously, use of a load balancing device with D3 Masters 2560, 2561 allows each D3 Master 2560, 2561 within a D3 cluster 2540 to handle a proportional load of the requests for connection addresses; in other words, it permits distributed loading between D3 Masters 2560, 2561. Preferably, such a load balancing device is capable of checking the statuses of the D3 Masters 2560, 2561. There still remains the need for a single lead D3 Master. Use of a load balancing device does not require that a virtual connection address be employed.
In the embodiment shown in FIG. 25, network connections are depicted between the D3 Masters 2560, 2561, 2562, 2563, and also between each D3 Master and the D3Xs (e.g., D3X1 and D3X2 2530, 2531) and D3Ss (e.g., D3S1, D3S2, D3SN, D3SN+1 2520, 2521, 2522, 2523) in an intra-cluster network. Such connections may be made by a public network such as the Internet. To enhance reliability, the D3 Masters, D3Xs, and D3Ss may be additionally connected by a virtual private network. Moreover, this virtual private network connection provides an avenue for system administration by an administering authority, such as visitalk.com. Furthermore, the D3 Masters 2560, 2561, 2562, 2563 at all the D3 clusters 2540, 2570 may communicate over a further network. This permits one lead D3 Master to oversee all present D3 clusters.
While only two DTPs 2510, 2511 are illustrated in FIG. 25, it is contemplated that a very large number of users could be simultaneously connected to the distributed dynamic data system 2500. A system 2500 may be scaled with additional D3 clusters (not shown) to support literally millions of simultaneous users. Importantly, this scalability may be achieved with the incremental addition of relatively inexpensive equipment. Rather than requiring a massively powerful and highly expensive central database server to attempt to maintain a single centralized database for a large number of users, the distributed nature of a distributed dynamic data system according to the present invention permits low cost server hardware to be utilized for maintaining the dynamic portion of the system data. In addition to being scaleable at a low cost, a distributed dynamic data system 2500 may include hardware devices or D3 clusters that are distributed over a wide geographical area. By virtue of network connections, multiple D3 clusters may be organized into virtual sites that may or may not be contained at the same location.
For communication between DTPs (e.g., DTP1 2510, DTP2 2511), a D3S (e.g., D3S1 2520 at D3 cluster 1 2540) may either enable a direct point-to-point connection by providing DTP addressing information, or the D3S 2520 may act as a switch carrying information from one DTP to the other. The latter approach is less preferred from a system scalability perspective, however, since it generates a far greater amount of network traffic and consumes system resources. FIG. 25 further illustrates the possibility of providing multiple DRs 2515, 2517 and associated DTPGs 2516, 2518 on the same system 2500. Multiple DTPGs 2516, 2518 may send dynamic information to the same DTP (e.g., DTP1 2510). For example, a DTP user may have two instances of a Web browser active, or a single browser that supports multiple simultaneous sources, and may receive streaming data or messages from multiple DTPGs 2516, 2518 over the same period of time.
A DTP as provided in any of the foregoing embodiments may include the ability to cache information so as to reduce load / traffic on a distributed dynamic data system. Examples of information that might be cached by a DTP include: the D3S connection address for the D3S assigned to a DTP; D3S connection addresses for D3Ss mapped to other DTPs; any connection information provided by a DR to speed re-connect; security items, including passwords; and data, including historical data, such as may be stored by a D3S assigned to a DTP.
To address security concerns, messages may be transmitted with embedded security tokens. Use of a security token appended to data or messages permits a sender to be authenticated. A security token sent with a message may be compared to a token generated by a receiver to establish whether the receiver desires to receive the message. Most any device, such as a DTP, D3S, D3X, D3 Master, or DR/DTPG, may constitute a sender or receiver for these purposes. Messages from unauthenticated senders may be rejected.
While the foregoing embodiments described in connection with FIGS. 21-23 and 25 each included a relatively centralized static data repository and associated DTPG, a distributed dynamic data system according to the present invention may be operated without a centralized static data repository. Reference is made to FIG. 26, which provides multiple DTPs 2610, 2611, multiple D3Ss 2620, 2621, and a D3X 2630. Where a distributed dynamic data system lacks a centralized data repository, static data (including, at a minimum, a unique ID) is assumed either to be stored at a DTP (e.g., DTP1 2610) or to be possessed by a DTP user. A typical characteristic of a system 2600 according to FIG. 26 lacking a centralized static data repository is that a sender (e.g., DTP1 2610 or a user thereof) does not know the connection address of a desired recipient DTP (e.g., DTP2 2611). Moreover, the desired recipient (e.g., DTP2 2611) may - though not necessarily - be similarly unaware of the connection address of the sender (e.g., DTP1 2610). Each DTP 2610, 2611 is generally available at the discretion of the application user, such that each DTP 2610, 2611 does not always need to be available to the network 2601. Availability may be affected by the inactivity of DTP applications, or if a device operating a DTP is turned off.
Although many of the characteristics of a system according to FIG. 26 are similar to the foregoing embodiments described in connection with FIGS. 21-25, the procedure that is employed when a first DTP seeks to connect to a second DTP is different. To start, a first DTP (e.g., DTP1 2610) initiates contact with the D3X 2630 to determine which D3S 2620, 2621 will be assigned to the DTP 2610 and to which the DTP 2610 should connect. The D3X 2630 then responds to this request by providing the desired information to the DTP 2610. Next, a connection is established between the DTP 2610 and a D3S (e.g., D3S1 2620) to permit data transfer and provide the D3S 2620 with the connection address for the DTP 2610. In a similar vein, a second DTP (e.g., DTP2 2611) initiates contact with the D3X 2630 to determine which D3S (e.g., D3S2 2621) will be assigned to the second DTP 2611 and to which the DTP 2611 should connect. The D3X 2630 responds to this request from the second DTP 2611 by providing the desired information to the second DTP 2611. A connection is then established between the second DTP 2611 and its assigned D3S (e.g., D3S2 2621) to permit data transfer and to provide the D3S 2621 with the connection address for the second DTP 2611. After D3Ss 2620, 2621 have been assigned to the DTPs 2610, 2611, the first DTP 2610 may initiate contact with the D3X 2630 to determine which D3S 2620, 2621 contains the connection address for the second DTP 2611. After this information is returned to the first DTP 2610, the first DTP 2610 may contact the D3S 2621 assigned to the second DTP 2611 to obtain the DTP connection address for the second DTP 2611. Using the DTP connection address obtained from the D3S 2621 assigned to the second DTP 2611, the first DTP 2610 may establish communication (direct or otherwise) or pass information to the second DTP 2611. To assist in understanding this procedure, initial logical connections between various components of a distributed dynamic data system 2600 according to the embodiment depicted in FIG. 26 are illustrated in FIG. 27.
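A compact sketch of this no-repository rendezvous follows, using the same hypothetical interfaces as the earlier sketches; note that no DR or DTPG is involved.

```python
def reach_peer(dtp, peer_id, d3x):
    """Register our own address with our assigned D3S, then resolve the
    peer's D3S via the D3X and fetch the peer's current address."""
    own_d3s = dtp.connect(d3x.resolve(dtp.unique_id))
    own_d3s.post_address(dtp.unique_id, dtp.connection_address)
    peer_d3s = dtp.connect(d3x.resolve(peer_id))        # peer's assigned D3S
    peer_address = peer_d3s.current_address(peer_id)
    if peer_address is None:                            # peer offline
        peer_d3s.store(peer_id, dtp.outgoing_message)   # held for later retrieval
        return None
    return dtp.open_direct_connection(peer_address)     # preferred: direct contact
```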
In a system 2600 as illustrated in FIGS. 26-27, if a recipient DTP (e.g., DTP2 2611) is unavailable, then information intended for receipt by the recipient DTP 2611 may be stored at the D3S 2621 assigned to that DTP 2611 for retrieval when the recipient DTP 2611 connects. This functionality requires D3Ss 2620, 2621 to be available to DTPs 2610, 2611 - not necessarily in a persistent fashion, but at least available on demand. If a first DTP 2610 is connected to the network 2601, then a second DTP 2611, using minimum contact information for the first DTP 2610, can obtain the DTP connection address for the first DTP 2610 and thereafter establish communication with the first DTP 2610. Preferably, the second DTP 2611 can establish direct communication with the first DTP 2610, i.e., without routing the substance of the communications through D3Ss 2620, 2621. Alternatively, and less preferably, communications between the first and second DTPs 2610, 2611 may be routed through a D3S 2620, 2621. Regarding caching, so long as a DTP 2610, 2611 is connected to the network 2601, it may cache (or otherwise store) information of interest to that DTP 2610, 2611. This caching tends to reduce the load on the system 2600, since it avoids the need to reinitiate the entire log-on / initial connection procedure. These features merely illustrate some of the potential enhancements to the basic system depicted in FIGS. 26-27; others are provided in the description of the foregoing embodiments.
1) Consistent Mapping of D3Ss Using The D3S Assignment Scheme
The D3S assignment scheme referred to in conjunction with the foregoing embodiments will now be discussed, without reference to any particular preceding figure. Even though the distributed dynamic data system of the present invention may be highly segmented and highly distributed, the consistent assignment scheme enables high refresh rates to be obtained without overloading any particular segment or device of the system. The assignment scheme (i.e., the scheme of mapping unique IDs for DTP users to particular D3Ss) indicates the availability of particular D3Ss. And since a D3S operates on hardware (such as a network server), the assignment scheme is hardware-dependent. The assignment scheme preferably includes a D3 cluster array, a D3 key array, and an incrementing assignment scheme identifier (discussed previously). The D3 assignment scheme is generated by the top-level executive service (e.g., the lead D3X or, if present, a D3 Master), but may also be stored on all D3Xs and D3Ss. Generation and updating of the assignment scheme are preferably initiated by a script. A new assignment scheme identifier is generated with each change to the assignment scheme.
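For illustration only, the three components of the assignment scheme might be represented as a simple structure; the shape of the D3 key array is an assumption, since the D3 Key is detailed later in the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class AssignmentScheme:
    # [cluster][position] -> D3S identifier (or switch output group)
    cluster_array: list = field(default_factory=list)
    # D3 Key -> D3 cluster mapping; shape assumed for illustration
    d3_key_array: list = field(default_factory=list)
    scheme_id: int = 0   # incremented on every change to the scheme
```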
Generally, the term "load balancing" refers to the process of distributing communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server.
a) Mapping to a particular D3S
A preferred D3 assignment scheme used with the present invention combines elements of both static and empirical load balancing to determine which particular D3S within a D3 cluster should be mapped to the unique ID corresponding to a DTP user. The static load balancing is accomplished by hashing each unique ID into a numerical value according to a predetermined algorithm and distributing these values over the number of D3Ss available within a particular D3 cluster using a D3 cluster array. For instance, if a unique ID is an email address such as "john.doe@visitalk.com," one way of hashing such an address into a numerical value is to extract the two characters on either side of the "@" symbol, replace each character with its numerical position in the alphabet, multiply these numbers by one another, eliminate all but the last four digits, divide the result by a scaling factor (such as 1×10^4), and then multiply the resulting quotient by the number of D3 Services available at the particular D3 cluster. According to this method, if a particular D3 cluster is assumed to have six D3Ss, then the resulting hash value will be some numerical value from 0 up to (but not including) 6. For a given set of unique IDs, the resulting hash values should be relatively evenly distributed among the six available D3Ss. To assign a particular D3S, a D3 cluster array is used in conjunction with the hash value obtained from a unique ID. Each D3 cluster has at least two D3Ss available, and the D3 cluster array contains positions for each available D3S in a D3 cluster. Each available position in the D3 cluster array is populated with an identifier for a particular D3S within a D3 cluster. In an embodiment including multiple D3 clusters, the D3 cluster array is preferably a two-dimensional array. The hash value for a unique ID marks a position in the D3 cluster array for a particular D3 cluster where the entry corresponding to a specific D3S may be found. In a preferred embodiment where a load balancing switch connects a particular D3 cluster to a network, the hash value for a unique ID more specifically marks a position in the D3 cluster array where an entry corresponding to a load balancing switch output group may be found. Assuming that a D3 cluster array has six positions, all hash values from 0 up to 1 may be assigned to the first positional entry, all hash values from 1 up to 2 may be assigned to the second positional entry, and so on. Alternatively, the algorithm may use integer division to ensure that all hash values result in whole numbers. Again, each positional entry identifies a specific D3S within a D3 cluster. In this manner, a predetermined hashing algorithm provides a repeatable method of allocating a particular D3S to a particular unique ID within a given D3 cluster, while balancing the load relatively evenly between the available D3Ss in a given cluster.
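For illustration only, the hashing procedure just described may be sketched in Python as follows. The character-extraction rule, the 1×10^4 scaling factor, and the bucket-per-whole-number convention are taken from the description above; the function and variable names, and the use of integer truncation to select the positional entry, are assumptions of this sketch rather than requirements of the invention (the sketch also assumes the extracted characters are letters, as in the example).

    def hash_unique_id(unique_id, num_d3s):
        # Extract the two characters on either side of the "@" symbol.
        local_part, domain_part = unique_id.split("@")
        chars = (local_part[-2:] + domain_part[:2]).lower()
        # Replace each character with its numerical position in the
        # alphabet and multiply these numbers by one another.
        product = 1
        for c in chars:
            product *= ord(c) - ord("a") + 1
        # Keep only the last four digits, divide by the scaling factor
        # (1x10^4), and spread the quotient over the available D3Ss.
        return (product % 10000) / 10000 * num_d3s

    def assign_d3s(unique_id, cluster_row):
        # cluster_row is one row of the D3 cluster array: one positional
        # entry (a D3S identifier) per available D3S in the cluster.
        hash_value = hash_unique_id(unique_id, len(cluster_row))
        # Values from 0 up to 1 select the first positional entry,
        # values from 1 up to 2 the second, and so on.
        return cluster_row[int(hash_value)]

Because the hash is a pure function of the unique ID and the contents of the D3 cluster array, every component holding the same array maps a given user to the same D3S, which is what makes the assignment consistent.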
Use of positional entries in the D3 cluster array permits additional D3Ss to be added to a particular D3 cluster without changing an entire assignment scheme. It is contemplated that the number of D3Ss available at a particular site will be controlled by an administrative authority such as visitalk.com. The ability to provide additional D3Ss at a particular site when needed permits a certain extent of empirical load balancing based on usage. The use of positional entries in the D3 cluster array provides the further benefit of allowing individual D3Ss to be replaced by spare D3Ss, when necessary, without changing the entire assignment scheme.
b) Multiple clusters and use of D3 Key
If a distributed dynamic data system includes only a single D3 cluster, then the above-mentioned method is sufficient to distribute loading between D3Ss. In a preferred embodiment, however, multiple D3 clusters are provided. If a large number of DTPs connect to a D3 network, then the issue of balancing load between the multiple D3 clusters must be addressed to prevent any particular device or node in the system from being overloaded. To respond to this concern, a further load balancing technique is used.
When multiple D3 clusters are present, the invention contemplates use of a D3 Key that is assigned to groups of unique IDs for DTP users. Each unique ID for a DTP user should be assigned a corresponding D3 Key, but a D3 Key will not be unique to each DTP user. A D3 Key is used to determine which D3 cluster contains the D3S mapped to a given unique ID. For example, the D3 Key may comprise a 10-digit number. The more digits that the D3 Key contains, the greater the number of possible D3 clusters that may be contained in a D3 system. Each D3 cluster should have a minimum of one D3 Key, and more than one D3 Key can map to the same D3 cluster; however, a single D3 Key should not map to more than one particular D3 cluster. It is contemplated that a central authority, such as visitalk.com, will determine permissible values for D3 Keys based on the presence of particular D3 clusters. It is further contemplated that permissible D3 Key values will be provided to each entity that maintains a static data repository used with the D3 system. However, it is contemplated that the actual assignment of D3 Key values to unique IDs will be at the discretion of each static data repository maintaining entity. Use of the D3 Key to determine a particular D3 cluster will now be explained. The D3 key array is preferably a one-dimensional array having multiple positional entries, each entry identifying a particular D3 cluster. It is contemplated that the D3 key array will be maintained, updated, and propagated to at least one component in the D3 system by a central administrative authority, such as visitalk.com. A D3 Key represents a position in the D3 key array where an entry identifying a D3 cluster may be found. The permissible values for entries in the D3 key array are thus limited by the clusters actually present in the D3 system. If it is assumed that a D3 key array has 10 positions, but only two D3 clusters are present, then the 10 positions in the D3 key array may be populated with entries corresponding to only the first or second D3 clusters.
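A minimal sketch of the D3 Key lookup follows, assuming the key is used directly as a zero-based index into the one-dimensional D3 key array; the array contents are illustrative of a ten-position array serving only two clusters, as in the assumption above.

    # Ten positions, but only two D3 clusters present, so every entry
    # must identify either cluster 1 or cluster 2.
    d3_key_array = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]

    def cluster_for_key(d3_key):
        # A D3 Key marks the position in the D3 key array where an
        # entry identifying a D3 cluster may be found.
        return d3_key_array[d3_key]

    cluster_for_key(5)  # -> 2 with this example array

Rebalancing load between clusters then amounts to repopulating the array entries, without disturbing the D3 Keys already assigned to users.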
In contrast to the order in which these topics were first explained here, if multiple D3 clusters are present, then mapping a unique user ID to a particular D3S first requires selection of a particular D3 cluster, and only then selection of a particular D3S within that cluster. In a preferred embodiment with multiple D3 sites, the assignment scheme may include the information provided in FIG. 28, to which attention is now directed. FIG. 28 provides an exemplary representation of a D3S assignment scheme. As implemented, the actual assignment scheme may or may not appear in the form of four quadrants. In the exemplary representation, however, the upper left quadrant 3001 signifies the assignment scheme identifier. An example of such an identifier may be a serial number that increments sequentially with each change to the assignment scheme, up to a maximum value of 255, then returning to a value of 1. The upper right quadrant 3002 signifies the D3 Key array, which contains positional entries, each identifying a D3 cluster. The D3 Key array is only used if multiple D3 clusters are present. The D3 Key array provides mapping to a particular D3 cluster based on a D3 Key, and allows empirical load balancing between clusters. The lower left quadrant 3003 signifies one portion of the D3 Cluster array, providing the number of D3Ss operating (not standby spares) at each respective D3 cluster. The lower right quadrant 3004 signifies the other portion of the D3 Cluster array, which provides positional entries, each identifying a D3S at a particular D3 cluster. The result of applying the unique user ID and D3 Key to the D3 assignment scheme is either a D3 map (e.g., 201;2;004) or a D3 URL (e.g., D3_2.visitalk.com/004).
An example showing how a D3 URL may be obtained from a D3 assignment scheme may be explained in conjunction with the illustration of FIG. 29, premised on the following assumptions:
• Assume that the unique identifier is "john.doe@visitalk.com".
• Assume that the D3 Key is "5".
• Assume that the D3 assignment scheme is as provided in FIG. 29, according to the legend provided in FIG. 28 and the foregoing description.
The upper left quadrant 3101 represents the assignment scheme identifier; the upper right quadrant 3102 represents the D3 Key array populated with entries corresponding to D3 clusters 1 and 2; the lower left quadrant 3103 represents a portion of the D3 cluster array, indicating the number of D3Ss operating at D3 clusters 1 and 2; and the lower right quadrant 3104 represents the other portion of the D3 cluster array, with entries each identifying particular available D3Ss.
The first step is to use the D3 Key in conjunction with the D3 Key array to determine which D3 cluster should be used. The D3 Key value of "5" marks position 5 (counting from zero) in the D3 key array (located in the upper right quadrant 3102) where the D3 cluster identifier will be found. That position is populated with the value of "2," as indicated by the upper arrow in FIG. 29. This means that D3 cluster 2 will be used. In the D3 Cluster array, D3 cluster 2 corresponds to the lower row, containing an entry of "6" in the lower left quadrant 3103 (meaning that 6 services are active at that cluster).
The next step is to apply a hashing technique and consistent formula to convert the unique ID to a number corresponding to a position in the D3 Cluster array. One example of a known hashing technique is to extract the two characters before and after the "@" symbol in the unique ID, convert each character to the number corresponding to its position in the English alphabet, multiply these numbers together, eliminate all but the last 4 digits of the product, and divide the result by a scaling factor (such as 1×10^4). For purposes of this example, assume that the hash value obtained is 0.25. The hash value is then multiplied by the number of D3 Services available at D3 cluster 2 (here, the value indicated in the lower left quadrant 3103 corresponding to D3 cluster 2, or "6") to yield "1.5" as the result. Assuming that the first entry in the lower right quadrant 3104 (a portion of the D3 cluster array) is assigned all values from 0 up to 1, and the second entry is assigned all values from 1 up to 2, and so on, the "1.5" value yielded from hashing the unique ID corresponds to the second entry in the lower right quadrant 3104 portion of the D3 cluster array. The value of that second entry is "3" (as indicated by the lower arrow in FIG. 29), indicating that D3 Service 3 in D3 cluster 2 is to be assigned or mapped to the unique ID. Thus, from the foregoing example, the results are:
• D3 cluster = 2
• D3 Service ID = 3
• Assignment scheme identifier = 21 (extracted from the matrix used to generate the map)
If these results are translated to a D3 URL, then a value of "D3_2.visitalk.com/003" is obtained. The "2" following D3 refers to D3 cluster number 2, and the "/003" at the end of the URL refers to the particular D3S (D3S 3) located in D3 cluster number 2.
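The entire walkthrough can be condensed into a short Python sketch. The 0.25 hash value is assumed rather than computed, exactly as in the example above, and the array contents are hypothetical except where the text fixes them (the D3 Key array maps key 5 to cluster 2, cluster 2 runs six D3Ss, and the second positional entry of cluster 2's row is D3S 3; the count for cluster 1 is an assumption).

    d3_key_array = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]  # upper right quadrant
    d3s_per_cluster = {1: 4, 2: 6}                  # lower left quadrant
    d3_cluster_array = {1: [1, 2, 3, 4],            # lower right quadrant
                        2: [1, 3, 2, 5, 4, 6]}      # entries identify D3Ss

    d3_key = 5
    cluster = d3_key_array[d3_key]                         # -> 2
    hash_value = 0.25                                      # assumed, per the example
    position = int(hash_value * d3s_per_cluster[cluster])  # 0.25 * 6 = 1.5 -> entry 1
    d3s_id = d3_cluster_array[cluster][position]           # second entry -> 3
    d3_url = "D3_%d.visitalk.com/%03d" % (cluster, d3s_id)
    # d3_url == "D3_2.visitalk.com/003"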
Returning from the specific example to a generalized detailed description, if only one D3 cluster is present, then a D3 Key is not necessary, since the sole function of a D3 Key is to provide quick mapping information to a particular D3 cluster. If multiple D3 clusters are present, then the direct mapping information for a specific D3S may comprise the unique ID coupled with a D3 Key. However, a search for a D3S mapped to a particular user may still be performed with only a unique ID (i.e., without a D3 Key), since the mapping scheme will direct a unique ID to a specific D3S in a cluster. Therefore, only one D3S per cluster must be searched. One drawback of running a search among multiple D3 clusters using only a unique ID is that the search needs to be executed at all the D3 clusters until the particular D3 cluster having the desired D3S is found; it thus adds traffic and consumes network resources.
2) Public versus Private networks
The rise of private subnetworks connected to the public Internet has been primarily driven by security concerns. Private subnetworks may be shielded from the public Internet by devices such as proxy servers or firewalls. While a client or DTP within a private subnetwork connected to the Internet is generally allowed to send communications to public clients or DTPs without difficulty, such a client / DTP is generally not allowed to receive unsolicited communications unless in direct reply to an outgoing communication sent by the private client / DTP. The presence of the public Internet and private subnetworks means that five different possibilities exist for communications between clients: (1) public to public; (2) public to private; (3) private to public; (4) private to private on the same subnetwork; and (5) private to private on different subnetworks. For the first four connection types (public-public, public-private, private-public, and private-private on the same subnetwork), direct point-to-point communications may be established using known solutions. A distributed dynamic data system according to the present invention may be utilized for exchanging communications according to all five of these connection types, including between two private clients / DTPs located on different subnetworks, as illustrated in the top-level diagram of a distributed dynamic data system provided in FIG. 30. Two separate DTPs 3210, 3211 may connect to a public network 3201 through firewalls 3218, 3219 that shield private subnetworks 3208, 3209. By way of a first DTP 3210, a first user on a first subnetwork 3208 may send a message through a first firewall 3218 to a D3S (e.g., D3S 2 3221) mapped to a second DTP 3211 on a second subnetwork 3209, and include in that message the connection address for a D3S (e.g., D3S 1 3220) mapped to the first DTP 3210. If the second user utilizing the second DTP 3211 on the second subnetwork 3209 should query its own D3S (e.g., D3S 2 3221), then the communication initiated by the first DTP 3210 may be passed through the second firewall 3219 to the second user as part of the response to the query initiated by the second DTP 3211. Since this method of communicating between the first and second users flows through the respective D3Ss 3220, 3221, however, it consumes more resources than would direct point-to-point communications between the first and second DTPs 3210, 3211. Yet, a system according to the present invention will permit communications such as instant-type communications (including text or any other real-time messages) to proceed through the D3Ss 3220, 3221 mapped to the respective DTPs 3210, 3211 on different subnetworks 3208, 3209.
In another aspect of the invention, FIG. 18 is a schematic overview of a distributed dynamic DNS system 4000. As used herein, "Dynamic DNS" refers to a method used by client machines to determine the server with which a particular unique identification number, such as a Permanent Communication Number or PCN, should be registered. A unique identification number may be any type of number or identifier (e.g., a 12-digit number) which is unique to a particular user, such as a PCN. "Directory cluster" 4040 refers to a cluster of servers where a client 4010 registers a PCN so that other clients (not shown) may determine if it is online. The distributed dynamic DNS system 4000 according to the present invention and illustrated in FIG. 18 may be described in terms of functional block components and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware and/or software components configured to perform the specified functions. In addition, those skilled in the art will appreciate that the present invention may be practiced in any number of data communication contexts and that the various systems described herein are merely examples of applications for various aspects of the invention. Such general techniques that are known to those skilled in the art are not described in detail herein.
3) Dynamic DNS Cluster:
The dynamic DNS cluster 4030 provides an algorithm and a list of dynamic directory clusters to a client (or user) computer 4010. A client 4010 uses the algorithm in conjunction with a PCN to calculate which dynamic directory cluster (e.g., 4040) it should use to determine if the user associated with that PCN is online, or to register that PCN with the dynamic directory cluster. In its simplest form, the algorithm is the PCN modulo the number of dynamic directory clusters. Other algorithms may also be used.
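In code, the simplest form reduces to a single modulo operation. The following sketch treats the PCN as an integer and uses placeholder cluster addresses:

    def directory_cluster_for(pcn, directory_clusters):
        # Simplest algorithm: PCN modulo the number of dynamic
        # directory clusters currently available.
        return directory_clusters[pcn % len(directory_clusters)]

    clusters = ["dir1.example.net", "dir2.example.net", "dir3.example.net"]
    directory_cluster_for(123456789012, clusters)  # -> "dir1.example.net"

Any client holding the same cluster list computes the same answer, so a given PCN always registers with, and is looked up at, the same directory cluster.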
Each dynamic DNS cluster (e.g., 4030) preferably consists of multiple load balanced web servers (not shown) for redundancy. Every web server in a dynamic DNS cluster has access to the same list of IP addresses for the dynamic directory clusters.
The list of dynamic directory clusters is relatively static, so it can be replicated between dynamic DNS clusters. Once a dynamic DNS cluster (e.g., 4030) has the list of dynamic directory clusters, it will check to see which of the clusters are online. It will then use the list of online dynamic directory clusters to fulfill requests from client computers.
There are various ways a DNS cluster 4030 could determine which directory clusters are online and able to register PCNs. If the DNS cluster 4030 itself determines the online status of the directory cluster, it is possible that it will not register some dynamic directory clusters because connectivity to a particular dynamic directory cluster is not available through the Internet. However, if a dynamic DNS cluster 4030 cannot locate a directory cluster (e.g., 4040), it can be presumed that a client 4010 requesting a list of dynamic directory clusters from the dynamic DNS cluster 4030 will likely not be able to locate the same dynamic directory clusters as the dynamic DNS cluster 4030. Each DNS cluster (e.g., 4030) preferably requests a health check from each directory cluster (e.g., 4040) intermittently.
Additionally, it is advantageous to provide a method of propagating a list of dynamic directory clusters (e.g., 4040) to each of the DNS clusters (e.g., 4030). This allows a dynamic directory cluster 4040 to be taken offline for maintenance without appearing to clients to have failed. If a directory cluster 4040 is removed from a group of servers, the ability to force a client to retrieve the current list of directory clusters from a DNS cluster 4030 may be desirable. This would allow a directory cluster 4040 to be taken offline and would force each of the clients to update its dynamic DNS entries when it makes a request of the directory cluster going offline. This should allow any request for online statuses to be fulfilled, even when taking a directory cluster 4040 down for maintenance.
The dynamic directory cluster 4040 can be taken offline by simply removing it from the list of dynamic directory clusters supplied to the dynamic DNS cluster 4030. Because the dynamic DNS cluster 4030 only checks the online status of those dynamic directory clusters appearing in the list, any dynamic directory cluster not included in the list will never be sent to a client computer as a possible online directory. In order to take a dynamic directory cluster 4040 offline, updating the dynamic directory list should be segmented into two phases. First, a client 4010 is forced to update its online status in two dynamic directory clusters: the first online status update is in accordance with the original dynamic directory cluster list, and the second online status update is in accordance with the revised dynamic directory cluster list.
A client 4010 updates its online status with a certain periodicity, so once this period of time has elapsed, it is certain that every client has updated its online status in both dynamic directory clusters. After every client has updated its online status in both dynamic directory clusters, a process of sending the revised dynamic directory list to clients checking online status may be initiated. Thereafter follows a waiting period for the time it takes for the list of dynamic directory clusters to expire. After this expiration, none of the clients checking an online status will attempt to use the dynamic directory cluster that has been removed from the list. The advantage of this approach is that a directory cluster (e.g., 4040) may be taken offline without affecting a client's ability to determine if a particular PCN is online. A second approach is to have the directory cluster that is being taken offline redirect the request to the appropriate directory cluster. This second approach, however, may result in a client receiving an offline status when the correct status for a PCN is online, if the client requesting the PCN online status makes its request before the client setting the online status has had a chance to update it.
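The two-phase procedure might be orchestrated as in the following sketch. The UPDATE_PERIOD value, the publish_lists mechanism, and all names are hypothetical; the description above specifies only the ordering of the phases and the waiting periods.

    import time

    UPDATE_PERIOD = 300  # assumed client re-registration interval, in seconds

    def publish_lists(register_lists, query_list):
        # Placeholder: propagate which list(s) clients should register
        # with and which list status queries should be answered from.
        pass

    def take_cluster_offline(old_list, cluster_to_remove):
        new_list = [c for c in old_list if c != cluster_to_remove]
        # Phase one: clients update their online status in two clusters,
        # once per the original list and once per the revised list.
        publish_lists(register_lists=(old_list, new_list), query_list=old_list)
        time.sleep(UPDATE_PERIOD)  # every client has now registered twice
        # Phase two: clients checking online status receive the revised
        # list; wait for cached copies of the old list to expire.
        publish_lists(register_lists=(new_list,), query_list=new_list)
        time.sleep(UPDATE_PERIOD)
        return new_list  # cluster_to_remove may now be shut down safely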
a) Dynamic Directory
The Directory cluster 4040 preferably has access to a database containing each of the online PCNs.
b) Client Requests
There are at least two different types of requests for online status, each of which is dictated by a PCN. The first is a client request from a web page; the second is a request by an application. Although the two requests are almost identical, they are handled differently: one is handled solely by the client application, while the other is an HTTP request embedded in the web page by the web server.
To request the online status of a PCN, the client computer 4010 or the web server will first check to see if its list of directory clusters is current. If it is not, it will request the algorithm and list of dynamic directory clusters from the dynamic DNS cluster 4030 (determined by some form of global server load balancing). It will then use the algorithm in conjunction with a PCN to determine which dynamic directory cluster contains the online status of the PCN, making a request of that dynamic directory cluster for the PCN. The dynamic directory cluster 4040 returns the correct online status of the PCN. To list itself in the dynamic directory cluster, a client 4010 checks to see if its list of dynamic directory clusters is current. Once again, if it is not, it will request the algorithm and list of dynamic directory clusters from the dynamic DNS cluster 4030. It will then use the algorithm and its own PCN to determine which dynamic directory cluster (e.g., 4040) to register with, and will send a registration request to that dynamic directory cluster.
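The client-side logic common to both request types might look like the following sketch; the cache lifetime and the dns_fetch and directory_query callables are placeholders for whatever transport a particular implementation uses.

    import time

    class DirectoryList:
        # Client-side cache of the algorithm and directory cluster list.
        TTL = 3600  # assumed lifetime of the cached list, in seconds

        def __init__(self, clusters):
            self.clusters = clusters
            self.fetched_at = time.time()

        def current(self):
            return time.time() - self.fetched_at < self.TTL

        def cluster_for(self, pcn):
            # The algorithm supplied by the dynamic DNS cluster; here,
            # the simple modulo form described earlier.
            return self.clusters[pcn % len(self.clusters)]

    def check_online(pcn, cache, dns_fetch, directory_query):
        # dns_fetch() returns a fresh DirectoryList from the dynamic DNS
        # cluster; directory_query(cluster, pcn) asks a directory cluster
        # for the online status of a PCN.
        if cache is None or not cache.current():
            cache = dns_fetch()
        return directory_query(cache.cluster_for(pcn), pcn), cache

Registration follows the same path, except that the final call sends a registration request carrying the client's own PCN instead of a status query.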
FIG. 19 is a top-level diagram of an Internet-based communications system 21, illustrating various concepts as disclosed herein. As illustrated in FIG. 19, at the core of the communications system 21 is a global electronic network such as, for example, the Internet 25. A variety of Internet devices 22 connect to the Internet 25 for the purposes of carrying out communication over the Internet 25. As used herein, an Internet device 22 is any device that is directly accessible by or has the capability to directly access the Internet. An Internet device 22 may be, for example, a computer or a personal communication device (such as an IP telephone or videophone), and may access the Internet by a wireless or non-wireless type of connection. The particular method and manner in which an Internet device 22 accesses the Internet 25 is not important to the operation of the invention as broadly described herein.
As further shown in FIG. 19, a web site 26 connects to the Internet 25. The web site 26 comprises a web server 27 and a directory of Permanent Communication Numbers stored, for example, in a database 28. An example of at least a portion of the contents of a suitable directory is illustrated in FIG. 9, and includes such things as a user name, an indicator of the user's online status, and a unique Permanent Communication Number assigned to the user. The database 28 may comprise a dynamic portion subject to relatively frequent updating, and a permanent portion subject to relatively infrequent updating. A user may be assigned a Permanent Communication Number by, e.g., registering with the web site 26. Further details about the use of Permanent Communication Numbers are set forth later herein.
When a first user desires to communicate with another user, the first user connects to the web site 26 and enters the target user's Permanent Communication Number, which is transmitted to the web site 26. If the target user is on-line, the requesting user receives the target user's current IP address from the database 28 based upon the target user's Permanent Communication Number, and instant or live communication then ensues between the requesting user and the target user. In the event that the target user is not on-line, the web site 26 allows the requesting user to store a message in a video or voice mailbox for the target user. When the target user eventually comes on-line, the target user can retrieve messages from his or her mailbox.
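In outline, the web site's lookup is a simple branch on the target's online status. The record layout in the following sketch is an assumption made for illustration:

    def locate_user(pcn, directory):
        # directory maps a Permanent Communication Number to a record
        # holding the user's online status and most recent IP address.
        record = directory.get(pcn)
        if record is None:
            raise KeyError("unknown Permanent Communication Number")
        if record["online"]:
            # Target on-line: return the current IP address so instant
            # or live communication can ensue.
            return {"action": "connect", "ip": record["ip"]}
        # Target off-line: offer the video or voice mailbox instead.
        return {"action": "leave_message", "mailbox": record["mailbox"]}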
FIG. 20 is an architectural diagram of one embodiment of the Internet-based communications system illustrated in FIG. 19. Similar to FIG. 19, and as illustrated in FIG. 20, at the core of the communications system 31 is a global electronic network such as, for example, the Internet 35. A variety of Internet devices 32 (which may be any of the types of devices as discussed with respect to Internet devices 22 in FIG. 19) are shown connected to the Internet 35, for the purposes of carrying out communication over the Internet 35. As further shown in FIG. 20, a web site 36 connected to the Internet 35 comprises a web server 37 and a directory of Permanent Communication Numbers stored, for example, in a static database 40. A dynamic database 41 stores the on-line status of users associated with the Permanent Communication Numbers stored in the static database 40. Users seeking to communicate with other users may access the web site 36, enter the Permanent Communication Number of the target user, and thereby obtain the target user's most recent IP address and current on-line status. If the target user is not on-line, then the requesting user may leave a message in a mailbox for the target user. The message may be retrieved when the target user comes on line.
FIG. 1 illustrates a system cluster configuration as may be utilized in connection with one or more embodiments as described herein. As shown in FIG. 1, the global computer network known as Internet 51 is represented as a cloud. A co-location service 100 is also shown as a cloud in accordance with the convention of showing various network structures and functions as a cloud representation, where the specific details of the implementation of the particular structure or functionality are not particularly significant. Co-location service 100 in the system of the illustrative embodiment is provided by GlobalCenter, but may be any similar entity that is in the business of co-locating web services. Information regarding GlobalCenter is available on the Internet at the Internet address www.globalcenter.com. Co-location service 100 provides a large facility with a direct connection for continuous monitoring of the server site.
Co-location service 100 is linked to a router 101 via a link 113. Router 101 may comprise any suitable router unit. Router 101 provides connections to the Internet 51 and provides a single point of entry from the system of the invention into the Internet 51. From a user's perspective, router 101 provides a single point of contact for users. When a user types in a specific Uniform Resource Locator (URL), e.g., "http://www.visitalk.com" (a domain name associated with the name of the present applicant), the user is directed to router 101. Therefore, the address or name "visitalk.com" (in this example) will resolve to the IP address of router 101. In the illustrated embodiment, router 101 is coupled to two director switches 103, 107 via links 115, 117, respectively.
Each director switch 103, 107 is a commercially available unit. In the embodiment of the invention described herein, director switches 103, 107 comprise the ACE Director available from Alteon WebSystems and described in detail in data sheets provided on-line at Alteon's web site located at "http://www.alteonwebsystems.com/products". The Alteon ACE Directors are characterized as having an 8 gigabit backplane. The ACE Director is a Layer 4+ switch that includes software capability for high-performance server load-balancing. In the illustrated embodiment, director switches 103, 107 are configured in a redundant configuration such that one director 103 is redundant to the other director 107 and only one director switch 103 or 107 is active at any time. If one director switch, e.g., director switch 103, fails, the other director switch, e.g., director switch 107, picks up immediately. The failed director switch 103 may be replaced without taking the entire system down. A link 119 is provided between director switches 103, 107. Link 119 is preferably a high-capacity link which, in the illustrative embodiment, is a one gigabit link. This link is a high-capacity link so that, in the event failures occur in either link 115 or link 117, or if failures occur in the links between one director switch 103 or 107 and one of the enterprise switches 105, 109, then all traffic may be routed over link 119. Each director switch 103, 107 is a switching network that balances traffic across multiple servers or other devices. Director switches comprise software and/or hardware configured so that each sends three streams of traffic into each of the enterprise switches 105, 109.
One director switch, e.g., director switch 107, is designated and utilized as a primary director switch. The other director switch, i.e., director switch 103, is utilized as a secondary director switch. The secondary director switch 103 remains quiescent or dormant until a fault or failure associated with the primary director 107 occurs. The redundancy provided by the directors 103, 107 covers more than failure of one of the director switches 103, 107; it also covers a failure of either one of the links 115, 117 coupling the director switches 103, 107 to router 101.
Each director switch 103, 107 routes traffic to enterprise switches 105, 109 and to the server cluster 111 beyond the enterprise switches 105, 109. Link redundancy capability is provided between each director switch 103, 107 and the enterprise switches 105, 109. In the illustrated embodiment, three links are provided between each director switch and each enterprise switch. Links 121, 123, 127 connect director switch 103 to enterprise switch 105. Links 122, 124, 126 connect director switch 103 to enterprise switch 109. Links 131, 133, 137 connect director switch 107 and enterprise switch 105. Links 132, 134, 136 connect director switch 107 and enterprise switch 109.
The number of links between each director switch 103, 107 and each enterprise switch 105, 109 corresponds to the number of networks included in each enterprise switch 105, 109. With this arrangement, each director 103, 107 is coupled to three network portions of each enterprise switch 105, 109. Utilizing three different traffic paths between each director 103, 107 and each enterprise switch 105, 109 increases the ability to push more traffic through the system and to segregate that traffic into additional networks at the server level. There are three routes of traffic into each enterprise switch 105, 109 from each director switch 103, 107.
Enterprise switches 105, 109 are, in the illustrated embodiment, very large capacity switches. Each enterprise switch 105, 109 is highly redundant, i.e., each includes three networks, and each can accommodate a very large number of users. Each enterprise switch 105, 109 has a large switching fabric through which a high volume of data may be switched. The enterprise switches in the illustrative embodiment are characterized by 24 gigabit backplanes. A particularly well suited enterprise switch which may be used in the system of the illustrated embodiment is the Catalyst 4000 Series available from Cisco and described in various documentation available at Cisco's web site located at "http://www.cisco.com".
Both enterprise switches 105, 109 are, in the illustrated embodiment, active all the time, but if a failure disables one of the enterprise switches 105, 109, only half the network will be lost. The loss affects capacity of the system, but it does not take down the network. If only one of the enterprise switches were active at a time, and the other enterprise switch ran in a hot standby mode, the performance of the system would be determined by only one enterprise network. By running both enterprise switches active, the overall system performance is doubled. Each enterprise switch 105, 109 independently communicates with the server cluster 111. If either one of enterprise switches 105, 109 goes down, the entire traffic load can be handled by the remaining enterprise switch. The functionality of the cluster may be maintained if one director switch 103, 107 is lost, but not necessarily the load.
In the system architecture, a number of virtual local area networks or V-LANs have been set up behind enterprise switches 105, 109. By providing for V-LANs, various servers can communicate with each other through the enterprise switches 105, 109. This arrangement serves to divide up the traffic. One VLAN cannot see another. Traffic is segregated, bandwidth is improved, and contention among resources on the network is reduced. In each of the VLAN drawing figures that follow, the most relevant portion of the network is shown to provide an understanding of its structure and functionality. For purposes of clarity, not all elements of FIG. 1 are repeated in each of the VLAN drawing figures.
In the system of the invention, two servers LDAP1, LDAP2 of service cluster 111 are provided for "lightweight directory access protocol" (LDAP). LDAP is used with a service that allows a user with a certain type of software to log on and connect to the server and see a directory of other people who are logged into that server. VLAN2 shown in FIG. 2 is the VLAN that is used by Internet traffic users. In the illustrated embodiment, each server LDAP1, LDAP2 has two Internet protocol (IP) addresses: Server LDAP1 has IP addresses 10.2.1.1 and 10.6.1.1, and server LDAP2 has IP addresses 10.2.1.2 and 10.6.1.2. Referring back to FIG. 1, traffic from router 101 and director switches 103, 107 is routed to enterprise switches 105, 109. Director switches 103, 107 distribute that traffic to VLAN 2: they determine what type of traffic it is and which server the traffic goes to, and then forward the data packets to VLAN 2 and, ultimately, to the appropriate server. Director switches 103, 107 determine that the traffic is LDAP traffic and determine which of the four LDAP IP addresses the traffic is going to. Although each server LDAP1, LDAP2 has two IP addresses, only one instance of LDAP is running on each server LDAP1, LDAP2, but each server LDAP1, LDAP2 may be served from either of its two IP addresses. In operation, all four LDAP IP addresses will be looked at and a determination made as to which IP address has the least traffic. Each IP address is associated with a separate network interface card at the server LDAP1, LDAP2. Accordingly, each server LDAP1, LDAP2 includes two network interface cards for redundancy. If one network interface card fails, the traffic is routed to the other network interface card of that server. The network interface cards used in the servers of the illustrated embodiment are commercially available units. Each network interface card includes dual ports and therefore supports two link connections and can therefore have two IP addresses, one for each of the dual ports.
After the director switches 103, 107 determine which server LDAP1, LDAP2 has the least traffic, traffic is forwarded to the appropriate server IP address. The selected server LDAP1 or LDAP2 processes the user traffic and provides a response back through the VLAN. To the server LDAP1, LDAP2, the operation looks like a typical server request in which the server is plugged into a network and is working. Server LDAP1, LDAP2 simply sends a response back to the appropriate address, which is carried in the packet. As should now be apparent, the system of the invention provides balancing at the network interface card level, as contrasted with balancing at the server level.
Servers LDAP1, LDAP2 only support LDAP. Running the LDAP service provides directory service to users connecting to it. Suitable LDAP software is available from a number of vendors. In the illustrative embodiment described herein, the LDAP software is Microsoft's version, which comes in a product called Microsoft Site Server. LDAP is an industry standard, but several different companies create versions of it. Turning now to FIG. 3, the system of the invention includes a VLAN having three Internet Information Servers (IIS) IISMTS1, IISMTS2, IISMTS3. The VLAN, including servers IISMTS1, IISMTS2, IISMTS3, operates in exactly the same manner as VLAN 2 servers LDAP1, LDAP2. Each server IISMTS1, IISMTS2, IISMTS3 utilizes IIS software which is commercially available from Microsoft. MTS, Microsoft Transaction Server, is a software depository for business objects. Software objects that users create to perform certain tasks all reside on MTS. Each one of servers IISMTS1, IISMTS2, IISMTS3 supports two functions: each provides service via IIS and provides back-room software functionality with MTS. On each of servers IISMTS1, IISMTS2, IISMTS3, MTS objects perform certain functionality on the network. The servers IISMTS1, IISMTS2, IISMTS3 are physically separate servers from the LDAP servers LDAP1, LDAP2, but each works in the same way. Server IISMTS1 has two IP addresses, 10.2.1.3 and 10.6.1.3, and is linked to enterprise switch 105 via link 301 and to enterprise switch 109 via link 303. Server IISMTS2 has IP addresses 10.2.1.4 and 10.6.1.4, and is linked to enterprise switch 105 via link 305 and to enterprise switch 109 via link 307. Server IISMTS3 has IP addresses 10.2.1.5 and 10.6.1.5, and is linked to enterprise switch 105 via link 309 and to enterprise switch 109 via link 311. Turning again back to FIG. 1, director switches 103, 107 sense when a user is utilizing a browser such as Netscape or Internet Explorer and the user requests a page by sending a URL. Director switches 103, 107 determine that the URL request is to be routed through IIS for page service.
On a page, there may be icons or the like which may be clicked on to cause an MTS object to activate. When the user clicks on such an icon, a program is executed on a server, e.g., server IISMTS1. Server IISMTS1 executes one or more objects that cause something else to occur. For example, another object may be displayed, an entry may be added to a database, or an order may be processed. Two network interface cards, each corresponding to one IP address in each server IISMTS1, IISMTS2, IISMTS3, provide redundancy so that if any one interface card fails the server switches activity to the second network interface card in the same server. If a server IISMTS1, IISMTS2, IISMTS3 fails, it fails over to the other two servers. Thus, the system provides triple redundancy at the server level and single redundancy within a server for IIS and MTS. In this VLAN there is redundancy to each server IISMTS1, IISMTS2, IISMTS3. In FIG. 4, the VLAN management network of the system of the invention is shown as VLAN3. VLAN3 includes, inter alia, a management server MGT. This VLAN management network provides server management as well as switching infrastructure management. Remote management capability is provided by connection through a Point-to-Point Tunneling Protocol ("PPTP") link 400 from the Internet 51. VLAN management network VLAN3 is also used for Sequel Server connectivity as well as LDAP replication. Redundancy is again provided, with each of the network servers MGT, IISMTS1, IISMTS2, IISMTS3, LDAP1, LDAP2, SQL1 and SQL2 having connections to both enterprise switches 105, 109 via network interface cards located at the respective servers. The network interface cards are not shown in the drawing figures to reduce drawing clutter, but those skilled in the art understand that each link connection to a server as shown in the various figures has a network interface card connection at the server. Management server MGT has link 401 to enterprise switch 105 and link 403 to enterprise switch 109. IISMTS servers IISMTS1, IISMTS2, IISMTS3 have links 405, 409, 413 to enterprise switch 105 and links 407, 411, 415 to enterprise switch 109. LDAP servers LDAP1, LDAP2 have links 417, 421 to enterprise switch 105 and links 419, 423 to enterprise switch 109. Sequel servers SQL1, SQL2 have links 425, 429 to enterprise switch 105 and links 427, 431 to enterprise switch 109. In this VLAN, only one IP address is assigned per server. The servers will fail over from one link to the other in the event of a network interface card failure. Upon occurrence of a network interface card failure, the IP address is automatically transferred to the active network interface card connection.
In VLAN3 the two LDAP servers LDAP1, LDAP2 are the same as shown in VLAN2 of FIG. 2, but their connections are different. At the server level, hardware is managed from the server MGT. There are actually six connections out of each server, provided by three dual port network interface cards on the servers IISMTS1, IISMTS2, IISMTS3, LDAP1, LDAP2. The management server MGT and each sequel server SQL1, SQL2 each have two physical network interface cards, both dual port. Whenever an IISMTS server needs to talk directly to a sequel server, it will go through network VLAN3. The sequel servers SQL1, SQL2 are the database depository for any data collected. Searches are conducted against the sequel server databases. An Internet user will connect to one of the IIS servers IISMTS1, IISMTS2, IISMTS3, but because director switches 103, 107 perform load balancing, the user cannot predict which one he enters the system through via the URL address.
For example, when a user enters the system with a request, one of the director switches 103, 107 passes off the request to one of the IIS servers IISMTS1, IISMTS2, IISMTS3. When the user clicks on a button that says "Member Search," the IISMTS server to which the user is connected passes a request within VLAN 3. The request is routed to a sequel server SQL1 or SQL2, since the user request is a database operation.
A remote management facility can connect to management server MGT via the Internet 51 and link 400, and perform any management needed with the servers, such as reconfiguring software and monitoring resources to identify loading. A primary purpose of this network VLAN3 is to support communication between servers and to facilitate control of the servers via a remote management station. Management server MGT can access any of the servers IISMTS1 , IISMTS2, IISMTS3, LDAP1 , LDAP2, SQL1 , SQL2 and it can access enterprise switches 105, 109 and perform configuration tasks.
VLAN3 functions as an internal "housekeeping" network that maintains all database data and LDAP traffic. The remote management station accesses the management server MGT via a point-to-point tunneling protocol, which is a way of accessing server MGT using encryption. A further VLAN is provided in the system of the invention as shown in FIG. 5. VLAN network VLAN4 includes LDAP servers LDAP1, LDAP2. Enterprise switches 105, 109 each have access to both LDAP servers LDAP1, LDAP2. LDAP server LDAP1 has, in the illustrated embodiment, IP addresses 10.4.1.1 and 10.7.1.1 and is linked to enterprise switch 105 via link 501 and to enterprise switch 109 via link 503. LDAP server LDAP2 has IP addresses 10.4.1.2 and 10.7.1.2 and is linked to enterprise switch 105 via link 505 and to enterprise switch 109 via link 507. In the illustrative embodiment, VLAN4 provides a pool of the LDAP servers for internal system access only to the transaction servers IISMTS1, IISMTS2, IISMTS3. VLAN2 is for Internet users, whereas VLAN4 is for transaction servers in the server cluster 111.
For example, when a user connects to server cluster 111 from the Internet 51 to access the permanent directory from one of sequel servers SQL1, SQL2, the accessed sequel server SQL1 will fire an MTS object that will go out and perform a look-up on an LDAP directory. The system includes two different kinds of directory: an LDAP directory, which may be supported almost entirely out of the box by any appropriate LDAP application, e.g., Microsoft LDAP, and a permanent directory, which is a directory of all members of the service provided by the system. The members identified in the permanent directory may or may not be currently on-line on the Internet. This permanent directory database is maintained by the sequel servers SQL1, SQL2. A second directory provides a list of all the permanent directory members who are on-line at substantially the time a request is made. One of the servers IISMTS1, IISMTS2, IISMTS3 executes an MTS object to do a look-up against active members and will indicate whether or not a member is on line. As will be explained elsewhere, if a member is on line, a call can be made to the active member and real-time communication can occur. VLAN4 supports that kind of traffic so that IISMTS servers IISMTS1, IISMTS2, IISMTS3 can fire MTS objects that perform certain operations against the LDAP directory. This traffic is segregated from all other traffic.
Turning now to FIG. 6, VLAN 5 is the network used for traffic destined for LDAP servers LDAP1, LDAP2. VLAN5 has one primary side and a standby side. The destination is a virtual IP address that is provided by a director switch 103 or 107. Once the virtual IP address is utilized, traffic will be load-balanced to the two LDAP servers LDAP1, LDAP2. Each IISMTS server IISMTS1, IISMTS2, IISMTS3 has one IP address. In an illustrated embodiment, server IISMTS1 has address 10.5.1.3 and is linked to enterprise switch 105 via link 601 and to enterprise switch 109 via link 603. Server IISMTS2 has address 10.5.1.4 and is linked to enterprise switch 105 via link 605 and to enterprise switch 109 via link 607. Server IISMTS3 has address 10.5.1.5 and is linked to enterprise switch 105 via link 609 and to enterprise switch 109 via link 611. For example, if a user wants to obtain a directory listing, the request will come in on VLAN5. The request goes to the primary director switch 103, which in turn looks at the loading on the VLAN4 LDAP servers LDAP1, LDAP2 and transmits the request to the more lightly loaded server. The LDAP result is sent back to the director switch 103, which presents the results back to the requesting object. In another example, if a member search is performed for all members having a specific listed interest, a request will cause an MTS object on VLAN 5 to fire. The system would then route the request back up through director switch 103 to VLAN 3. In accordance with the system of the invention, one server talks to another server across virtual networks when the resource that it needs, such as the LDAP directory, is not on the same virtual network.
A significant advantage of the system of the present invention as illustrated is that it is an Ethernet type of network in which contention is reduced significantly. Contention is reduced by creating artificial separate networks so that, for example, whenever a sequel server SQL1 is talking to an LDAP server LDAP1, that communication takes place over a particular VLAN. None of the other VLANs hears the communication. When MTS server IISMTS1 is talking to LDAP server LDAP1, that happens over a particular VLAN and therefore does not interfere with other traffic. Contention is thus greatly reduced: a very complex Ethernet type network is formed into multiple simpler Ethernet type networks, each of which is still contention-based but carries a reduced volume of traffic.
The system of the present invention provides a high level of security. Traffic cannot pass from one VLAN to the next without authority of either a director switch 103, 107 or an enterprise switch 105, 109. The VLAN networks are effectively hidden from the Internet. In other systems, if a "hacker" hits a switch he will either get through the switch or not. In the present system, even if the hacker were to get through the switch, he could still not get into any VLAN. Depending on what kind of traffic the hacker is sending, not only would he have to spoof or fool his way through the switch, but he would also have to know how to get from the switch into the particular VLAN to which he wanted access. The VLANs are thus hidden from the entire Internet via the director switches.
In the system as illustrated there are multiple networks and 5 VLANs. Each VLAN, though called a virtual local area network, is separate from the others. If servers and switches are on the same VLAN and on the same network, then they can talk to each other. For example, FIG. 3 shows IISMTS servers IISMTS1, IISMTS2, IISMTS3 all on the same VLAN2. Exemplary IP addresses for each server port are indicated. The address includes a network number portion and a host address portion. For IP address 10.2.1.3, the network number portion is 10.2.1 and 3 is the actual host address portion. The only other switches and servers that can communicate with IP address 10.2.1.3 are ones that have an IP address beginning with 10.2.1, i.e., IP address 10.2.1 defines a network. IISMTS server IISMTS1 has a second link connection to enterprise switch 109 which carries IP address 10.6.1.3. That IP address is on a completely different network, which may be identified as network 10.6.1. So the only communications that can occur with IP address 10.6.1.3 are with other servers or switches with addresses beginning 10.6.1, which in VLAN 2 shown in FIG. 3 are servers IISMTS2, IISMTS3 with IP addresses 10.6.1.4 and 10.6.1.5. The VLANs in the system of the invention actually separate traffic. The only way to make two networks talk to each other is by a router. Each director switch 103, 107 is, among other things, a router, so each director switch 103, 107 can communicate with the different networks and can communicate with either the 10.2 or the 10.6 side of the servers IISMTS1, IISMTS2, IISMTS3 of VLAN2. By combining two different networks into a single VLAN, redundancy is provided and performance is enhanced. If an entire network goes down, functionality is not lost.
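The network-number arithmetic described above can be checked with Python's standard ipaddress module; the /24 prefix length is an assumption consistent with the three-octet network numbers quoted in this example.

    import ipaddress

    # Each dual-homed server sits on two networks, e.g. 10.2.1.0/24
    # toward one enterprise switch and 10.6.1.0/24 toward the other.
    net_a = ipaddress.ip_network("10.2.1.0/24")

    def same_network(addr1, addr2, network):
        # Hosts can exchange traffic directly only when both addresses
        # share the same network number portion.
        return (ipaddress.ip_address(addr1) in network and
                ipaddress.ip_address(addr2) in network)

    same_network("10.2.1.3", "10.2.1.4", net_a)  # True: both on 10.2.1
    same_network("10.2.1.3", "10.6.1.3", net_a)  # False: different networks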
The system of the present invention provides both a dynamic and a static directory. In the illustrative embodiment, the dynamic directory provides a list of Internet users who are currently connected to a server, and the static or permanent directory provides a list of members to services supplied by the server or related servers and system.
FIGS. 7 and 8 are useful for understanding the dynamic directory. When an Internet user initially turns on his or her computer, loads the appropriate communications software, e.g., a client program conforming to the H.323 specification, and enters the name or address of a site that he or she desires to access, the client software automatically connects to the server associated with that site. When the user connects to the server, the server obtains user information via the client software. The user information is stored in an LDAP dynamic directory, and for as long as the user is connected to the LDAP server, the user information is maintained. The information is not stored to a permanent directory, and when the user drops his or her connection to the server, the user information is dropped. In addition, a permanent directory is provided, which may be on a different server or may be part of the same logical database. The permanent directory includes all users who have chosen to register with the service provided. In accordance with the principles of the invention, an interaction is provided between the dynamic and permanent directories. Users stored in the permanent directory are offered an opportunity to register at the site in return for various service and/or product offerings that are made available. Users register with their name, address, and all other relevant information. The users become part of the permanent list whether they are connected to the server or not, and they are always on that list. Because it is desirable to develop an increasingly large permanent directory, the system of the present invention is unique in that it actively solicits membership.
As shown in the flow diagram of FIG. 8, as a user logs on to the server at 801, the user's email address is extracted from the user's client software at step 803, and the email address is added to the dynamic directory as indicated at step 805. The permanent directory is accessed and the user's address is looked up at step 806. If the user is not already listed in the permanent directory, the user is listed in the permanent directory and flagged to indicate that the user has been sent an invitation to register at step 807. An instant email is sent to the user, based upon the email address provided to the server from the user's client software, at step 808. The email will provide an invitation to join the permanent directory. If the user has previously registered, an email message may be automatically sent to him to provide specific information as indicated at step 813. When the user signs off at the site, the information stored in the dynamic directory is relinquished. One significant feature of the arrangement described above is that the identifying information is not consciously provided for collection at the time it is collected. When setting up the client software, i.e., the H.323 software, the user enters an email address and other information so that any server to which the user subsequently connects is provided that information. In many instances, the information provided is intentionally deceptive or inaccurate because the user does not want to have his or her real identity known. To eliminate deceptive, incorrect identities, the present system monitors email returns at step 819. If the email sent to the user is returned within a short period of time, the presumption is that the email address is incorrect and the user will be dumped from the dynamic directory as indicated at step 821.
This is done, for example, to eliminate pornographic, foul, or obscene bogus email addresses, which are frequently used where directory listings of users are accessible on the Internet. If no email is returned within the period of time for an auto return, the registration process may be initiated at step 823.
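The logon flow of FIG. 8, as described above, might be sketched as follows. The step numbers refer to the figure; the helper functions, the directory objects, and the AUTO_RETURN_WINDOW value are placeholders invented for this sketch.

    AUTO_RETURN_WINDOW = 600  # assumed auto-return window, in seconds

    def send_invitation_email(email): pass          # placeholder (step 808)
    def send_info_email(email): pass                # placeholder (step 813)
    def begin_registration(email): pass             # placeholder (step 823)
    def email_bounced(email, within): return False  # placeholder (step 819)

    def on_user_logon(client_info, dynamic_dir, permanent_dir):
        # Step 803: extract the email address from the client software.
        email = client_info["email"]
        dynamic_dir.add(email)                       # step 805
        entry = permanent_dir.lookup(email)          # step 806
        if entry is None:
            # Step 807: list the user and flag the pending invitation.
            permanent_dir.add(email, flag="invited")
            send_invitation_email(email)             # step 808
        else:
            send_info_email(email)                   # step 813
        if email_bounced(email, within=AUTO_RETURN_WINDOW):
            dynamic_dir.remove(email)                # step 821
        else:
            begin_registration(email)                # step 823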
In one embodiment, users may become listed in the permanent directory in one of two ways: either they visit the web site and register on their own, or a trigger is fired which causes an entry to be made in the permanent directory without the user knowing that the entry is being made. Thus the user shows up in the permanent directory whether or not the user is currently on-line.
In accordance with one aspect of the present invention, the permanent and dynamic directories are merged as shown in FIG. 9. When a user is viewing the permanent directory, one tabular column of the display will include an indication of which registrants are presently on-line. In the display of FIG. 9, a flashing spot giving the appearance of a flashing green light indicates that a registrant is on-line. In operation, when a user logs into the permanent directory, as the web site is downloading the permanent directory list to the user, it cross-references with the dynamic directory to see if any of the permanent directory entrants are on-line. So as each registrant is listed, the dynamic directory is checked to see if the registrant is on-line, and a visual indication is provided on the displayed list. The result of utilizing the dynamic directory, the permanent directory, and the merge is that users know if other registrants are on-line. This provides the capability of establishing real-time communications via audio and/or audio-video communication. In addition, the merged list as displayed includes a connect button or icon which permits establishing communication in real time. In the event that a desired registrant is not currently online, other services may be utilized by the user.
In an alternate embodiment of the invention, visual representations of the dynamic directory are not utilized. Instead, in this embodiment, each computer includes a connector object program which is loaded into the computer. Such a connector object program may, for example, be downloaded into the computer from the visitalk.com website. The connector object preferably runs in the background as a non-visual program and maintains a connection to the non-visual directory. It does not return a list to the computer, but instead polls the system director 103, 107 (e.g., using a ping command) to let the system director 103, 107 know that it is still on-line. In this embodiment of the invention, the permanent directory will know whether a user is on-line because the connector object maintains a connection to the system. At the user's computer, an icon is displayed on the computer screen in the system tray. The connector object thus permits the user to connect to the LDAP directory without having to receive a visual representation of a directory locally. Typically, when an LDAP file is accessed, the LDAP file will return a directory; in the present instance, the return of the directory from the LDAP server is suppressed.
FIG. 10 illustrates a system architecture in accordance with one embodiment as disclosed herein. As illustrated in FIG. 10, a plurality of system clusters 111 are connected to the Internet 51. Each system cluster 111, which may be located in a different geographic area, serves as a communications portal to the Internet 51. Each system cluster 111 is substantially the same from a functionality standpoint, but the various system clusters 111 may have different numbers of servers connected. Each system cluster of the invention is readily scalable in the number of servers connected to the enterprise switches 105, 109. One reason for providing geographically separate clusters is so that long distance telephone access charges for users to access the system clusters 111 may be minimized. Each system cluster 111 provides communication services for its geographic area via the Internet, and between other geographic areas also via the Internet 51. Each system cluster 111 may be accessed by users having a variety of Internet devices 71, which include, by way of example and not limitation, computer terminals and personal communication devices such as pagers, phones, video devices and the like. Also connected into the system are one or more management centers 81. Management centers 81 provide the system cluster management functions described above in conjunction with the management server MGT. By providing a system architecture such as that shown in FIG. 10, substantially worldwide real-time communications access may be provided to users of the system.
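Returning to the non-visual connector object described above, its keep-alive behavior can be sketched as a background thread that periodically notifies the system director that the user remains on-line. This is a sketch under stated assumptions: the class and parameter names (ConnectorObject, poll_fn) are invented, and the poll itself is left as a caller-supplied stub rather than an actual ping or LDAP operation.

```python
# Illustrative connector object: runs in the background with no visual
# representation and periodically tells the system director 103, 107
# that this user is still on-line. The directory list return is never
# requested, mirroring the suppressed LDAP directory return above.
import threading

class ConnectorObject:
    def __init__(self, user_id, poll_fn, interval_seconds=30):
        self.user_id = user_id
        self.poll_fn = poll_fn            # stub for the ping to the director
        self.interval = interval_seconds
        self._stop = threading.Event()

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # Event.wait() returns False on timeout, so this loops once per
        # interval until stop() is called.
        while not self._stop.wait(self.interval):
            self.poll_fn(self.user_id)    # "still on-line" heartbeat

    def stop(self):
        self._stop.set()
```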
As described above, dynamic and permanent directory functions are provided in the system clusters 111. The permanent directory provides information for users who have registered to use services provided by the system. The dynamic directory provides information on users who are logged on to the web site serviced by a cluster in one embodiment, and who have their Internet device activated or turned on in another embodiment. In one embodiment of the invention, each system cluster 111 maintains its own directories. To assure that each system cluster directory contains up-to-date information regarding who is currently logged on to the system, communications paths are established between the system clusters 111 to exchange directory update information. Each system cluster 111 will thus maintain substantially complete and updated directories. In accordance with one aspect of the invention, each system cluster will periodically broadcast the directory changes that have occurred during the immediately prior predetermined time period to all other system clusters 111. By maintaining updated directories at each system cluster, the reliability of the system is enhanced. FIG. 11 illustrates the connections via the Internet 51 between several of the system clusters for broadcasting directory updates to other system clusters 111.
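The periodic exchange of directory deltas between clusters (FIG. 11) might look like the following sketch. It assumes an in-process object per cluster with a stubbed transport; ClusterDirectory, local_update, and broadcast_pending are illustrative names, and a real system would ship the delta over the Internet rather than call a peer method.

```python
# Illustrative inter-cluster update exchange: each cluster accumulates
# changes and periodically broadcasts the delta to all other clusters.

class ClusterDirectory:
    def __init__(self, name, peers=None):
        self.name = name
        self.entries = {}        # user -> status, the cluster's full view
        self.pending = {}        # changes since the last broadcast
        self.peers = peers or []

    def local_update(self, user, status):
        self.entries[user] = status
        self.pending[user] = status

    def broadcast_pending(self):
        # Called once per predetermined time period.
        delta, self.pending = self.pending, {}
        for peer in self.peers:
            peer.receive_delta(delta)

    def receive_delta(self, delta):
        self.entries.update(delta)

a, b = ClusterDirectory("A"), ClusterDirectory("B")
a.peers, b.peers = [b], [a]
a.local_update("user1", "on-line")
a.broadcast_pending()
print(b.entries)   # {'user1': 'on-line'} - B now reflects A's change
```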
Although the description above has focused on the illustrative embodiment in which computers are used to access the servers, the computers can just as easily be replaced by other Internet devices. As used herein, an Internet device is any device which can access or be accessed by the Internet, and includes all manner of devices such as computers, communication devices such as telephones, videophones, cameras, keyboards, and any other input/output device which is connectable to the Internet either directly or indirectly. For example, a personal communications device may be used as an Internet device.
To facilitate the use of the system as a fully integrated and operational communications portal, each user of the system in one embodiment of the invention has a personal identification code or Permanent Communication Number. The Permanent Communication Number is a permanent personal identification code that is pre-assigned to the registered user of the communication services provided by the system of the invention. Whenever a registered user enters his or her personal identification code into an Internet device, that Internet device becomes identified in the system directories as the Internet device at which the registered user is active. With this type of arrangement, a registered user can receive communications at any Internet device so long as the user has entered his or her Permanent Communication Number on the Internet device. To avoid multiple Internet devices being indicated as the Internet device at which the registered user is located, the system of the invention will update the directory listing for each user to overwrite any entries for prior Internet devices at which the user has registered his or her Permanent Communication Number.
The interactive operation of the registration of a user using a Permanent Communication Number with the system of FIG. 10 is shown in the flow diagram of FIG. 12. Initially, a user registers with the system at step 1201, providing identification information including name, a billing address and credit card information. The system assigns a Permanent Communication Number, unique to the user, at step 1203. A permanent directory entry is made for the registered user at step 1204. The user may, as indicated at step 1205, enter the personal identification code at any Internet device, such as device 1007 shown in FIG. 10. In one embodiment of the invention, upon entry of the Permanent Communication Number at Internet device 1007, Internet device 1007, at step 1207, utilizing a connector object as described above, accesses one of the system clusters 111. The system cluster 111 verifies that the Permanent Communication Number is a valid code at step 1209. If the Permanent Communication Number received at the system cluster 111 is not valid, service to the Internet device 1007 is denied as indicated at step 1211. If the personal identification code is a valid Permanent Communication Number, the permanent directory is updated to indicate that the user is accessible on the system at step 1213. The system cluster 111, in one embodiment of the invention, will return information to the Internet device 1007 at step 1215 to indicate whether the Internet device 1007 is accessible via the system or whether service is denied. If the Internet device 1007 is accessible via the system, the Internet device may receive incoming calls via the system. Subsequently, if the user activates a second Internet device 1009 using the same Permanent Communication Number, the process is repeated and the directory is updated with the IP address of the Internet device 1009. The prior directory entry is overwritten and all incoming calls to the user will now be routed to the Internet device 1009. In any instance in which the directory at the system cluster 111 at which the user activates an Internet device 1007 or 1009 is updated, the directories at all the system clusters 111 will be updated to reflect the status of the user as being accessible on the system, as described above.
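The server-side overwrite behavior of the FIG. 12 flow can be illustrated with a short sketch. It assumes a dictionary keyed by Permanent Communication Number; register_user, activate_device, and the simple membership-based validity check are illustrative stand-ins for the real verification at step 1209.

```python
# Illustrative FIG. 12 flow: entering a Permanent Communication Number at
# an Internet device updates the directory and overwrites any prior device
# entry, so incoming calls follow the user from device to device.

permanent_directory = {}   # pcn -> {"name": ..., "device_ip": ...}

def register_user(pcn, name):
    # Steps 1201-1204: register the user and create a directory entry.
    permanent_directory[pcn] = {"name": name, "device_ip": None}

def activate_device(pcn, device_ip):
    entry = permanent_directory.get(pcn)
    if entry is None:
        return "service denied"            # step 1211: invalid PCN
    entry["device_ip"] = device_ip         # step 1213: overwrite prior device
    return "accessible"                    # step 1215: device may take calls

register_user("212355512127", "Alice")
print(activate_device("212355512127", "10.0.0.7"))   # device 1007
print(activate_device("212355512127", "10.0.0.9"))   # device 1009 overwrites
print(activate_device("000000000000", "10.0.0.1"))   # invalid -> denied
```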
The system of the invention may also be used to establish communications between Internet devices and non-Internet devices. For example, if a user at Internet device 1009 desires to establish a communication with a conventional telephone type device, the registered user at Internet device 1009 can access a telephone directory listing and launch a call to the conventional telephone device via the system of the invention. More specifically, as shown in FIG. 14, a user, having registered his or her presence at Internet device 1009 as indicated in FIG. 12 by entering a unique identification number such as a Permanent Communication Number, enters the telephone number of the desired telephone at step 1401. The system cluster 111 which Internet device 1009 accesses receives the telephone number and, through a directory lookup at step 1403, identifies the system cluster 111 in geographic proximity to the telephone switching center 1421 with which the telephone number is associated, to minimize telephone costs associated with placing such a call. An Internet connection between the system cluster 111 with which Internet device 1009 is associated and the system cluster 111 which is in proximity to switching center 1421 is established at step 1405.
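The egress-cluster lookup at step 1403 amounts to mapping the dialed number to the cluster nearest its terminating switching center. The sketch below is illustrative only: the prefix table and cluster names are fabricated, and a real lookup would consult the system directory rather than a hard-coded table.

```python
# Illustrative step 1403 lookup: pick the system cluster nearest the
# terminating switching center so toll charges are minimized. The prefix
# table below is a made-up example, not from the specification.

CLUSTER_BY_PREFIX = {
    "602": "cluster-phoenix",
    "212": "cluster-new-york",
    "415": "cluster-san-francisco",
}

def select_egress_cluster(telephone_number):
    prefix = telephone_number[:3]          # area-code lookup
    return CLUSTER_BY_PREFIX.get(prefix, "cluster-default")

print(select_egress_cluster("6025551234"))   # routed via cluster-phoenix
```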
The Permanent Communication Number is an identifier which is preferably uniquely assigned to an individual. Each Permanent Communication Number is assigned by a controlling entity that has responsibility for assigning Permanent Communication Numbers upon request. The assignee of the present invention, for example, generates and assigns Permanent Communication Numbers. Each Permanent Communication Number is preferably a 12-digit numeric code arranged in a format of "xyyy yyyy yyyy", where "x" is any number from 2 to 9 and "y" is any number from 0 to 9, although the Permanent Communication Number may of course be chosen to be any size, depending mainly upon the expected number of users. Permanent Communication Numbers are generally assigned in a sequential fashion. As each Permanent Communication Number is assigned, a permanent directory entry is made for that Permanent Communication Number. When an Internet device user enters his or her Permanent Communication Number at the Internet device, the Internet device is uniquely associated with that individual until such time as he or she enters the unique Permanent Communication Number at another Internet device. Each Internet device includes a unique device identifying code such that when an Internet device logs onto the system, the Internet device is specifically identified. The Permanent Communication Number directory is updated to indicate the identity of the Internet device at which the user has entered his or her Permanent Communication Number. Thus, when a user enters his or her Permanent Communication Number, the specific Internet device identity and the Permanent Communication Number are forwarded to the system directory. The assignment of Internet device numbers is similar to or the same as the present assignment of a unique equipment identification number to each computer and each cellular phone presently manufactured. Thus, an individual can receive communications directed to him or her at any Internet device located anywhere in the world, thereby providing unparalleled communications capability and access.
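The stated format lends itself to a direct check: twelve digits, with the first restricted to 2 through 9. A minimal sketch follows; the function names and the zero-padded sequential increment are illustrative, and wrap-around or exhaustion handling is omitted.

```python
# Illustrative format check and sequential assignment for Permanent
# Communication Numbers: 12 digits in the form "xyyy yyyy yyyy",
# where x is 2-9 and y is 0-9.
import re

PCN_PATTERN = re.compile(r"^[2-9]\d{11}$")   # 12 digits, first digit 2-9

def is_valid_pcn(pcn: str) -> bool:
    # Spaces in the displayed "xyyy yyyy yyyy" grouping are ignored.
    return bool(PCN_PATTERN.match(pcn.replace(" ", "")))

def next_sequential_pcn(last_assigned: str) -> str:
    # Assignment is generally sequential, per the description above.
    return str(int(last_assigned) + 1).zfill(12)

print(is_valid_pcn("2123 5551 2127"))        # True
print(is_valid_pcn("0123 5551 2127"))        # False: first digit must be 2-9
print(next_sequential_pcn("212355512127"))   # 212355512128
```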
A process for assigning Permanent Communication Numbers is illustrated in FIG. 13. At step 1301, a request is received from a user for a Permanent Communication Number. At step 1303, a determination is made as to whether or not the request is for a vanity number. If the request is not for a vanity number, the next available Permanent Communication Number is identified at step 1304 and assigned to the user at step 1305. The permanent directory is updated at step 1307 to reflect the assigned Permanent Communication Number and the user information, and the user is notified of his or her Permanent Communication Number at step 1309. Returning to step 1303, if it is determined that the user has requested a vanity Permanent Communication Number and the vanity number is determined at step 1311 to be available, it is assigned at step 1305 and the remainder of the process repeats. If, however, it is determined at step 1311 that the vanity Permanent Communication Number is not available, the user is notified of the unavailability at step 1313.
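The branching in FIG. 13 can be condensed into a few lines. This is a sketch under stated assumptions: the in-memory assigned set and starting number are stand-ins for the permanent directory and the real number pool, and request_pcn is an invented name.

```python
# Illustrative FIG. 13 assignment process, including the vanity branch.

assigned = set()
next_number = [200000000000]   # first available sequential PCN (illustrative)

def request_pcn(user, vanity=None):        # step 1301: request received
    if vanity is not None:                 # step 1303: vanity requested?
        if vanity in assigned:             # step 1311: not available
            return f"{user}: vanity number unavailable"   # step 1313
        number = vanity
    else:
        number = str(next_number[0])       # step 1304: next available number
        next_number[0] += 1
    assigned.add(str(number))              # steps 1305/1307: assign and record
    return f"{user}: assigned {number}"    # step 1309: notify the user

print(request_pcn("alice"))
print(request_pcn("bob", vanity="222233334444"))
print(request_pcn("carol", vanity="222233334444"))   # already taken
```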
As indicated above, the system in accordance with certain embodiments described herein allows any Internet device to access system-provided services. In accordance with yet another aspect of the invention, a novel Internet device is provided. In accordance with the principles of the invention, an Internet device is any device which is directly accessible by, or has the capability to directly access, the Internet and which receives a Permanent Communication Number. The Internet device may be a computer or a personal communication device such as a telephone or videophone, and may access the Internet by a wireless or hardline type of connection. The method and manner in which the Internet device accesses the Internet is not important to an understanding of the present invention.
An Internet device 1401 in accordance with one embodiment is shown in FIG. 14. Internet device 1401 is a personal communication device. The device 1401 includes one or more data input devices such as a keypad 1403, microphone 1404, touch screen 1405, sensors 1406, or any other device or element for the input of personal identification information. In the illustrative embodiment of the Internet device shown, the keypad is used to enter the Permanent Communication Number of a user. A display included in the Internet device 1401 may prompt the user to enter his or her Permanent Communication Number when the device 1401 is powered up.
A block diagram of the Internet device 1401 is shown in FIG. 15. The Internet device 1401 includes a processor 1501 and associated memory 1503, a receiver 1505, a transmitter 1507 and an antenna 1509. The operation of the device 1401 is substantially the same as that of commercially available digital cellular phones and commercially available digital personal communication devices. Reference may be made to any number of prior art documents that describe the general operation and architecture of prior digital cellular phones and digital personal communication devices. One significant difference between the Internet device 1401 and various prior art personal communication devices, cellular phones and the like is that the Internet device 1401 is preferably compatible with the International Telecommunication Union (ITU) recommendations for implementing the H.323 protocol. The ITU H.323 recommendation is a mutually agreed upon specification which defines how personal computers can inter-operate to share audio and video streams over computer networks, including intranets and the public Internet.
The operation of the Internet device 1401 of the present invention is shown in the flow diagram of FIG. 16. At power up 1601, the processor 1501 operates to display a prompt to the user of the Internet device 1401 to enter his or her Permanent Communication Number, as indicated at step 1603. The user then enters the Permanent Communication Number at step 1605. The Internet device 1401, by use of a connector object, transmits the received Permanent Communication Number to the Internet server at step 1607. In addition, the Internet device 1401 transmits a unique equipment code identifying the particular Internet device 1401 to the server. When a server receives the Permanent Communication Number and the equipment code, the server updates its directory to reflect the association between the Permanent Communication Number and the specific Internet device 1401. The server will return information to the Internet device 1401 at step 1609 indicating either that the Internet device has been denied service, as indicated at step 1611, or that it is active. Internet Protocol communications may then be received at Internet device 1401 from other users connected to the Internet. The Internet device 1401, as long as it is powered up, will periodically provide its equipment code and the entered Permanent Communication Number to the Internet server, as indicated at step 1613, via a connector object to indicate that the user's Internet device is available for receiving incoming calls. In one embodiment of the invention, the Internet device includes memory 1503 for storing more than one Permanent Communication Number, thereby permitting an Internet device 1401 to be simultaneously accessible for calls for more than one individual, or for more than one purpose such as business and personal use, or so that all members of a group may register for use of a common Internet device.
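A device-side sketch of the FIG. 16 power-up flow follows. It assumes a stubbed server object in place of the real system cluster; StubServer, EQUIPMENT_CODE, and the shortened, bounded heartbeat loop are illustrative choices, not part of the specification.

```python
# Illustrative FIG. 16 device flow: prompt for the PCN, transmit it with
# the device's unique equipment code, then periodically re-announce
# availability for incoming calls.
import time

EQUIPMENT_CODE = "DEV-1401-0001"   # stand-in for the factory equipment code

class StubServer:
    """Stand-in for the Internet server reached via the connector object."""
    def __init__(self, valid_pcns):
        self.valid = valid_pcns
        self.directory = {}
    def register(self, equipment_code, pcn):
        if pcn not in self.valid:
            return "denied"
        self.directory[pcn] = equipment_code   # associate PCN with this device
        return "active"

def power_up(server, read_input=input, heartbeats=3):
    pcn = read_input("Enter your Permanent Communication Number: ")  # 1603-1605
    if server.register(EQUIPMENT_CODE, pcn) != "active":             # 1607-1609
        return "service denied"                                      # step 1611
    for _ in range(heartbeats):   # bounded here; a real device repeats while on
        time.sleep(0.1)           # interval shortened for the demo
        server.register(EQUIPMENT_CODE, pcn)                         # step 1613
    return "active"

server = StubServer(valid_pcns={"212355512127"})
print(power_up(server, read_input=lambda prompt: "212355512127"))
```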
Turning back to FIG. 10, one embodiment as described herein essentially provides an IP-based switching center or central office for supporting Internet devices 71 or H.323 devices, i.e., not for supporting prior art telephone devices but for supporting software which in turn supports the H.323 protocol. In such an embodiment, the system cluster 111 functions as an IP switch for IP communications. Each system cluster is part of the Internet as viewed by Internet devices, and anyone on the Internet can access the system cluster. The system cluster provides a directory of users, a listing of the Permanent Communication Numbers, voice mail, video mail, conferencing service, and all the services that one would expect from a traditional public switched telephone switching center. Where prior art digital telephone switching systems provide dial tone, access, listings and directory services for traditional telephones coupled through analog circuits, the system cluster 111 provides the same functionality for Internet devices connected via IP to the system cluster directory. The services are provided by the servers shown in the various figures.
In one aspect, certain embodiments as described herein operate as the equivalent of a PBX or Centrex service for Internet devices. More specifically, each server cluster 111 may be viewed as operating as a PBX/Centrex service for Internet devices which access the cluster via the Internet. By providing server clusters 111 accessible by Internet devices via the Internet, the Internet in combination with a server cluster provides switching functionality for Internet devices, allowing incoming calls to be directed to specific Internet devices at a common geographic location, area or areas. The architecture of certain embodiments of the system as described herein is readily expandable to permit additional servers to be added to provide additional features. The use of SQL servers SQL1, SQL2 permits directories or memories to be provided for the storage of voice and/or video mail for registered users who are not logged on to the Internet. FIG. 17 illustrates the addition of features to a cluster 111 to permit voice and video mail. Mass memory 1701, 1702 is provided for the storage of voice and video messages. Operation of the mass memories 1701, 1702 for storage of messages is under the control of the SQL servers SQL1, SQL2. In addition, servers SQL1, SQL2 are utilized to provide for video conferencing by directing conference calls to existing video conference providers. This arrangement is utilized in conjunction with the dynamic/permanent directory aspect of the system described above. In the event that a user of the system desires to communicate with another registered user, but that registered user does not respond to attempts to connect to him or her, or if the called registrant is not currently on the Internet as indicated by the merged dynamic and permanent directories, a voice mail or video mail message may be left for the called registrant. The message is stored by the SQL servers SQL1 or SQL2 at a voice mail / video mail messaging site.
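The message-store fallback just described reduces to a presence check against the merged directories. The sketch below is illustrative: the dictionaries stand in for the dynamic directory and the SQL-server-managed mass store, and place_call is an invented name.

```python
# Illustrative voice/video mail fallback: if the called registrant is
# off-line per the merged directories, the message is written to the
# mass store managed by the SQL servers.

dynamic_directory = {"bob": {"ip": "10.0.0.9"}}   # bob is currently on-line
message_store = {}                                 # callee -> stored messages

def place_call(caller, callee, message_if_unavailable):
    entry = dynamic_directory.get(callee)
    if entry is not None:
        return f"connecting {caller} to {callee} at {entry['ip']}"
    # Callee off-line or unreachable: store the message at the messaging site.
    message_store.setdefault(callee, []).append(
        {"from": caller, "media": message_if_unavailable})
    return f"{callee} off-line; message stored"

print(place_call("alice", "bob", b"...video..."))
print(place_call("alice", "carol", b"...video..."))
print(message_store)
```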
The invention has been described in conjunction with specific illustrative embodiments thereof. It will be understood that various modifications may be made without departing from the spirit or scope of the invention, and it is intended that the scope of protection afforded the invention disclosed herein include all such modifications and variations. It is intended that the illustrative embodiments not limit the scope of the invention in any way, and that the invention be limited in scope only in accordance with the claims appended hereto. It will also be understood that although various embodiments are primarily described in conjunction with the Internet, the principles are equally applicable to other distributed electronic networks, including modifications, enhancements or substitutes for the Internet as it exists today.

Claims
1. A method for facilitating communication over a distributed electronic network, comprising the steps of: receiving, at a web site, a communication request over a distributed electronic network from an electronic device, said communication request comprising a unique user identifier; and retrieving an Internet protocol (IP) address corresponding to an intended receiving device associated with a unique user identifier from a cross-index of unique user identifiers and IP addresses.
2. The method of claim 1, wherein said cross-index of unique user identifiers and IP addresses includes a static database portion for maintaining static data including a unique user identifier and a dynamic database portion for maintaining dynamic data including an IP address.
3. The method of claim 2, wherein said dynamic database portion maintains an on-line status indication for an intended receiving device, and said method further comprises the step of retrieving an on-line status indication of the intended receiving device from said dynamic database portion.
4. The method of claim 1, further comprising the step of routing the communication request to the intended receiving device based upon the IP address of the receiving device.
5. The method of claim 4, wherein said step of routing the communication request to the intended receiving device based upon its IP address comprises the step of sending the IP address of the intended receiving device to said electronic device from which said communication request was received.
6. The method of claim 2, wherein said dynamic database portion comprises a plurality of dynamic data services, and each unique user identifier is mapped to a particular dynamic data service in a consistent and repeatable fashion using the unique user identifier.
7. The method of claim 2, wherein the intended receiving device periodically communicates with the dynamic database portion to update dynamic information.
8. A system for facilitating communication over a distributed electronic network, comprising: a web server for receiving communication requests over the distributed electronic network from electronic devices, each of said communication requests comprising a unique user identifier; and a table accessible to said web server, said table comprising a list of unique user identifiers and a list of associated IP addresses; wherein, in response to said communication requests, said web server accesses said table to retrieve the IP address associated with a unique user identifier, and routes communication between the requesting electronic device and the intended receiving device based upon the retrieved IP address.
9. The system of claim 8, wherein said table comprises a static portion for storing static information including unique user identifiers and a separate dynamic portion for storing dynamic information including IP addresses.
10. The system of claim 9, wherein said static portion is stored at a first device and said dynamic portion is stored at a second device, and said first and second devices are networked with said web server.
11. The system of claim 8, wherein said web server routes communication between the requesting electronic device and the intended receiving device by sending the IP address of the intended receiving device to said electronic device from which said communication request was received.
12. An Internet device, comprising: a processor, memory coupled to said processor, a link to couple to the Internet; said memory containing a first identification code to identify said Internet device and a second identification code to identify a user of said Internet device, said processor causing said first and second identification codes to be transmitted to an Internet server.
13. The device of claim 12, wherein said first ID code corresponds to an IP address associated with said Internet device.
14. The device of claim 13, wherein said IP address associated with said device is subject to change.
15. An Internet communications portal, said portal comprising: a router coupled to the Internet; a director switch linked to said router; an enterprise switch linked to said director switch; a plurality of servers linked to said enterprise switch; a dynamic directory associated with said plurality of servers, said dynamic directory configured to provide a list of on-line users; and a permanent directory associated with a second one of said plurality of servers, said permanent directory configured to provide a list of registered users.
16. A distributed system for communicating over a packet-switched network, said system comprising: a plurality of Internet communication devices configured to allow communication over said packet-switched network by a plurality of users, each of said Internet communication devices including a processor, a memory coupled to said processor, and a data link coupled to the packet-switched network, wherein said memory contains a first identification code unique to said Internet communication device, and a second identification code unique to a user of said Internet communication device, said processor operable to transmit said first and second identification codes through said data link to the packet-switched network; an IP switching center coupled to said packet-switched network, said IP switching center including: a dynamic directory configured to maintain dynamic information for said users who are on-line with said Internet communication devices in accordance with said first and second identification codes transmitted by said Internet communication devices; a permanent directory configured to maintain static information for registered users of said Internet communication devices; a processor operable to receive said first and second identification codes transmitted from said Internet communication devices; wherein said Internet communication devices are configured to periodically communicate with said IP switching center to indicate on-line status.
17. The system of claim 16, wherein at least one of said Internet communication devices is configured to conform to the H.323 specification.
18. A distributed system for communicating over a packet-switched network, said system comprising: a plurality of Internet communication devices configured to allow communication over said packet-switched network by a plurality of users, each of said Internet communication devices including a processor, a memory coupled to said processor, and a data link permitting connection to the packet-switched network, wherein said memory contains a first identification code unique to said Internet communication device, and a second identification code unique to a user of said Internet communication device, said processor operable to transmit said first and second identification codes through said data link to the packet-switched network; an IP switching center coupled to said packet-switched network, said IP switching center comprising: a dynamic directory configured to provide a list of said users who are online with said Internet communication devices in accordance with said first and second identification codes transmitted by said Internet communication devices; a permanent directory configured to provide a list of registered users of said Internet communication devices; a processor operable to receive said first and second identification codes transmitted from said Internet communication devices; a data repository configured to store communications from a first user to a second user listed in said permanent directory; wherein said Internet communication devices are configured to periodically communicate with said IP switching center to indicate on-line status.
19. The system of claim 18, wherein said stored communication comprises a video message from said first user.
20. The system of claim 18, wherein said stored communication comprises an audio message from said first user.
21. The system of claim 18, wherein said stored communication comprises a text-based message from said first user.
22. A method for facilitating information exchange over a distributed electronic network, the method comprising the steps of: providing a plurality of dynamic data services, each dynamic data service having a dynamic data service identifier and a network connection address; assigning a particular dynamic data service to a recipient user using a unique user identifier, wherein a consistent and repeatable assignment is obtained for a given unique user identifier; directing a sender seeking to provide information to a recipient user to the particular dynamic data service assigned to the recipient user.
23. The method of claim 22, further comprising the step of periodically updating a current user connection address for a recipient user at the particular data service assigned to said recipient user.
24. The method of claim 22, further comprising the step of periodically updating the availability status for a recipient user at the particular data service assigned to said recipient user.
25. The method of claim 22, further comprising the step of permitting a sender to obtain a user connection address for a recipient user from the dynamic data service assigned to said recipient user.
26. An Internet communication device for communicating over a global computer network, said Internet communication device comprising: a processor; a memory coupled to said processor; a data link coupled to the global computer network; said memory containing a first identification code unique to the Internet communication device, and a second identification code unique to a user of said Internet communication device; said processor operable to transmit said first and second identification codes through said data link to at least one server accessible over the global computer network.
27. The Internet communication device of claim 26, wherein said second identification code is a twelve-digit integer having the form xyyyyyyyyyyy, where x represents a number from 2 to 9, and y represents a number from 0 to 9.
28. The Internet communication device of claim 27, wherein said first identification code corresponds to an IP address associated with said Internet communication device.
29. The Internet communication device of claim 28, wherein said at least one server includes a dynamic directory and a permanent directory.
30. The Internet communication device of claim 29, wherein said at least one server comprises a plurality of servers, said permanent directory is stored on a first server of said plurality of servers, and said dynamic directory is stored on at least one additional server of said plurality of servers.
31. A method for mapping a remote dynamic data service to a user, the user having a unique user identifier and a user connection address for accessing a network subject to intermittent change, the method comprising the steps of: providing a static data repository capable of storing persistent user information including a unique user identifier; providing a plurality of dynamic data services capable of storing dynamic information, each dynamic data service having a dynamic data service identifier and a network connection address; populating a plurality of positions in a positional cluster array with dynamic data service identifiers, wherein each position of said plurality of positions contains no more than one entry; converting a unique user identifier to a numerical value according to an algorithm that provides a distribution of numerical values over a prescribed numerical range, wherein application of the algorithm results in a repeatable numerical value for each particular unique user identifier; selecting a positional entry in the positional cluster array using the numerical value obtained from a unique user identifier, and identifying a particular dynamic data service by the dynamic data service identifier corresponding to the selected positional entry; associating a unique user identifier with the particular dynamic data service corresponding to that unique user identifier.
32. The method of claim 31, wherein said plurality of dynamic data services is included within a dynamic data cluster, and said dynamic data cluster further includes a dynamic executive service that maintains the positional cluster array.
33. The method of claim 31 , wherein the association between a unique user identifier with the particular dynamic data service corresponding to the unique user identifier is accomplished at the static data repository.
34. The method of claim 31 , further comprising the step of associating a current user connection address with the particular dynamic data service corresponding to that user's unique user identifier.
35. The method of claim 34, wherein the association between a current user connection address with the particular dynamic data service corresponding to that user's unique user identifier is accomplished at a dynamic data service.
36. A system for facilitating information exchange over a distributed electronic network, comprising: a distributed transaction processor capable of connecting to a distributed network and capable of sending and receiving information over the network, wherein said distributed transaction processor is operated by a user having a unique user identifier; a static data repository for storing static information including a unique user identifier; a gateway providing an interface between the static data repository and the network; a dynamic data cluster available to the network, the cluster including a plurality of data services capable of storing dynamic information;
wherein dynamic information associated with a user may be accessed with a unique user identifier.
37. The system of claim 36, wherein a particular data service of said plurality of data services is assigned to a user, and dynamic information associated with a user is stored at the particular data service assigned to the user.
38. The system of claim 37, wherein the availability of said distributed transaction processor to the network is controlled at the discretion of the user.
39. The system of claim 38, wherein the dynamic data cluster further includes an executive service capable of generating an addressing scheme.
40. A method for facilitating communication with a recipient user over a network, the method comprising the steps of: providing a recipient's unique user identifier to an executive data service that maintains a dynamic data service addressing scheme to learn which dynamic data service of a plurality of dynamic data services is associated with the recipient's unique user identifier; obtaining dynamic information for the recipient user from the dynamic data service associated with the recipient's unique user identifier.
41. The method of claim 40, further comprising the initial step of furnishing identifying information for a recipient user to a static data repository to be used in a query of the static data repository for the unique user identifier corresponding to the recipient user.
42. The method of claim 40, wherein the dynamic information includes a network connection address for the recipient user.
43. The method of claim 42, wherein the network connection address is an IP address.
44. A method for remotely storing dynamic information associated with a recipient user having a unique user identifier, the method comprising the steps of: providing a plurality of dynamic data services accessible to a network and capable of storing dynamic information; and assigning a particular dynamic data service from said plurality of dynamic data services to a unique user identifier in a consistent and repeatable fashion using the unique user identifier.
45. The method of claim 44, further comprising the step of permitting dynamic data corresponding to a unique user identifier to be stored at the particular dynamic data service assigned to the unique user identifier.
46. The method of claim 45, wherein said recipient user may connect to said network using an electronic device.
47. The method of claim 46, wherein the electronic device is intermittently available to said network, and the availability of the electronic device to said network is at the discretion of the recipient user.
48. The method of claim 44, wherein each dynamic data service has a dynamic data service identifier, and said assignment of a particular dynamic data service to a unique user utilizes a positional array populated with entries corresponding to dynamic data service identifiers.
49. The method of claim 44, wherein a spare dynamic data service is available, and the assignment of a particular dynamic data service to a unique user is performed according to an assignment scheme that permits a spare dynamic data service to be substituted for a formerly active dynamic data service.
50. The method of claim 48, wherein said plurality of dynamic data services is included within a dynamic data cluster, said dynamic data cluster further includes a dynamic executive service that maintains the positional cluster array, and the assignment scheme is maintained by the dynamic executive service.
51. A method for linking a plurality of dynamic data services, each having a dynamic data service identifier, to a plurality of records contained in a static data repository, the method comprising the steps of: providing a unique record identifier for each record of said plurality of records; using each unique record identifier to obtain a distribution of output values over a prescribed range, wherein a repeatable output value is obtained for a particular record; populating, at a dynamic executive service, a plurality of positions in a positional array with dynamic data service identifiers, wherein each position contains no more than one entry; selecting a positional entry in the positional array using an output value obtained from a unique record identifier, and identifying a particular dynamic data service by the dynamic data service identifier corresponding to the selected positional entry; and associating a unique record identifier with the particular dynamic data service selected from the positional array for the unique record identifier.
PCT/US2001/043745 2000-11-09 2001-11-09 Distributed dynamic data system and method WO2002039215A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002230461A AU2002230461A1 (en) 2000-11-09 2001-11-09 Distributed dynamic data system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US71056700A 2000-11-09 2000-11-09
US09/710,567 2000-11-09

Publications (3)

Publication Number Publication Date
WO2002039215A2 WO2002039215A2 (en) 2002-05-16
WO2002039215A3 WO2002039215A3 (en) 2003-01-23
WO2002039215A9 WO2002039215A9 (en) 2003-05-01

Family

ID=24854569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/043745 WO2002039215A2 (en) 2000-11-09 2001-11-09 Distributed dynamic data system and method

Country Status (2)

Country Link
AU (1) AU2002230461A1 (en)
WO (1) WO2002039215A2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008085708A2 (en) * 2006-12-21 2008-07-17 Boxicom, Inc. Data backup system and method associated therewith
CN101212425A (en) * 2006-12-28 2008-07-02 北京交通大学 Multi-service supporting integrated network construction method and routing device
US20120030343A1 (en) 2010-07-29 2012-02-02 Apple Inc. Dynamic migration within a network storage system
EP2503471A1 (en) * 2011-03-23 2012-09-26 Detector de Seguimiento y Transmision, S.A. Information management system and associated process
CN103207882B (en) * 2012-01-13 2016-12-07 阿里巴巴集团控股有限公司 Shop accesses data processing method and system
US11381506B1 * 2020-03-27 2022-07-05 Amazon Technologies, Inc. Adaptive load balancing for distributed systems

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6167432A (en) * 1996-02-29 2000-12-26 Webex Communications, Inc., Method for creating peer-to-peer connections over an interconnected network to facilitate conferencing among users
US6118785A (en) * 1998-04-07 2000-09-12 3Com Corporation Point-to-point protocol with a signaling channel
US6463471B1 (en) * 1998-12-28 2002-10-08 Intel Corporation Method and system for validating and distributing network presence information for peers of interest
US20020021307A1 (en) * 2000-04-24 2002-02-21 Steve Glenn Method and apparatus for utilizing online presence information

Also Published As

Publication number Publication date
WO2002039215A2 (en) 2002-05-16
WO2002039215A3 (en) 2003-01-23
AU2002230461A1 (en) 2002-05-21

Similar Documents

Publication Publication Date Title
JP4592184B2 (en) Method and apparatus for accessing device with static identifier and intermittently connected to network
US7525930B2 (en) System and method for user identity portability in communication systems
EP2319221B1 (en) Content distribution network
US6470389B1 (en) Hosting a network service on a cluster of servers using a single-address image
EP1177666B1 (en) A distributed system to intelligently establish sessions between anonymous users over various networks
US6594254B1 (en) Domain name server architecture for translating telephone number domain names into network protocol addresses
CA2190713C (en) Address resolution method and asynchronous transfer mode network system
EP1869868B1 (en) System, network device, method, and computer program product for active load balancing using clustered nodes as authoritative domain name servers
US6347085B2 (en) Method and apparatus for establishing communications between packet-switched and circuit-switched networks
CN100566328C (en) Network resolve method in the territory with the user distribution server, reach relevant telecommunication system
US20030140084A1 (en) System controlling use of a communication channel
EP0825748A2 (en) A method and apparatus for restricting access to private information in domain name systems by redirecting query requests
US6801952B2 (en) Method and devices for providing network services from several servers
US8583745B2 (en) Application platform
US20060013227A1 (en) Method and appliance for distributing data packets sent by a computer to a cluster system
WO2002039215A2 (en) Distributed dynamic data system and method
US20020065936A1 (en) Multi-platform application
WO2000069143A2 (en) System and method for facilitating communications over a distributed electronic network
JP2000293496A (en) Decentralizing device for service load of network
US20070288491A1 (en) Method and Apparatus for Configuring a Plurality of Server Systems Into Groups That Are Each Separately Accessible by Client Applications
US7664880B2 (en) Lightweight address for widely-distributed ADHOC multicast groups
US20020133572A1 (en) Apparatus and method for providing domain name services to mainframe resource mapping
US20030225910A1 (en) Host resolution for IP networks with NAT
JP2001111757A (en) Network facsimile system
JPH09149071A (en) Network management method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
COP Corrected version of pamphlet

Free format text: PAGES 1/28-28/28, DRAWINGS, REPLACED BY NEW PAGES 1/17-17/17; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP