US20080177830A1 - System and method for allocating resources on a network - Google Patents

System and method for allocating resources on a network

Info

Publication number
US20080177830A1
US20080177830A1 (application US12/057,517)
Authority
US
United States
Prior art keywords
plurality
resources
client
set
state
Prior art date
Legal status
Abandoned
Application number
US12/057,517
Inventor
Patrick Tam Vo
Vasu Vallabhaneni
Current Assignee
Vallabhaneni Vasu
Original Assignee
Patrick Tam Vo
Vasu Vallabhaneni
Priority date
Filing date
Publication date
Priority to US11/002,545 (granted as US7464165B2)
Application filed by Patrick Tam Vo and Vasu Vallabhaneni
Priority to US12/057,517 (published as US20080177830A1)
Publication of US20080177830A1
Application status: Abandoned

Classifications

    • H04L29/12066 Directories; name-to-address mapping involving standard directories and standard directory access protocols using Domain Name System [DNS]
    • H04L29/12283 Address allocation involving aspects of pools of addresses, e.g. assignment of different pools of addresses to different Dynamic Host Configuration Protocol [DHCP] servers
    • H04L61/1511 Directories; name-to-address mapping involving standard directories or standard directory access protocols using domain name system [DNS]
    • H04L61/2015 Address allocation of internet protocol [IP] addresses using the dynamic host configuration protocol [DHCP] or variants
    • H04L61/2061 Address allocation involving aspects of pools of addresses, e.g. assignment of different pools of addresses to different dynamic host configuration protocol [DHCP] servers
    • H04L67/1002 Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers, e.g. load balancing
    • H04L67/1008 Server selection in load balancing based on parameters of servers, e.g. available memory or workload
    • H04L67/1012 Server selection in load balancing based on compliance of requirements or conditions with available server resources
    • H04L67/1017 Server selection in load balancing based on a round robin mechanism

Abstract

A system and method for allocating resources on a network, including a server and at least one client. The resources are associated within a single set, such that the number of resources within the network can be easily incremented or decremented. Flags are associated with each resource, where the flags may be set to one of two states: a first state or a second state. When the server receives a connection request from a client, the server examines the flags associated with the resources to find a flag set to a second state. Upon finding a resource with a flag set to the second state, that resource is assigned to the client. Once the resource is assigned to a client, the associated flag is set to a first state and another flag associated with another resource is set to a second state.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 11/002,545, filed on Dec. 2, 2004, entitled “System and Method for Allocating Resources on a Network”. Applicants claim benefit of priority under 35 U.S.C. §120 to U.S. patent application Ser. No. 11/002,545, which is incorporated by reference herein in its entirety and for all purposes.
  • BACKGROUND OF THE INVENTION
  • 1. Technical Field
  • The present invention relates in general to data processing systems, and in particular, networked data processing systems. More particularly, the present invention relates to the management of a networked data processing system. Still more particularly, the present invention relates to the allocation of resources on a networked data processing system.
  • 2. Description of the Related Art
  • Dynamic Host Configuration Protocol (DHCP) is a protocol for assigning a dynamic internet protocol (IP) address to devices on a network. With dynamic addressing, a device can have a different IP address each time it connects to a network. In some systems, the device's IP address can even change while it is still connected. In any case, when a computer system (i.e., a client system) attaches itself to the network for the first time, it broadcasts a DHCPDISCOVER packet. A DHCP server on the local segment will see the broadcast and return a DHCPOFFER packet that contains an IP address. Other information may also be included, such as which router and domain name server (DNS server) the client system should utilize when connecting to the DHCP server. A router is a device that connects several local area networks (LANs) together. A DNS server is a computer system that contains a program that translates domain names into IP addresses. DNS servers allow users to utilize domain names instead of IP addresses when communicating with other computer systems. An example of a domain name is www.ibm.com.
  • The client may receive multiple DHCPOFFER packets from any number of servers, so it must choose between them, and broadcast a DHCPREQUEST packet that identifies the explicit server and lease offer that it chooses. A lease is the amount of time an IP address can be allocated to a client system. The decision regarding which lease offer to choose may be based on which offer has the longest lease or provides the most information that the client system needs for optimal operation. If there are more client systems than IP addresses, using shorter leases can keep the server from running out of IP addresses. If there are more addresses than client systems, a permanent lease or a fixed IP address may be assigned to each client system.
  • The chosen server will return a DHCPACK that tells the client system that the lease is finalized. The other servers will know that their respective offers were not accepted by the client system when they see the explicit DHCPREQUEST packet. If the offer is no longer valid for any reason (e.g., due to a time-out or another client being allocated the lease), the selected server must respond with a DHCPNAK message. The client system will respond with another DHCPDISCOVER packet, which starts the process over again.
  • Once the client system receives a DHCPACK, all ownership and maintenance of the lease is the responsibility of the client. For example, a client system may refuse an offer that is detailed in the DHCPACK message, and it is the client's responsibility to do so. Client systems test the address that has been offered to them by conducting an address resolution protocol (ARP) broadcast. If another node responds to the ARP broadcast, the client system should assume that the offered address is being utilized. The client system should reject the offer by sending a DHCPDECLINE message to the offering server, and should also send another DHCPDISCOVER packet, which begins the process again.
  • Once the client system has the lease, it must be renewed prior to the lease expiration through another DHCPREQUEST message. If a client system finishes utilizing a lease prior to its expiration time, the client system is supposed to send a DHCPRELEASE message to the server so that the lease can be made available to other nodes. If the server does not receive a response from the client system by the end of the lease, it indicates the lease is non-renewed, and makes it available for other client systems to utilize in future connection requests.
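The DHCPDISCOVER/DHCPOFFER/DHCPREQUEST/DHCPACK exchange described above can be pictured as a toy simulation. The class, method, and field names below are invented for illustration only; the real protocol encodes these messages as binary packets and includes many more fields and failure paths.

```python
# Simplified sketch of the DHCP handshake described above. A client broadcasts
# a discover, collects offers from all servers, requests the lease it prefers
# (here: the longest), and receives an acknowledgment, or None on a NAK.

def dhcp_handshake(client_id, servers):
    """Walk one client through DISCOVER -> OFFER -> REQUEST -> ACK."""
    # DHCPDISCOVER: every server on the segment with a free address may answer.
    offers = [srv.make_offer(client_id) for srv in servers if srv.has_free_address()]
    if not offers:
        return None
    # DHCPREQUEST: the client names one chosen offer explicitly, so the other
    # servers know their offers were not accepted.
    chosen = max(offers, key=lambda o: o["lease_secs"])
    return chosen["server"].finalize(client_id, chosen)  # DHCPACK (None = DHCPNAK)

class ToyDhcpServer:
    def __init__(self, name, pool, lease_secs):
        self.name, self.pool, self.lease_secs = name, list(pool), lease_secs
        self.leases = {}  # client_id -> assigned IP address

    def has_free_address(self):
        return bool(self.pool)

    def make_offer(self, client_id):
        return {"server": self, "ip": self.pool[0], "lease_secs": self.lease_secs}

    def finalize(self, client_id, offer):
        if offer["ip"] not in self.pool:  # offer no longer valid: DHCPNAK
            return None
        self.pool.remove(offer["ip"])
        self.leases[client_id] = offer["ip"]
        return offer["ip"]
```

In this sketch a client choosing between two servers takes the address from the server offering the longer lease, matching the selection criterion mentioned above.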
  • Therefore, dynamic addressing simplifies network administration because the software keeps track of IP addresses rather than requiring an administrator to manage the task. This means that a new computer system can be added to a network without having to manually assign a unique IP address to a new system.
  • To assign IP addresses to the client systems, a DHCP server utilizes a configuration file. Stored in the configuration file is a range of IP addresses for each sub-network. This configuration file is utilized to construct a database that is referenced each time a DHCP server assigns an IP address to a client system. Associated with each range of IP addresses are options, such as a router or a DNS server. Therefore, when the DHCP server assigns an IP address from a particular range of addresses to a client system, it also specifies which router and DNS server the client should utilize. Depending on the number of active client systems in a sub-network, there may be times when a particular router and/or DNS server is overburdened with network traffic. When that occurs, the system administrator may want to load-balance to the network by associating a new router and/or DNS server with the range of IP addresses. Traditionally, the system administrator would have to modify the configuration file with the location information of the new router and/or DNS server.
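As a rough illustration of the association described above, the database built from the configuration file can be pictured as a mapping from each subnet's address range to the options handed out with it. The layout and field names below are invented; real configuration-file syntax varies by DHCP server implementation.

```python
# Hypothetical in-memory view of a DHCP configuration database: each subnet's
# address range carries the options (router, DNS server) assigned with it.
# Changing the router for a range means editing this structure, which in the
# traditional scheme requires modifying the file and refreshing the server.

config = {
    "192.168.1.0/24": {
        "range": ("192.168.1.10", "192.168.1.200"),
        "options": {"router": "192.168.1.1", "dns": "192.168.1.2"},
    },
}

def options_for(subnet, cfg=config):
    """Return the options that accompany any address assigned from this subnet."""
    return cfg[subnet]["options"]
```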
  • As is well-known in the art, each time a configuration file is modified, the DHCP server has to be refreshed. During the time the DHCP server is refreshing, the DHCP server is off-line and cannot respond to any IP address requests. Also, while client systems that request IP addresses after the DHCP server has been refreshed will utilize a new router and/or DNS server, the client systems that were assigned IP addresses before the DHCP server was refreshed will continue to use the overburdened router and/or DNS server.
  • U.S. Patent Application Publication Number 2003/0163341, “Apparatus and Method of Dynamically Updating Dynamic Host Configuration Protocol (DHCP) Options,” filed by IBM Corporation, the assignee of the present application, deals with this problem by storing the options in the configuration file in a special stanza that includes dynamic options and a frequency at which the options are to be updated. The options typically include a router and a DNS server that the client systems utilize when connecting to the network. Each time the options are updated, a different router and/or DNS server is utilized for subsequent client system transactions. The prior router and/or DNS server are removed from the dynamic stanza and not utilized in future client system transactions. While the referenced application solves the problem of requiring the DHCP server to be taken off-line each time the system administrator edits the configuration file by inserting the new options, the options are only updated at a preset time interval. This means that the system administrator must carefully tailor the preset time interval for updating the options so that no individual option is overburdened. For example, if the system administrator sets a lengthy preset time interval, the options would not be updated frequently enough to prevent each option from becoming overburdened with network traffic.
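The prior-art scheme just described rotates options on a fixed timer regardless of actual load. A toy model (stanza layout and names invented here for illustration) makes the limitation concrete: which option is live depends only on elapsed time, not on how burdened each option currently is.

```python
# Sketch of a timer-driven dynamic option stanza: options rotate once per
# fixed interval. The structure and field names are invented for illustration.

dynamic_stanza = {
    "update_interval_secs": 300,          # preset by the administrator
    "routers": ["10.0.0.1", "10.0.0.2"],  # rotated through on each update
    "dns_servers": ["10.0.1.1", "10.0.1.2"],
}

def option_index(elapsed_secs, stanza=dynamic_stanza):
    """Which router/DNS entry is live after elapsed_secs, under timer rotation.

    Note what is missing: no measure of load. A burst of client connections
    inside one interval all lands on the same router and DNS server.
    """
    ticks = elapsed_secs // stanza["update_interval_secs"]
    return ticks % len(stanza["routers"])
```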
  • Also well-known in the art is load-balancing of DHCP options via the utilization of virtual subnets. In the past, system administrators were required to divide a subnet (i.e., the IP addresses the DHCP server may assign to incoming client systems) into virtual subnets and allocate the options among the virtual subnets. Therefore, whenever a client system was assigned an IP address by the DHCP server, the DHCP server also assigned the options related to that particular virtual subnet. However, if more options are added to the network, the system administrator must manually distribute the new options among the virtual subnets or redefine the ranges of the virtual subnets.
  • Therefore, there is a need for dynamically load-balancing DHCP network options without the need for refreshing the DHCP server and without the utilization of virtual subnets.
  • SUMMARY OF THE INVENTION
  • A system and method for allocating resources on a network, including a server and at least one client. The resources are associated within a single set, such that the number of resources within the network can be easily incremented or decremented. Flags are associated with each resource, where the flags may be set to one of two states: a first state or a second state. When the server receives a connection request from a client, the server examines the flags associated with the resources to find a flag set to a second state. Upon finding a resource with a flag set to the second state, that resource is assigned to the client. Once the resource is assigned to a client, the associated flag is set to a first state and another flag associated with another resource is set to a second state. Therefore, the system and method allows for network resources to be distributed equally among connecting clients without an explicit allocation of network resources and connection addresses by a systems administrator.
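The flag rotation described in the summary can be sketched in a few lines. The names FIRST and SECOND, the ResourceSet class, and its methods are invented for illustration; a minimal sketch under the assumption that exactly one flag is in the second (available) state at any time.

```python
# Minimal sketch of the two-state flag scheme: assigning the resource whose
# flag is in the second state flips that flag to the first state and moves the
# second-state flag to the next resource, so successive clients receive
# resources round-robin without any explicit per-client allocation.

FIRST, SECOND = "first", "second"

class ResourceSet:
    def __init__(self, resources):
        self.resources = list(resources)
        # One flag per resource; only the first resource starts in the second state.
        self.flags = [SECOND] + [FIRST] * (len(self.resources) - 1)

    def assign(self):
        i = self.flags.index(SECOND)                     # find the available flag
        self.flags[i] = FIRST                            # mark this resource used
        self.flags[(i + 1) % len(self.flags)] = SECOND   # advance to the next one
        return self.resources[i]
```

Because resources live in a single set, appending or removing an entry from the list is all that is needed to increment or decrement the number of resources, as the summary notes.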
  • These and other features and advantages of the present invention will be described in, or will become apparent to those of ordinary skill in the art in view of the following detailed description of the preferred embodiments.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as the preferred mode of use, further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
  • FIG. 1 is an exemplary block diagram depicting the data processing system in accordance with a preferred embodiment of the present invention;
  • FIG. 2 is an exemplary block diagram illustrating a data processing system that may be implemented as a server in accordance with a preferred embodiment of the present invention;
  • FIG. 3 is an exemplary block diagram depicting a data processing system that may be implemented as a client in accordance with a preferred embodiment of the present invention;
  • FIG. 4A is a high-level flowchart illustrating the initialization state of the method of allocating at least one resource to a client by a server in accordance with a preferred embodiment of the present invention;
  • FIG. 4B is a high-level flowchart illustrating the runtime state of the method of allocating at least one resource to a client by a server in accordance with a preferred embodiment of the present invention;
  • FIG. 5A is a high-level flowchart illustrating the initialization state of the method of allocating at least one resource to a client by a server in accordance with another preferred embodiment of the present invention;
  • FIG. 5B is a high-level flowchart illustrating the runtime state of the method of allocating at least one resource to a client by a server in accordance with another preferred embodiment of the present invention; and
  • FIG. 6 is a pseudocode representation of the load balance container in accordance with a preferred embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • This invention is described in a preferred embodiment in the following description with reference to the figures. While the invention is described in terms of the best mode for achieving the inventors' objectives, it will be appreciated by those skilled in the art that variations may be accomplished in view of these teachings without deviating from the spirit or scope of the present invention.
  • Referring now to the figures, and in particular, with reference to FIG. 1, there is depicted an exemplary block diagram of a networked data processing system 100 in which the present invention may be implemented. Data processing system 100 includes a network 102. Network 102 is a collection of computers that communicate with each other through a system of interconnects. Examples of network 102 include wide-area networks (WANs), local-area networks (LANs), and the Internet. Data processing system 100 also includes a server 104, storage 106, and a collection of clients 108, 110, and 112 that periodically seek connections to server 104 via network 102. Data processing system 100 may include additional servers, clients, and peripherals not depicted in FIG. 1.
  • Referring to FIG. 2, there is illustrated a block diagram of a data processing system 200 that may be implemented as a server, such as server 104 in FIG. 1, in accordance with a preferred embodiment of the present invention. Data processing system 200 may be a symmetric multiprocessor (SMP) system including a collection of processors 202 and 204, coupled to system bus 206. A single processor system may also be utilized. Also coupled to system bus 206 is memory controller/cache 208, which may be utilized as an interface to local memory 209. I/O bus bridge 210 is coupled to system bus 206 and provides an interface to I/O bus 212. Memory controller/cache 208 and I/O bus bridge 210 may be integrated as depicted, either indirectly or directly.
  • Peripheral component interconnect (PCI) bus bridge 214 coupled to I/O bus 212 provides an interface to PCI local bus 216. Communications links to network computers 108, 110, 112 in FIG. 1 may be provided through modem 218 and network adapter 220 coupled to PCI local bus 216 through add-in boards. Additional PCI bus bridges 222 and 224 provide interfaces for additional PCI local buses 226 and 228, from which additional network adapters or modems may be supported. Therefore, data processing system 200 allows connections to multiple network computers. A memory-mapped graphics adapter 230 and hard disk 232 may also be coupled to I/O bus as depicted, either directly or indirectly.
  • Those having ordinary skill in this art will note that the hardware depicted in FIG. 2 may vary depending on the configuration of data processing system 200. For example, other peripheral devices, such as additional disk drives or optical drives, may be utilized in addition to or in the place of the hardware depicted. The example depicted in FIG. 2 is not meant to imply architectural limitations with respect to the present invention.
  • With reference to FIG. 3, a block diagram illustrating another data processing system is depicted in which the present invention may be implemented. Data processing system 300 is an example of a client computer. Data processing system 300 preferably employs a peripheral component interconnect (PCI) local bus architecture. Of course, those having ordinary skill in this art will appreciate that even though the depicted example employs a PCI bus, other bus architectures such as Accelerated Graphics Port (AGP) and Industry Standard Architecture (ISA) may also be utilized in the implementation. Processor 302 and main memory 304 are coupled to PCI local bus 306 via PCI bridge 308. PCI bridge 308 also may include an integrated memory controller and cache memory for processor 302. Additional connections to PCI local bus 306 may be made through direct component interconnection or via add-on boards. In the illustrated example, local area network (LAN) adapter 310, SCSI host bus adapter 312, and expansion bus interface 314 are coupled to PCI local bus 306 by direct component connection. In contrast, audio adapter 316, graphics adapter 318 and audio/video adapter 319 are coupled to PCI local bus 306 by add-in boards inserted into expansion slots. Expansion bus interface 314 provides a connection for a keyboard and mouse adapter 320, modem 322, and additional memory 324. Small computer system interface (SCSI) host bus adapter 312 provides a connection for hard disk drive 326, tape drive 328, and CD-ROM drive 330. Typical PCI local bus implementations will support three or four PCI expansion slots or add-in connectors.
  • Preferably, an operating system runs on processor 302 and is utilized to coordinate and provide control of various components within data processing system 300. The operating system may be a commercially available operating system, such as Windows XP, which is available from Microsoft Corporation. Instructions for the operating system and applications are located on storage drives, such as hard disk drive 326, and may be loaded into main memory 304 for execution by processor 302.
  • Those having ordinary skill in this art will appreciate that the hardware in FIG. 3 may vary depending on the implementation. Other internal hardware or peripheral devices, such as non-volatile memory and optical disk drives may be utilized in addition to or in place of the hardware illustrated in FIG. 3. Also, the processes of the present invention may be applied to a multiprocessor data processing system.
  • As another example, data processing system 300 may be a stand-alone system configured to be bootable without relying on some type of network communication interface, whether or not data processing system 300 includes some type of network communication interface. As a further example, data processing system 300 may be a Personal Digital Assistant (PDA) device, which is configured with ROM and/or flash ROM in order to provide non-volatile memory for storing operating system files and/or user-generated data.
  • The depicted example in FIG. 3 and above-described examples are not meant to imply architectural limitations. For example, data processing system 300 may also be a notebook computer or hand held computer in addition to taking the form of a PDA. Data processing system 300 may also be a kiosk or a web appliance.
  • The present invention involves a system and method for allocating multiple resources on a network to clients within that network. The invention may be local to client systems 108, 110, and 112 of FIG. 1 or to server 104 or to both server 104 and client systems 108, 110, and 112. Also, the present invention may be implemented on any storage medium, such as floppy disks, compact disks, hard disks, RAM, ROM, flash-ROM, and other computer-readable mediums.
  • Referring now to FIG. 4A, there is illustrated a high-level flowchart diagram of the initialization state of one preferred embodiment of the present invention. The process begins at step 700, and thereafter proceeds to step 702, which depicts the configuration of the server to assign connection addresses out of a pool of connection addresses. Then, the process continues to step 704, which illustrates the association of a respective flag, fill_count, and maximum_fill_count with each resource on the network. Next, step 706 depicts a determination of the value of num_clients, the number of clients currently utilizing each resource on the network. As illustrated in step 707, the value of fill_count for each resource is set equal to num_clients. Then, the process continues to step 708, which depicts a determination made of whether or not there are any more resources on the network to be initialized.
  • If it is determined that there are more resources to be initialized, the process proceeds to step 712, which illustrates moving to the next resource on the list. Then, as depicted in step 714, the flag associated with the resource is set to a clean state. The process then returns to prior block 708 and proceeds in an iterative fashion. If, however, it is determined that there are no more resources to be initialized, the process continues to step 710, the beginning of the runtime state depicted in FIG. 4B.
  • With reference to FIG. 4B, there is depicted a high-level flowchart diagram of the runtime state of the method according to one preferred embodiment of the present invention. The process begins at step 710, and thereafter proceeds to step 714, which depicts the server waiting for a connection request from a client. Then, the process continues to step 716, which illustrates the server receiving a connection request, parsing the connection request, and determining the type and number of resources needed to fill the connection request. Next, step 718 depicts a determination made as to whether or not any more resources are needed to fill the connection request. If it is determined that no more resources are needed, the connection request is considered filled, the process continues to prior step 714, and the process proceeds in an iterative fashion.
  • If more resources are needed to fill the connection request, step 720 illustrates a determination made of whether or not the current balance policy is set to fill. This determination is made by examining the balance_policy field 622 of FIG. 6. If it is determined that the current balance policy is set to fill, the process proceeds to step 730, which depicts a determination made of whether or not the fill_count of the resource is less than maximum_fill_count. If it is determined that the fill_count of the resource is not less than maximum_fill_count, the process proceeds to step 736, where the next resource in the list is examined. Then, the process continues to step 730 and proceeds in an iterative fashion.
  • If it is determined that the fill_count of the resource is less than maximum_fill_count, the process then proceeds to step 732, which depicts the assigning of the resource to the incoming client. Then, the fill_count is incremented, as illustrated in 734. Next, the process proceeds to step 736, which illustrates a determination of whether or not fill_count is greater than or equal to maximum_fill_count. If it is determined that fill_count is not greater than or equal to maximum_fill_count, the process returns to prior step 718 and proceeds in an iterative fashion. If fill_count is determined to be greater than or equal to maximum_fill_count, the process continues to step 738, which depicts the resetting of fill_count. Then, the process continues to step 740, which illustrates moving to the next resource on the list. Next, the process returns to prior step 730 and proceeds in an iterative fashion.
  • Returning to prior step 720, if it is determined that the current balance policy is not set to fill, the process then continues to step 722, which depicts the determination that the current balance policy is set to rotate. The process continues to step 724, which illustrates the assignment of the first resource with a flag set to a clean state to the client. Then, the process proceeds to step 726, which depicts setting the flag associated with the resource to a dirty state. Next, step 728 indicates setting the flag associated to the next resource on the list to a clean state. The procedure then returns to prior step 718 and continues in an iterative fashion.
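The fill and rotate branches walked through above can be condensed into a short sketch. The Balancer class and its method names are invented for illustration, and the fill branch is simplified: rather than re-checking fill_count before each assignment (step 730), it resets the count and advances immediately after the count reaches maximum_fill_count (steps 738 and 740), which preserves the same assignment order.

```python
# Sketch of the two balance policies from FIG. 4B. Under "rotate" each request
# moves the clean flag to the next resource; under "fill" a resource keeps
# taking clients until its fill_count reaches maximum_fill_count, then the
# server resets the count and moves on to the next resource on the list.

class Balancer:
    def __init__(self, resources, policy, maximum_fill_count=2):
        self.resources = list(resources)
        self.policy = policy                          # "fill" or "rotate"
        self.maximum_fill_count = maximum_fill_count
        self.fill_count = [0] * len(self.resources)
        self.current = 0                              # resource flagged clean

    def _advance(self):
        self.current = (self.current + 1) % len(self.resources)

    def assign(self):
        """Pick a resource for one incoming client connection request."""
        if self.policy == "rotate":
            # Hand out the clean resource, then move the clean flag onward.
            r = self.resources[self.current]
            self._advance()
            return r
        # "fill" policy: assign the current resource and bump its fill_count;
        # on reaching maximum_fill_count, reset the count and move on.
        r = self.resources[self.current]
        self.fill_count[self.current] += 1
        if self.fill_count[self.current] >= self.maximum_fill_count:
            self.fill_count[self.current] = 0
            self._advance()
        return r
```

With two resources and maximum_fill_count of 2, fill yields the pattern r1, r1, r2, r2, r1, ... while rotate alternates one client per resource.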
  • In the preferred embodiment described above, the resources preferably are continually allocated to incoming clients no matter how burdened the resources on the network become. The traffic is balanced between the resources by the fill or rotate balance policies. Those having ordinary skill in the art should appreciate that maximum_fill_count can be considered an absolute limit on the number of clients that may utilize a particular resource at one time. In another preferred embodiment of the present invention, the maximum_fill_count absolute limit holds all further client requests until the number of clients currently utilizing particular resources decreases due to expired or non-renewed connection leases.
  • Referring to FIG. 5A, there is illustrated a high-level flowchart diagram of the initialization state of another preferred embodiment of the present invention. The process begins at step 400 and thereafter proceeds to step 402, which depicts the configuration of a server to assign connection addresses out of a pool of connection addresses. The process then continues to step 404, which illustrates the association of a flag, a fill_count, and a maximum_fill_count with each resource on the network. Then, the process proceeds to step 408, which depicts a determination of the number of clients currently utilizing each resource on the network and assigning that number to the value of fill_count for each of the resources.
  • The procedure then continues to step 410, which illustrates a determination made of whether or not there are any more resources to be initialized. If it is determined that there are no more resources to be initialized, the process moves to step 500, which illustrates the beginning of the runtime state, as depicted in FIG. 5B. If it is determined that there are more resources to be initialized, the process continues to step 412, which depicts moving to the next resource on the list. Then, the procedure transitions to step 414, which illustrates a determination made as to whether or not fill_count is less than maximum_fill_count. If fill_count is less than maximum_fill_count, the process continues to step 422, which indicates that the flag associated with the resource is set to a clean state. The process then returns to prior step 410 and proceeds in an iterative fashion.
  • Returning to step 414, if the determination is made that fill_count is not less than maximum_fill_count, the process continues to step 416, which depicts a determination made of whether or not fill_count is equal to maximum_fill_count. If it is determined that fill_count is equal to maximum_fill_count, the process proceeds to step 420, which depicts the removal of this resource from the single set. The process then returns to prior step 410 and proceeds in an iterative fashion. If it is determined that fill_count is not equal to maximum_fill_count, the process continues to step 418, which illustrates the generation of an error message. The error message preferably conveys to the systems administrator that the number of clients currently utilizing the particular resource has exceeded the value of maximum_fill_count and that the resource will be removed from the single set. The process then continues to prior step 420 and proceeds in an iterative fashion.
  • Those with ordinary skill in this art will appreciate that instead of removing the resource from the single set when a resource's maximum_fill_count has been reached, the resource may also be marked by the server in a way as to not allow client allocation to the resource until client utilization of the resource is reduced through the signing off of clients or the non-renewal of current client leases.
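The initialization pass of FIG. 5A (steps 402 through 422) might be sketched as follows; the function name, the dictionary fields, and the administrator log are assumptions for illustration only:

```python
# Sketch of the FIG. 5A initialization state.
CLEAN = "clean"

def initialize(resources, current_clients, maximum_fill_count, log):
    """Build the 'single set' of allocatable resources. current_clients
    stands in for the server's count of clients already using each
    resource (step 408)."""
    single_set = []
    for name in resources:
        fill_count = current_clients.get(name, 0)        # step 408
        if fill_count < maximum_fill_count:              # step 414
            single_set.append({"name": name, "flag": CLEAN,
                               "fill_count": fill_count})  # step 422
        elif fill_count == maximum_fill_count:           # step 416
            pass                                         # step 420: leave out of the set
        else:
            # step 418: warn the administrator, then exclude the resource
            log.append(f"{name}: clients ({fill_count}) exceed maximum_fill_count")
    return single_set
```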
  • With reference to FIG. 5B, there is depicted a high-level flowchart diagram of the runtime state of another preferred embodiment of the present invention. The process continues from FIG. 5A at step 500 and thereafter proceeds to step 502, which depicts the server waiting for a connection request from a client. The procedure then proceeds to step 504, which illustrates the server receiving the connection request, parsing it, and determining the type and number of resources needed to fill the connection request from the client. Then, the process continues to step 506, which depicts a determination made of whether or not there are any resources of the type requested with a flag set to a clean state. If a determination is made that there are no available resources of the type requested with a flag set to a clean state, the procedure moves to step 518, which illustrates a waiting state. During the waiting state, the server waits for some client leases to be freed, and the fill_counts of the respective resources are decremented accordingly. If a fill_count falls below maximum_fill_count, the associated resource is added back to the list and its flag is set to clean. Alternatively, if there are no resources of the type requested with the flag set to a clean state, the fill_count of all the resources of the type requested is set to zero, as depicted in step 518. This allows the resources to continue to be assigned to incoming clients, despite the fact that the resources have reached the pre-determined maximum_fill_count. The administrator may allow overburdened resources to be assigned to incoming clients because the network is configured to service incoming clients, no matter how burdened the resources have become. The process then moves back to prior step 506 and continues in an iterative fashion.
  • Returning to step 506, if it is determined that there are resources of the type requested with the associated flag set to a clean state, the process moves to step 508, which depicts the assignment of the resource to the client. Then, the process continues to step 510, which illustrates the incrementing of fill_count to reflect the assignment of the client to the resource. Then, as depicted in step 512, a determination is made of whether or not fill_count is less than maximum_fill_count for the particular resource. If the fill_count is determined to be less than maximum_fill_count for the particular resource, the process continues to step 520, which illustrates the setting of the associated flag to a dirty state. The process then proceeds to step 522, which depicts moving to the next resource on the list and setting the associated flag to clean. Then, the process moves to step 524, which illustrates a determination of whether or not any more resources are to be allocated in the present connection request. If there are more resources to be allocated, the process returns to prior step 506 and proceeds in an iterative fashion. If there are no more resources to be allocated, the process returns to prior step 502 and continues in an iterative fashion.
  • Returning to step 512, if it is determined that fill_count is not less than maximum_fill_count, the process continues to step 514, which illustrates a determination made of whether or not fill_count is equal to maximum_fill_count. If fill_count is equal to maximum_fill_count, the process continues to step 516, which depicts the removal of the resource from the list. The procedure then returns to prior step 522 and continues in an iterative fashion. If, however, fill_count is not equal to maximum_fill_count, the process continues to step 519, which illustrates the generation of an error message. This error message preferably conveys to the system administrator that the particular resource has become overburdened, since fill_count has exceeded maximum_fill_count, and that the resource will be removed from the list. The process then proceeds to prior step 516 and continues in an iterative fashion.
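One runtime assignment of FIG. 5B (steps 506 through 522) could be sketched as below. The dictionary layout, the log list, and the function name are hypothetical, chosen only to mirror the flowchart:

```python
# Sketch of one assignment in the FIG. 5B runtime state.
CLEAN, DIRTY = "clean", "dirty"

def assign(resources, log):
    """resources is the ordered list for one resource type; each entry is a
    dict with name, flag, fill_count, and maximum_fill_count. Returns the
    assigned resource, or None when no flag is in the clean state."""
    for i, r in enumerate(resources):
        if r["flag"] != CLEAN:                             # step 506
            continue
        r["fill_count"] += 1                               # steps 508-510
        nxt = resources[(i + 1) % len(resources)]          # next on the list
        if r["fill_count"] < r["maximum_fill_count"]:      # step 512
            r["flag"] = DIRTY                              # step 520
        else:
            if r["fill_count"] > r["maximum_fill_count"]:  # step 514
                log.append(f"{r['name']} is overburdened")  # step 519
            resources.remove(r)                            # step 516
        if nxt is not r:
            nxt["flag"] = CLEAN                            # step 522
        return r
    return None            # step 518: no clean resource; the server waits
```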
  • Now referring to FIG. 6, there is illustrated a pseudocode representation of the load balance container 600 according to a preferred embodiment of the present invention. Heading 602 indicates to the DHCP server the pool of IP addresses that the DHCP server may allocate to connecting clients. In this example, the server may allocate any address in the range 192.168.1.0-192.168.1.200, or 200 unique IP addresses. Exclude statements 604, 606, 608, and 610 are present to allow the options corresponding to those addresses to be placed in balance_options container 612.
  • DNS servers 614 and 616 are DNS servers that are available for allocation. In a preferred embodiment of the present invention, the server examines balance_policy 622 to determine how to assign incoming clients to the various available resources. If balance_policy 622 is set to fill, clients are allocated to each resource in the list until the fill_count associated with the resource has reached maximum_fill_count; then the cycle is repeated with the next resource on the list. If balance_policy 622 is set to rotate, clients are balanced across the resources on the list in order of assignment. For example, assume that the balance_options container 612 contains only three options: O1, O2, and O3. The first incoming client would be assigned to O1, the second to O2, the third to O3, and the fourth again to O1.
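The O1/O2/O3 rotate example above is ordinary round-robin assignment, which can be sketched with itertools.cycle (the option names are the ones assumed in the text):

```python
# Round-robin assignment of three options to four incoming clients.
import itertools

options = ["O1", "O2", "O3"]
rotation = itertools.cycle(options)
assignments = [next(rotation) for _ in range(4)]  # four incoming clients
# assignments is now ["O1", "O2", "O3", "O1"]
```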
  • In another preferred embodiment of the present invention, when a first client sends a connection request, DNS server 614 is utilized. Then, as detailed in FIG. 5B, the flag associated with DNS server 614 is set to a first (dirty) state, so that DNS server 616 will be allocated when a second client sends a connection request to the DHCP server. The same scheme is applied to default gateways 618 and 620. This algorithm allows load balancing of the DHCP options until the fill_count of each DHCP option reaches the maximum_fill_count 624 of one hundred clients. Finally, balance_policy 622 set to rotate indicates to the DHCP server that the options on the network should be load balanced, that is, allocated to client systems in a round-robin fashion.
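The container of FIG. 6 could be rendered as plain data along the following lines. The pool bounds, policy, and maximum_fill_count come from the text; the DNS server and gateway addresses are invented placeholders, since FIG. 6 is not reproduced here:

```python
# Hypothetical data rendering of load balance container 600 of FIG. 6.
load_balance = {
    "address_pool": ("192.168.1.0", "192.168.1.200"),      # heading 602
    "balance_options": {                                   # container 612
        "dns_servers": ["10.0.0.1", "10.0.0.2"],           # items 614 and 616
        "default_gateways": ["10.0.0.254", "10.0.0.253"],  # items 618 and 620
    },
    "balance_policy": "rotate",                            # field 622
    "maximum_fill_count": 100,                             # field 624
}
```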
  • While load_balance container 612 has been particularly shown and described as a computer program product residing in the memory of at least one component of the networked data processing system, those skilled in the art will appreciate that load_balance container 612 may be implemented by physical circuitry residing on one or more components of the networked data processing system. Also, according to a preferred embodiment of the present invention, load_balance container 612 is implemented in volatile memory of the networked data processing system. However, load_balance container 612 may also be implemented in non-volatile memory, optical disk storage, hard disk drives, floppy drives, or any other type of volatile or non-volatile storage device.
  • While this invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention. It is also important to note that although the present invention has been described in the context of a fully functional computer system, those skilled in the art will appreciate that the mechanisms of the present invention are capable of being distributed as a program product in a variety of forms, and that the present invention applies equally regardless of the particular type of signal bearing media utilized to actually carry out the distribution. Examples of signal bearing media include, without limitation, recordable type media such as floppy disks or CD ROMs and transmission type media such as analog or digital communication links.

Claims (23)

1. A method for load balancing of a plurality of return options on a network without setting up virtual subnets, wherein said network includes at least a server, at least a client, and a plurality of return options, said method comprising:
allocating a plurality of sets in a container, wherein each of said plurality of sets includes a plurality of return options of a same type; and
assigning a set of return options to a client on a network, wherein said set of return options includes at least one option from each of said plurality of sets.
2. The method according to claim 1, said assigning further includes:
wherein each of said plurality of return options includes a value representing a maximum number of clients that may be assigned to said each of said return options:
repeatedly assigning a first return option out of each of said plurality of sets to a requesting client until said value is reached; and
in response to reaching said value, assigning another return option out of each of said plurality of sets.
3. The method according to claim 1, said assigning further includes:
assigning each of said plurality of return options of the same type out of said plurality of sets to a requesting client in a round-robin fashion.
4. The method according to claim 1, further comprising:
wherein each of said plurality of return options includes:
a first value representing a maximum number of clients that may be assigned to said each of said plurality of return options; and
a second value representing a number of clients currently assigned to each of said plurality of return options; and
in response to determining that a first set of said plurality of sets includes a return option wherein said second value exceeds said first value, setting said second value corresponding to each of said plurality of return options of a same type included in said first set to zero.
5. A system for load balancing of a plurality of return options on a network without setting up virtual subnets, wherein said network includes at least a server and at least a client, said system comprising:
a container, including a plurality of sets, wherein each of said plurality of sets includes a plurality of return options of the same type; and
means for assigning a set of return options to a client on a network, wherein said set of return options includes at least one option from each of said plurality of sets.
6. The system according to claim 5, said means for assigning further includes:
wherein each of said return options includes a value representing a maximum number of clients that may be assigned to said each of said return options:
means for repeatedly assigning a first return option out of each of said plurality of sets to a requesting client until said value is reached; and
in response to reaching said value, a means for assigning another return option out of each of said plurality of sets.
7. The system according to claim 5, said means for assigning further includes:
wherein each of said return options includes a value representing a maximum number of clients that may be assigned to said each of said return options:
means for assigning each of said return options out of said plurality of sets to a requesting client in a round-robin fashion.
8. The system according to claim 5, said means for assigning further includes:
wherein each of said plurality of return options includes:
a first value representing a maximum number of clients that may be assigned to said each of said plurality of return options; and
a second value representing a number of clients currently assigned to each of said plurality of return options; and
in response to determining that a first set of said plurality of sets includes a return option wherein said second value exceeds said first value, means for setting said second value corresponding to each of said plurality of return options of a same type included in said first set to zero.
9. A method for allocating a plurality of resources on a network, which includes a server and at least one client, said method comprising:
associating said plurality of resources within a single set, such that said plurality of resources are allocated only from said single set, such that the number of said plurality of resources within said network may be more easily incremented or decremented;
associating a respective one of a plurality of flags with each of said plurality of resources, wherein each of said plurality of flags may be set to one of two states, a first state or a second state;
in response to said server receiving a connection request from said client, examining said plurality of flags to identify a first flag set to said second state;
in response to identifying said first flag set to said second state, assigning an associated one of said plurality of resources to said client; and
in response to assigning said associated one of said plurality of resources to said client, setting said first flag to said first state and setting a second flag associated with another one of said plurality of resources to said second state.
10. The method in claim 9, wherein said associating a respective one of a plurality of flags, further comprises:
associating a respective one of a plurality of counters with each of said plurality of resources, wherein each of said plurality of counters represents a number of clients currently utilizing each of said plurality of resources.
11. The method of claim 9, wherein said associating a respective one of a plurality of flags, further comprises:
associating a respective one of a plurality of variables with each of said plurality of resources, wherein each of said plurality of variables represents a maximum number of clients permitted to utilize each of said plurality of resources.
12. The method of claim 9, further comprising:
in response to determining the workload of one of said plurality of resources exceeds a predetermined value, removing said one of said plurality of resources from said single set.
13. The method of claim 9, further comprising:
in response to determining the workload of one of said plurality of resources does not exceed a predetermined value, associating said one of said plurality of resources in said single set.
14. A system for allocating a plurality of resources on a network, which includes a server and at least one client, said system comprising:
means for associating said plurality of resources within a single set, such that said plurality of resources are allocated only from said single set, such that the number of said plurality of resources within said network may be more easily incremented or decremented;
means for associating a respective one of a plurality of flags with each of said plurality of resources, wherein each of said plurality of flags may be set to one of two states, a first state or a second state;
means for examining said plurality of flags to identify a first flag set to said second state, in response to said server receiving a connection request from said client;
means for assigning an associated one of said plurality of resources to said client, in response to identifying said first flag set to said second state; and
means for setting said first flag to said first state and setting a second flag associated with another one of said plurality of resources to said second state, in response to assigning said associated one of said plurality of resources to said client.
15. The system in claim 14, wherein said means for associating a respective one of a plurality of flags, further comprises:
means for associating a respective one of a plurality of counters with each of said plurality of resources, wherein each of said plurality of counters represents a number of clients currently utilizing each of said plurality of resources.
16. The system of claim 14, wherein said means for associating a respective one of a plurality of flags, further comprises:
means for associating a respective one of a plurality of variables with each of said plurality of resources, wherein each of said plurality of variables represents a maximum number of clients permitted to utilize each of said plurality of resources.
17. The system of claim 14, further comprising:
means for removing said one of said plurality of resources from said single set, in response to determining the workload of one of said plurality of resources exceeds a predetermined value.
18. The system of claim 14, further comprising:
means for associating said one of said plurality of resources in said single set, in response to determining the workload of one of said plurality of resources does not exceed a predetermined value.
19. A computer program product for allocating a plurality of resources on a network, which includes a server and at least one client, said computer program product comprising:
instruction means, embodied within computer-readable media, for associating said plurality of resources within a single set, such that said plurality of resources are allocated only from said single set, such that the number of said plurality of resources within said network may be more easily incremented or decremented;
instruction means, embodied within computer-readable media, for associating a respective one of a plurality of flags with each of said plurality of resources, wherein each of said plurality of flags may be set to one of two states, a first state or a second state;
instruction means, embodied within computer-readable media, for examining said plurality of flags to identify a first flag set to said second state, in response to said server receiving a connection request from said client;
instruction means, embodied within computer-readable media, for assigning an associated one of said plurality of resources to said client, in response to identifying said first flag set to said second state; and
instruction means, embodied within computer-readable media, for setting said first flag to said first state and setting a second flag associated with another one of said plurality of resources to said second state, in response to assigning said associated one of said plurality of resources to said client.
20. The computer program product in claim 19, wherein said instruction means for associating a respective one of a plurality of flags, further comprises:
instruction means, embodied within computer-readable media, for associating a respective one of a plurality of counters with each of said plurality of resources, wherein each of said plurality of counters represents a number of clients currently utilizing each of said plurality of resources.
21. The computer program product of claim 19, wherein said instruction means for associating a respective one of a plurality of flags, further comprises:
instruction means, embodied within computer-readable media, for associating a respective one of a plurality of variables with each of said plurality of resources, wherein each of said plurality of variables represents a maximum number of clients permitted to utilize each of said plurality of resources.
22. The computer program product of claim 19, further comprising:
instruction means, embodied within computer-readable media, for removing said one of said plurality of resources from said single set, in response to determining the workload of one of said plurality of resources exceeds a predetermined value.
23. The computer program product of claim 19, further comprising:
instruction means, embodied within computer-readable media, for associating said one of said plurality of resources in said single set, in response to determining the workload of one of said plurality of resources does not exceed a predetermined value.
US12/057,517 2004-12-02 2008-03-28 System and method for allocating resources on a network Abandoned US20080177830A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/002,545 US7464165B2 (en) 2004-12-02 2004-12-02 System and method for allocating resources on a network
US12/057,517 US20080177830A1 (en) 2004-12-02 2008-03-28 System and method for allocating resources on a network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/057,517 US20080177830A1 (en) 2004-12-02 2008-03-28 System and method for allocating resources on a network

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/002,545 Continuation US7464165B2 (en) 2004-12-02 2004-12-02 System and method for allocating resources on a network

Publications (1)

Publication Number Publication Date
US20080177830A1 true US20080177830A1 (en) 2008-07-24

Family

ID=36575666

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/002,545 Expired - Fee Related US7464165B2 (en) 2004-12-02 2004-12-02 System and method for allocating resources on a network
US12/057,517 Abandoned US20080177830A1 (en) 2004-12-02 2008-03-28 System and method for allocating resources on a network

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/002,545 Expired - Fee Related US7464165B2 (en) 2004-12-02 2004-12-02 System and method for allocating resources on a network

Country Status (1)

Country Link
US (2) US7464165B2 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050165932A1 (en) * 2004-01-22 2005-07-28 International Business Machines Corporation Redirecting client connection requests among sockets providing a same service
TWI302712B (en) * 2004-12-16 2008-11-01 Japan Science & Tech Agency Nd-fe-b base magnet including modified grain boundaries and method for manufacturing the same
US7908606B2 (en) * 2005-05-20 2011-03-15 Unisys Corporation Usage metering system
CN1992736A (en) * 2005-12-30 2007-07-04 西门子(中国)有限公司 IP address distribution method and use thereof
JP5229232B2 (en) * 2007-12-04 2013-07-03 富士通株式会社 Resource lending controller, resource lending process and resource lending program
US20170004020A1 (en) * 2015-06-30 2017-01-05 Coursera, Inc. Automated batch application programming interfaces

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6070191A (en) * 1997-10-17 2000-05-30 Lucent Technologies Inc. Data distribution techniques for load-balanced fault-tolerant web access
US6185623B1 (en) * 1997-11-07 2001-02-06 International Business Machines Corporation Method and system for trivial file transfer protocol (TFTP) subnet broadcast
US6330602B1 (en) * 1997-04-14 2001-12-11 Nortel Networks Limited Scaleable web server and method of efficiently managing multiple servers
US20030163341A1 (en) * 2002-02-26 2003-08-28 International Business Machines Corporation Apparatus and method of dynamically updating dynamic host configuration protocol (DHCP) options
US6718359B2 (en) * 1998-07-15 2004-04-06 Radware Ltd. Load balancing
US6728718B2 (en) * 2001-06-26 2004-04-27 International Business Machines Corporation Method and system for recovering DHCP data
US6813635B1 (en) * 2000-10-13 2004-11-02 Hewlett-Packard Development Company, L.P. System and method for distributing load among redundant independent stateful world wide web server sites
US20050188055A1 (en) * 2003-12-31 2005-08-25 Saletore Vikram A. Distributed and dynamic content replication for server cluster acceleration
US6980550B1 (en) * 2001-01-16 2005-12-27 Extreme Networks, Inc Method and apparatus for server load balancing
US7124188B2 (en) * 1998-12-01 2006-10-17 Network Appliance, Inc. Method and apparatus for policy based class service and adaptive service level management within the context of an internet and intranet
US7155515B1 (en) * 2001-02-06 2006-12-26 Microsoft Corporation Distributed load balancing for single entry-point systems
US7225237B1 (en) * 2000-07-31 2007-05-29 Cisco Technology, Inc. System and method for providing persistent connections based on subnet natural class
US7284067B2 (en) * 2002-02-20 2007-10-16 Hewlett-Packard Development Company, L.P. Method for integrated load balancing among peer servers
US7287090B1 (en) * 2000-12-21 2007-10-23 Noatak Software, Llc Method and system for identifying a computing device in response to a request packet


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9240901B2 (en) * 2003-11-24 2016-01-19 At&T Intellectual Property I, L.P. Methods, systems, and products for providing communications services by determining the communications services require a subcontracted processing service and subcontracting to the subcontracted processing service in order to provide the communications services
US20110125907A1 (en) * 2003-11-24 2011-05-26 At&T Intellectual Property I, L.P. Methods, Systems, and Products for Providing Communications Services
US10230658B2 (en) 2003-11-24 2019-03-12 At&T Intellectual Property I, L.P. Methods, systems, and products for providing communications services by incorporating a subcontracted result of a subcontracted processing service into a service requested by a client device
US8654650B1 (en) 2010-04-30 2014-02-18 Amazon Technologies, Inc. System and method for determining node staleness in a distributed system
US8694639B1 (en) * 2010-09-21 2014-04-08 Amazon Technologies, Inc. Determining maximum amount of resource allowed to be allocated to client in distributed system
US9578080B1 (en) 2010-09-21 2017-02-21 Amazon Technologies, Inc. Resource allocation in distributed systems using grant messages
US8832238B2 (en) * 2011-09-12 2014-09-09 Microsoft Corporation Recording stateless IP addresses
US20130067043A1 (en) * 2011-09-12 2013-03-14 Microsoft Corporation Recording Stateless IP Addresses
US9578130B1 (en) 2012-06-20 2017-02-21 Amazon Technologies, Inc. Asynchronous and idempotent distributed lock interfaces
US10116766B2 (en) 2012-06-20 2018-10-30 Amazon Technologies, Inc. Asynchronous and idempotent distributed lock interfaces
US10191959B1 (en) 2012-06-20 2019-01-29 Amazon Technologies, Inc. Versioned read-only snapshots of shared state in distributed computing environments
US9760529B1 (en) 2014-09-17 2017-09-12 Amazon Technologies, Inc. Distributed state manager bootstrapping
US9852221B1 (en) 2015-03-26 2017-12-26 Amazon Technologies, Inc. Distributed state manager jury selection

Also Published As

Publication number Publication date
US20060123102A1 (en) 2006-06-08
US7464165B2 (en) 2008-12-09

Similar Documents

Publication Publication Date Title
CA2403733C (en) Method for dynamically displaying brand information in a user interface
JP4592184B2 (en) Method and device for accessing a device that is intermittently connected to the network and assigned a static identifier
JP4159337B2 (en) Resolution of virtual network names
JP3654554B2 (en) Network system and DHCP server selection method
JP4041218B2 (en) Implementation of network configuration settings
US6697360B1 (en) Method and apparatus for auto-configuring layer three intermediate computer network devices
EP2214383A1 (en) Automatically Releasing Resources Reserved for Subscriber Devices within a Broadband Access Network
CN1217520C (en) Device for converting internet protocol address and household network system using same
US20190081922A1 (en) Method and system for increasing speed of domain name system resolution within a computing device
JP3792188B2 (en) Method for assigning multiple IP addresses to one NIC, and apparatus suitable therefor
US5884024A (en) Secure DHCP server
US6199112B1 (en) System and method for resolving fibre channel device addresses on a network using the device's fully qualified domain name
US20050027778A1 (en) Automatic configuration of an address allocation mechanism in a computer network
CN100527752C (en) DHCP address allocation method
EP0946027B1 (en) A method and apparatus for configuring a network node to be its own gateway
US7730210B2 (en) Virtual MAC address system and method
US5922049A (en) Method for using DHCP and marking to override learned IP addresses in a network
US6381650B1 (en) Method for finding the address of a workstation assigned a dynamic address
EP0998099A2 (en) Network address management
US6611861B1 (en) Internet hosting and access system and method
US6195706B1 (en) Methods and apparatus for determining, verifying, and rediscovering network IP addresses
US7010585B2 (en) DNS server, DHCP server, terminal and communication system
Guttman Autoconfiguration for IP networking: Enabling local communication
US5557748A (en) Dynamic network configuration
US20020078188A1 (en) Method, apparatus, and program for server based network computer load balancing across multiple boot servers

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION