US20130060907A1 - Handling HTTP-based service requests via IP ports - Google Patents


Info

Publication number
US20130060907A1
Authority
US
United States
Prior art keywords
http
port
program instructions
based service
computer readable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/226,849
Inventor
Fraser P. Bohm
Martin W. J. Cocks
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US13/226,849
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION (Assignors: BOHM, FRASER P.; COCKS, MARTIN W. J.)
Priority to US13/416,961 (published as US9118682B2)
Publication of US20130060907A1
Legal status: Abandoned

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1006 Server selection for load balancing with static server selection, e.g. the same server being selected for a specific client
    • H04L67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L67/1027 Persistence of sessions during load balancing

Definitions

  • With reference now to the figures, and in particular to FIG. 1, there is depicted a block diagram of an exemplary computer 102, which may be utilized by the present disclosure. Note that some or all of the exemplary architecture, including both depicted hardware and software, shown for and within computer 102 may be utilized by software deploying server 150, client computer(s) 152, and/or Hypertext Transfer Protocol (HTTP) servers 154.
  • Computer 102 includes a processor unit 104 that is coupled to a system bus 106 .
  • Processor unit 104 may utilize one or more processors, each of which has one or more processor cores.
  • A video adapter 108, which drives/supports a display 110, is also coupled to system bus 106.
  • System bus 106 is coupled via a bus bridge 112 to an input/output (I/O) bus 114 .
  • I/O interface 116 is coupled to I/O bus 114 .
  • I/O interface 116 affords communication with various I/O devices, including a keyboard 118 , a mouse 120 , a media tray 122 (which may include storage devices such as CD-ROM drives, multi-media interfaces, etc.), a printer 124 , and (if a VHDL chip 137 is not utilized in a manner described below), external USB port(s) 126 . While the format of the ports connected to I/O interface 116 may be any known to those skilled in the art of computer architecture, in one embodiment some or all of these ports are universal serial bus (USB) ports.
  • Network 128 may be an external network such as the Internet, or an internal network such as an Ethernet or a virtual private network (VPN).
  • a hard drive interface 132 is also coupled to system bus 106 .
  • Hard drive interface 132 interfaces with a hard drive 134 .
  • In one embodiment, hard drive 134 populates a system memory 136, which is also coupled to system bus 106.
  • System memory is defined as a lowest level of volatile memory in computer 102 . This volatile memory includes additional higher levels of volatile memory (not shown), including, but not limited to, cache memory, registers and buffers. Data that populates system memory 136 includes computer 102 's operating system (OS) 138 and application programs 144 .
  • OS 138 includes a shell 140 for providing transparent user access to resources such as application programs 144.
  • Generally, shell 140 is a program that provides an interpreter and an interface between the user and the operating system. More specifically, shell 140 executes commands that are entered into a command line user interface or from a file.
  • Thus, shell 140, also called a command processor, is generally the highest level of the operating system software hierarchy and serves as a command interpreter. The shell provides a system prompt, interprets commands entered by keyboard, mouse, or other user input media, and sends the interpreted command(s) to the appropriate lower levels of the operating system (e.g., a kernel 142) for processing.
  • Note that while shell 140 is a text-based, line-oriented user interface, the present disclosure will equally well support other user interface modes, such as graphical, voice, gestural, etc.
  • OS 138 also includes kernel 142 , which includes lower levels of functionality for OS 138 , including providing essential services required by other parts of OS 138 and application programs 144 , including memory management, process and task management, disk management, and mouse and keyboard management.
  • Application programs 144 include a renderer, shown in exemplary manner as a browser 146 .
  • Browser 146 includes program modules and instructions enabling a world wide web (WWW) client (i.e., computer 102 ) to send and receive network messages to the Internet using hypertext transfer protocol (HTTP) messaging, thus enabling communication with software deploying server 150 and other described computer systems.
  • Application programs 144 in computer 102 's system memory also include a service request handling program (SRHP) 148 .
  • SRHP 148 includes code for implementing the processes described below, including those described in FIGS. 2-3 .
  • In one embodiment, computer 102 is able to download SRHP 148 from software deploying server 150, including on an on-demand basis, such that the code from SRHP 148 is not downloaded until runtime or otherwise immediately needed by computer 102.
  • In one embodiment, software deploying server 150 performs all of the functions associated with the present disclosure (including execution of SRHP 148), thus freeing computer 102 from having to use its own internal computing resources to execute SRHP 148.
  • Note that computer 102 may also include alternate memory storage devices such as magnetic cassettes, digital versatile disks (DVDs), Bernoulli cartridges, and the like. These and other variations are intended to be within the spirit and scope of the present disclosure.
  • HTTP servers 202a-n are analogous to the HTTP servers 154 depicted in FIG. 1.
  • In one embodiment, each of the HTTP servers provides a same HTTP-based service, such as providing access to web pages, providing access to a specific web page, providing access to a portal, providing access to a web service (e.g., an application that runs on a computer that is remote from the client's computer), etc.
  • That is, each of the HTTP servers 202a-n has the architecture/software needed to provide the same service as any of the other HTTP servers 202a-n, although perhaps without having the same current capacity due to overloading (as described below). As depicted in FIG. 2, each of the HTTP servers 202a-n has its own and separate IP port (e.g., one of IP ports 204a-n), such that none of the HTTP servers 202a-n shares any of the IP ports 204a-n.
  • Logically coupled to the IP ports 204a-n is a port sharing mechanism 210, which is analogous to computer 102 in FIG. 1; thus, in one embodiment, port sharing mechanism 210 utilizes some or all of the architecture depicted for computer 102.
  • The port sharing mechanism 210 directs requests for HTTP-based services from a client 206 (analogous to client computer 152 shown in FIG. 1) to one of the HTTP servers 202a-n via an associated IP port from IP ports 204a-n.
  • In one embodiment, port sharing mechanism 210 keeps track of which, if any, of the IP ports 204a-n have exceeded either a “hard cap” or a “soft cap” (described below). This information regarding hard/soft cap status is provided by the HTTP servers 202a-n and/or the IP ports 204a-n.
  • Assume now, for exemplary purposes, that IP port 204a is currently overloaded. That is, IP port 204a has a current number of active IP connections and/or transaction requests in a pending transaction queue that exceeds a soft cap of a predetermined value.
  • For example, a soft cap may be such that if there are more than five transaction requests pending in IP port 204a, either from client 206 or from client 206 and other clients 208, then IP port 204a is considered to be “soft cap” overloaded.
  • Note that this “soft cap” is not the same as a “hard cap.” That is, a hard cap, as discussed further below, is a fixed cap on how many transactions can be pending in an IP port. If the hard cap is exceeded, then that IP port simply refuses to initiate a transaction session with the requesting client (e.g., refuses to complete the three-way SYN/SYN-ACK/ACK handshake between the server and the client).
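The accept/accept-then-close/refuse decision produced by the two caps can be sketched as a small triage function. The names, signature, and comparison rules below are illustrative assumptions, not the patent's implementation:

```python
from enum import Enum
from typing import Optional

class Action(Enum):
    ACCEPT = "accept"                    # under both caps: normal session
    ACCEPT_THEN_CLOSE = "accept+close"   # soft cap exceeded: serve, then close
    REFUSE = "refuse"                    # hard cap exceeded: no session at all

def triage(pending: int, soft_cap: int, hard_cap: Optional[int] = None) -> Action:
    """Decide how an IP port treats an incoming request, given how many
    transactions are already pending on that port."""
    if hard_cap is not None and pending > hard_cap:
        return Action.REFUSE             # e.g., never complete SYN/SYN-ACK/ACK
    if pending > soft_cap:
        return Action.ACCEPT_THEN_CLOSE  # execute the request, then terminate
    return Action.ACCEPT
```

With a soft cap of five, for instance, a sixth pending transaction yields ACCEPT_THEN_CLOSE, matching the example above.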
  • Exceeding a “soft cap,” by contrast, results in a processor in the port sharing mechanism (or alternatively, a processor within the HTTP server 202a) directing IP port 204a and HTTP server 202a to accept and execute client 206's request for the HTTP-based service, despite the current number of active IP connections in the IP port 204a exceeding the soft cap.
  • Exceeding the soft cap also results in a message being sent (either from the port sharing mechanism 210 or from the HTTP server 202a) directing the IP port 204a and/or the HTTP server 202a to subsequently terminate the Transmission Control Protocol/Internet Protocol (TCP/IP) connection between the IP port 204a and the client 206 after executing/fulfilling the request for the HTTP-based service.
  • For example, a “FIN” message is sent to the client 206 from the IP port 204a, instructing the client 206 to terminate the TCP/IP session between the IP port 204a and the client 206 (i.e., to clear the session on the client 206's side).
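At the HTTP level, a server can foreshadow that termination by advertising it in the response headers before sending its FIN. A minimal sketch follows; the response layout and header choice are assumptions for illustration:

```python
def build_response(body: bytes, over_soft_cap: bool) -> bytes:
    """Build a minimal HTTP/1.1 response. A port past its soft cap still
    fulfils the request, but signals that the TCP/IP session will be torn
    down once this exchange completes."""
    headers = [
        b"HTTP/1.1 200 OK",
        b"Content-Length: " + str(len(body)).encode("ascii"),
        # Soft-cap overloaded: serve this request, but refuse to keep
        # the connection alive afterwards.
        b"Connection: close" if over_soft_cap else b"Connection: keep-alive",
    ]
    return b"\r\n".join(headers) + b"\r\n\r\n" + body
```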
  • Note that a hard cap can also be used together with a soft cap.
  • For example, in one embodiment a hard cap is set for IP port 204a for a predefined peak traffic time period.
  • This hard cap, which is a predefined limit on IP connections to the first IP port and in this embodiment is only in effect during the predefined peak traffic time, may be smaller or larger than the soft cap.
  • If the hard cap is exceeded, the instruction to the IP port 204a and HTTP server 202a to accept and execute the request from the client 206 for the HTTP-based service is overridden, and the connection between IP port 204a and client 206 is immediately terminated.
  • Client 206 will then send a subsequent request for a new TCP/IP session in order to access the same HTTP-based service that was being provided by HTTP server 202a (or which HTTP server 202a would have attempted to provide if a hard cap had been reached). Rather than resending this request to the same HTTP server 202a, the port sharing mechanism 210 will send the subsequent request to another of the HTTP servers 202b-n (e.g., HTTP server 202c via its IP port 204c). The port sharing mechanism 210 may direct the subsequent request from the client 206 according to various scenarios and embodiments described below.
  • In one scenario, n is a large number, such that there are numerous (e.g., thousands of) HTTP servers 202a-n.
  • In that case, simple probability makes it likely that the subsequent request will be sent not to the overloaded IP port 204a but to another of the IP ports 204b-n.
  • In another scenario, port sharing mechanism 210 interrogates a local register (not shown) of soft/hard cap statuses for the IP ports 204a-n, and thus “knows” which IP port from the IP ports 204a-n has the bandwidth/capacity to handle the subsequent request.
  • In yet another scenario, the processor in the port sharing mechanism 210 has previously transmitted an instruction to the client 206, instructing the client 206 to include an exclusionary message in the subsequent request for the same HTTP-based service.
  • This exclusionary message directs the port sharing mechanism 210 to send the subsequent request for the same HTTP-based service to another HTTP server, from the multiple HTTP servers 202b-n, other than HTTP server 202a (e.g., “send the request to HTTP server 202c via IP port 204c”).
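The routing decision for the retried request, honoring such an exclusionary message against a register of cap statuses, might be sketched as follows. The status map, server names, and deterministic ordering are assumptions, not the patent's design:

```python
from typing import Dict, Optional

def pick_server(over_cap: Dict[str, bool],
                exclude: Optional[str] = None) -> Optional[str]:
    """Choose a server whose IP port is under its soft/hard cap, skipping
    the server named in the client's exclusionary message (if any).
    over_cap maps server name -> True when that server's port is overloaded."""
    for server in sorted(over_cap):  # deterministic order, for the sketch only
        if server != exclude and not over_cap[server]:
            return server
    return None                      # every eligible port is overloaded
```

For example, with server “202a” overloaded and excluded, the retry would be routed to “202c”.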
  • In one embodiment, the hard caps and/or soft caps described herein are adjustable.
  • For example, the size of a hard/soft cap can be adjusted according to a predicted level of request activity during a time of day, a day of the week, a season of the year, etc.
  • If a level of request activity is predicted to be high (greater than some upper predefined level) during a certain time period, then all of the IP ports 204a-n will be called upon to handle high traffic, and thus the soft/hard caps are raised accordingly.
  • Conversely, if the level of request activity is predicted to be low (less than some lower predefined level) during a certain time period, then all of the IP ports 204a-n will be less busy, and thus the soft/hard caps are lowered accordingly.
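One way to realize such scheduled adjustment is to scale a base cap by a predicted activity level. The thresholds and scale factors below are purely illustrative assumptions:

```python
def adjusted_cap(base_cap: int, predicted_load: float,
                 low: float = 0.3, high: float = 0.8) -> int:
    """Scale a soft/hard cap from a predicted request-activity level in
    [0, 1] (derived from, e.g., time of day, day of week, or season)."""
    if predicted_load > high:
        return base_cap * 2            # busy period: raise the cap
    if predicted_load < low:
        return max(1, base_cap // 2)   # quiet period: lower the cap
    return base_cap                    # ordinary period: leave it alone
```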
  • In one embodiment, TCP/IP connections are classified as either persistent (existing until expressly terminated) or transient/temporary (automatically terminating within some predetermined length of time, such as 30 seconds). Any TCP/IP connections to a particular IP port that are identified as temporary are thus ignored when determining whether that particular IP port has reached its soft/hard cap, since the temporary TCP/IP connections will “go away” soon anyway.
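Counting load toward a cap under that rule could be sketched as below; the `Connection` record is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Connection:
    persistent: bool  # True: lives until expressly terminated;
                      # False: auto-terminates within, e.g., 30 seconds

def load_toward_cap(connections) -> int:
    """Count only persistent TCP/IP connections against a port's soft/hard
    cap; transient ones are ignored since they will soon go away anyway."""
    return sum(1 for c in connections if c.persistent)
```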
  • In one embodiment, a particular IP port (e.g., IP port 204a from IP ports 204a-n) eventually clears out its queue of pending transactions (i.e., falls below its soft or hard cap).
  • A message is then sent out (e.g., to the port sharing mechanism 210 and/or the client 206) from the particular IP port (e.g., IP port 204a) indicating that IP port 204a has been reopened to accept new requests.
  • Note that the reopening of IP port 204a and the sending of notifications about the reopening of IP port 204a may be under the control of the HTTP server 202a and/or the port sharing mechanism 210.
  • With reference now to FIG. 3, a high-level flow chart of one or more processes performed by a processor for handling service requests for Hypertext Transfer Protocol (HTTP) based services via Internet Protocol (IP) ports on HTTP servers is presented.
  • A port sharing mechanism is logically coupled to multiple HTTP servers (block 304).
  • Each of the multiple HTTP servers provides a same HTTP-based service, and each of the multiple HTTP servers has a unique IP port that is not shared with any other HTTP server from the multiple HTTP servers.
  • The port sharing mechanism directs requests for HTTP-based services from a client to one of the multiple HTTP servers.
  • A first request for the same HTTP-based service is transmitted to a first IP port in a first HTTP server from the multiple HTTP servers.
  • The first IP port has a current number of active IP connections that exceeds a soft cap (which has been set to a predetermined value).
  • The first IP port and the first HTTP server are instructed (e.g., by a processor in the port sharing mechanism or within the first HTTP server itself) to accept and execute the request for the same HTTP-based service, despite the current number of active IP connections in the first IP port exceeding the soft cap.
  • The first HTTP server is also instructed to subsequently terminate the connection between the first IP port and the client after executing the request for the same HTTP-based service.
  • The port sharing mechanism may receive a subsequent request for the same HTTP-based service from the client after the connection between the first IP port and the client has been terminated. If so, then the port sharing mechanism transmits that subsequent request to another HTTP server, from the multiple HTTP servers, other than the first HTTP server. The process ends at terminator block 314.
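The overall flow of FIG. 3 can be modeled end to end. The class below is a toy sketch; its names, data structures, and tie-breaking rule are assumptions, not the patent's implementation:

```python
class PortSharingMechanism:
    """Toy model: first request to an overloaded port is served but the
    connection is closed; the client's retry is routed elsewhere."""

    def __init__(self, soft_caps):
        self.soft_caps = soft_caps                # server -> soft cap
        self.pending = {s: 0 for s in soft_caps}  # server -> queued requests

    def handle(self, server):
        """Send a request to `server`'s IP port. Returns (served, keep_alive):
        a soft-cap-overloaded port still serves, but closes afterwards."""
        over = self.pending[server] > self.soft_caps[server]
        self.pending[server] += 1
        return True, not over

    def reroute(self, exclude):
        """Pick another, non-overloaded server for the client's retry."""
        ok = [s for s in self.pending
              if s != exclude and self.pending[s] <= self.soft_caps[s]]
        return min(ok, key=lambda s: self.pending[s]) if ok else None
```

A request that lands on an overloaded port is thus still fulfilled, and the follow-up request from the same client ends up on a different server.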
  • The present invention thus presents a novel and useful improvement over the prior art. More specifically, instead of refusing any connection attempt above a preset threshold level as in the prior art, the present invention instructs a back end system (e.g., HTTP servers) to nonetheless accept, process and respond to a request, even though the threshold level has been exceeded.
  • A message is also sent to indicate to the client that the connection is hereafter being terminated by the server.
  • This feature forces a client, which wants to make further requests to the back end system, to re-establish a connection through the port sharing mechanism.
  • the port sharing mechanism then connects the client to another back end system that is not above the threshold level.
  • Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • VHDL (VHSIC Hardware Description Language) is an exemplary design-entry language for Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), and other similar electronic devices.
  • Thus, any software-implemented method described herein may be emulated by a hardware-based VHDL program, which is then applied to a VHDL chip, such as an FPGA.


Abstract

A computer implemented method, system and/or computer program product handles service requests for HTTP-based services via IP ports that are located on HTTP servers. These HTTP servers are logically coupled to a port sharing mechanism that handles service requests from clients, and each of the multiple HTTP servers provides the same HTTP-based service. A request for the HTTP-based service is sent to an IP port in a first HTTP server. However, this IP port has a current number of active IP connections that exceeds a soft cap. Nonetheless, this IP port is directed to accept and execute the request, and then to terminate its connection with the client. Any subsequent request for this same HTTP-based service is directed to another of the multiple HTTP servers.

Description

    BACKGROUND
  • The present disclosure relates to the field of computers, and specifically to computer systems that utilize Internet Protocol (IP) ports. Still more particularly, the present disclosure relates to handling overflow conditions in IP ports on Hypertext Transfer Protocol (HTTP) servers.
  • HTTP servers provide access to HTTP-based services, such as web pages, etc. Communication sessions between a requesting client and the HTTP servers are often via one or more IP ports on the HTTP servers. These communication sessions may be persistent or temporary.
  • SUMMARY
  • A computer implemented method, system and/or computer program product handles service requests for HTTP-based services via IP ports that are located on HTTP servers. These HTTP servers are logically coupled to a port sharing mechanism that handles service requests from clients, and each of the multiple HTTP servers provides the same HTTP-based service. A request for the HTTP-based service is sent to an IP port in a first HTTP server. However, this IP port has a current number of active IP connections that exceeds a soft cap. Nonetheless, this IP port is directed to accept and execute the request, and then to terminate its connection with the client. Any subsequent request for this same HTTP-based service is directed to another of the multiple HTTP servers.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • FIG. 1 depicts an exemplary computer in which the present disclosure may be implemented;
  • FIG. 2 illustrates an exemplary port sharing mechanism that directs client requests for HTTP-based services to one of multiple HTTP servers; and
  • FIG. 3 is a high level flow chart of one or more exemplary steps taken by a processor to handle service requests for HTTP-based services via IP ports that are located on HTTP servers.
  • DETAILED DESCRIPTION
  • As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • With reference now to the figures, and in particular to FIG. 1, there is depicted a block diagram of an exemplary computer 102, which may be utilized by the present disclosure. Note that some or all of the exemplary architecture, including both depicted hardware and software, shown for and within computer 102 may be utilized by software deploying server 150, client computer(s) 152, and/or Hypertext Transfer Protocol (HTTP) servers 154.
  • Computer 102 includes a processor unit 104 that is coupled to a system bus 106. Processor unit 104 may utilize one or more processors, each of which has one or more processor cores. A video adapter 108, which drives/supports a display 110, is also coupled to system bus 106.
  • System bus 106 is coupled via a bus bridge 112 to an input/output (I/O) bus 114. An I/O interface 116 is coupled to I/O bus 114. I/O interface 116 affords communication with various I/O devices, including a keyboard 118, a mouse 120, a media tray 122 (which may include storage devices such as CD-ROM drives, multi-media interfaces, etc.), a printer 124, and (if a VHDL chip 137 is not utilized in a manner described below) external USB port(s) 126. While the format of the ports connected to I/O interface 116 may be any known to those skilled in the art of computer architecture, in one embodiment some or all of these ports are universal serial bus (USB) ports.
  • As depicted, computer 102 is able to communicate with a software deploying server 150 and/or client computer(s) 152 via network 128 using a network interface 130. Network 128 may be an external network such as the Internet, or an internal network such as an Ethernet or a virtual private network (VPN).
  • A hard drive interface 132 is also coupled to system bus 106. Hard drive interface 132 interfaces with a hard drive 134. In one embodiment, hard drive 134 populates a system memory 136, which is also coupled to system bus 106. System memory is defined as a lowest level of volatile memory in computer 102. This volatile memory includes additional higher levels of volatile memory (not shown), including, but not limited to, cache memory, registers and buffers. Data that populates system memory 136 includes computer 102's operating system (OS) 138 and application programs 144.
  • OS 138 includes a shell 140, for providing transparent user access to resources such as application programs 144. Generally, shell 140 is a program that provides an interpreter and an interface between the user and the operating system. More specifically, shell 140 executes commands that are entered into a command line user interface or from a file. Thus, shell 140, also called a command processor, is generally the highest level of the operating system software hierarchy and serves as a command interpreter. The shell provides a system prompt, interprets commands entered by keyboard, mouse, or other user input media, and sends the interpreted command(s) to the appropriate lower levels of the operating system (e.g., a kernel 142) for processing. Note that while shell 140 is a text-based, line-oriented user interface, the present disclosure will equally well support other user interface modes, such as graphical, voice, gestural, etc.
  • As depicted, OS 138 also includes kernel 142, which includes lower levels of functionality for OS 138, including providing essential services required by other parts of OS 138 and application programs 144, including memory management, process and task management, disk management, and mouse and keyboard management.
  • Application programs 144 include a renderer, shown in exemplary manner as a browser 146. Browser 146 includes program modules and instructions enabling a world wide web (WWW) client (i.e., computer 102) to send and receive network messages to and from the Internet using hypertext transfer protocol (HTTP) messaging, thus enabling communication with software deploying server 150 and other described computer systems.
  • Application programs 144 in computer 102's system memory (as well as software deploying server 150's system memory) also include a service request handling program (SRHP) 148. SRHP 148 includes code for implementing the processes described below, including those described in FIGS. 2-3. In one embodiment, computer 102 is able to download SRHP 148 from software deploying server 150, including in an on-demand basis, such that the code from SRHP 148 is not downloaded until runtime or otherwise immediately needed by computer 102. Note further that, in one embodiment of the present disclosure, software deploying server 150 performs all of the functions associated with the present disclosure (including execution of SRHP 148), thus freeing computer 102 from having to use its own internal computing resources to execute SRHP 148.
  • The hardware elements depicted in computer 102 are not intended to be exhaustive, but rather are representative to highlight essential components required by the present disclosure. For instance, computer 102 may include alternate memory storage devices such as magnetic cassettes, digital versatile disks (DVDs), Bernoulli cartridges, and the like. These and other variations are intended to be within the spirit and scope of the present disclosure.
  • With reference now to FIG. 2, consider the exemplary back office systems depicted as HTTP servers 202 a-n (HTTP Servers A-N), where “n” is an integer. HTTP servers 202 a-n are analogous to the HTTP servers 154 depicted in FIG. 1. In one embodiment, each of the HTTP servers provides a same HTTP-based service, such as providing access to web pages, providing access to a specific web page, providing access to a portal, providing access to a web service (e.g., an application that runs on a computer that is remote from the client's computer), etc. That is, each of the HTTP servers 202 a-n has the architecture/software needed to provide the same service as any of the other HTTP servers 202 a-n, although perhaps without having the same current capacity due to overloading (as described below). As depicted in FIG. 2, each of the HTTP servers 202 a-n has its own and separate IP port (e.g., one of IP ports 204 a-n), such that none of the HTTP servers 202 a-n shares any of the IP ports 204 a-n.
  • Logically coupled to the IP ports 204 a-n is a port sharing mechanism 210, which is analogous to computer 102 in FIG. 1, and thus in one embodiment port sharing mechanism 210 utilizes some or all of the architecture depicted for computer 102. The port sharing mechanism 210 directs requests for HTTP-based services from a client 206 (analogous to client computer 152 shown in FIG. 1) to one of the HTTP servers 202 a-n via an associated IP port from IP ports 204 a-n. In one embodiment, port sharing mechanism 210 keeps track of which, if any, of the IP ports 204 a-n have exceeded either a “hard cap” or a “soft cap” (described below). This information regarding hard/soft cap status is provided by the HTTP servers 202 a-n and/or the IP ports 204 a-n.
  • While not explicitly depicted in FIG. 2, assume that client 206 has sent a request for the same HTTP-based service (i.e., the same HTTP-based service that is provided by any and all of the HTTP servers 202 a-n) to HTTP server 202 a via port sharing mechanism 210 and IP port 204 a. However, IP port 204 a is currently overloaded. That is, IP port 204 a has a current number of active IP connections and/or transaction requests in a pending transaction queue that exceeds a soft cap of a predetermined value. For example, a soft cap may be such that if there are more than five transaction requests pending in IP port 204 a, either from client 206 or from client 206 and other clients 208, then IP port 204 a is considered to be “soft cap” overloaded. However, this “soft cap” is not the same as a “hard cap.” That is, a hard cap, as will be discussed further below, is a fixed cap on how many transactions can be pending in an IP port. If the hard cap is exceeded, then that IP port simply refuses to initiate a transaction session with the requesting client (e.g., refuses to complete a three-way handshake of SYN/SYN-ACK/ACK between the server and the client).
  • In accordance with one embodiment of the present invention, however, a “soft cap” results in a processor in the port sharing mechanism (or alternatively, a processor within the HTTP server 202 a) directing IP port 204 a and HTTP server 202 a to accept and execute client 206's request for the HTTP-based service, despite the current number of active IP connections in the IP port 204 a exceeding the soft cap. Exceeding the soft cap also results in a message being sent (either from the port sharing mechanism 210 or from the HTTP server 202 a) directing the IP port 204 a and/or the HTTP server 202 a to subsequently terminate the Transmission Control Protocol/Internet Protocol (TCP/IP) connection between the IP port 204 a and the client 206 after executing/fulfilling the request for HTTP-based service. Further, as depicted in FIG. 2, a “FIN” message is sent to the client 206 from the IP port 204 a, instructing the client 206 to terminate (i.e., to clear the session on the client 206's side) the TCP/IP session between the IP port 204 a and the client 206.
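The accept-then-terminate behavior described above can be sketched in ordinary HTTP/1.1 terms: the overloaded port still serves the request, but the response is marked so that the connection is torn down afterward. This is a minimal illustration, not the patented mechanism itself; the function name, the default soft-cap value, and the use of a `Connection: close` header (standing in for the TCP FIN the text describes) are assumptions made for the sketch.

```python
def build_response(body: str, pending: int, soft_cap: int = 5) -> str:
    """Build a minimal HTTP/1.1 response; when the port is soft-cap
    overloaded, still serve the request but signal that the server
    will terminate the TCP session after this response."""
    headers = [
        "HTTP/1.1 200 OK",
        f"Content-Length: {len(body)}",
    ]
    if pending > soft_cap:
        # Soft cap exceeded: accept and execute anyway, then close
        # (analogous to sending FIN after fulfilling the request).
        headers.append("Connection: close")
    return "\r\n".join(headers) + "\r\n\r\n" + body
```

A client receiving `Connection: close` must open a fresh session for its next request, which is exactly what lets the port sharing mechanism reroute it.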
  • In one embodiment, a hard cap can also be used with a soft cap. For example, assume that a hard cap is set for IP port 204 a for a predefined peak traffic time period. This hard cap, which is a predefined limit for IP connections to the first IP port and in this embodiment is only in effect during the predefined peak traffic time, may be smaller or larger than the soft cap. Regardless of whether the hard cap is smaller or larger than the soft cap, if a determination is made that the hard cap has been exceeded during the predefined peak traffic time, then the instruction to the IP port 204 a and HTTP server 202 a to accept and execute the request from the client 206 for the HTTP-based service is overridden, and the connection between IP port 204 a and client 206 is immediately terminated.
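Under stated assumptions (the function name and the peak-hour window are illustrative, not from the disclosure), the interplay of the two caps might be decided as follows: exceeding the soft cap yields “accept, then close,” while a hard cap that is only in force during peak hours overrides that and terminates the connection immediately.

```python
def port_decision(pending: int, soft_cap: int, hard_cap: int,
                  hour: int, peak_hours: range = range(9, 17)) -> str:
    """Decide how an IP port treats a new request. The hard cap is only
    in effect during the predefined peak traffic period; when exceeded
    there, it overrides the soft-cap behavior."""
    if hour in peak_hours and pending > hard_cap:
        return "terminate"          # hard cap: refuse/tear down at once
    if pending > soft_cap:
        return "accept_then_close"  # soft cap: serve one request, then FIN
    return "accept"                 # under both caps: normal handling
```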
  • As depicted in FIG. 2 and suggested by the “SYN” command from client 206 to one of the IP ports 204 a-n via the port sharing mechanism 210, client 206 will then send a subsequent request for a new TCP/IP session in order to access the same HTTP-based service that was being provided by HTTP server 202 a (or which HTTP server 202 a would have attempted to provide if a hard cap had been reached). Rather than resending this request to the same HTTP server 202 a, the port sharing mechanism 210 will send the subsequent request to another of the HTTP servers 202 b-n (e.g., HTTP server 202 c via its IP port 204 c). The port sharing mechanism 210 may direct the subsequent request from the client 206 according to various scenarios and embodiments described below.
  • In one scenario/embodiment, assume that “n” is a large number, such that there are numerous (e.g., thousands of) HTTP servers 202 a-n. In this case, simple probability makes it unlikely that the subsequent request will be re-sent to the overloaded IP port 204 a; it will most likely be routed instead to another of the IP ports 204 b-n.
  • In one scenario/embodiment, port sharing mechanism 210 interrogates a local register (not shown) of soft/hard cap statuses in the IP ports 204 a-n, and thus “knows” which IP port from the IP ports 204 a-n has the bandwidth/capacity to handle the subsequent request.
  • In one scenario/embodiment, the processor in the port sharing mechanism 210 previously transmitted an instruction to the client 206, instructing the client 206 to include an exclusionary message in the subsequent request for the same HTTP-based service. This exclusionary message directs the port sharing mechanism 210 to send this subsequent request for the same HTTP-based service to another HTTP server, from the multiple HTTP servers 202 b-n, other than HTTP server 202 a (e.g., “send the request to HTTP server 202 c via IP port 204 c”).
  • Note that, in one embodiment, the hard caps and/or soft caps described herein are adjustable. For example, the size of a hard/soft cap can be adjusted according to a predicted level of request activity during a time of day, a day of the week, a season of the year, etc. Thus, if a level of request activity is predicted to be high (greater than some upper predefined level) during a certain time period, then all of the IP ports 204 a-n will be called upon to handle high traffic, and thus the soft/hard caps are raised accordingly. Conversely, if the level of request activity is predicted to be low (less than some lower predefined level) during a certain time period, then all of the IP ports 204 a-n will be less busy, and thus the soft/hard caps are lowered accordingly.
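The cap-adjustment rule above can be sketched numerically; the scaling factors and the load thresholds are assumptions for the sketch (the disclosure specifies only that caps are raised when predicted activity exceeds an upper level and lowered when it falls below a lower level):

```python
def adjusted_cap(base_cap: int, predicted_load: float,
                 low: float = 0.3, high: float = 0.8) -> int:
    """Raise the cap when request activity is predicted to be high,
    lower it when predicted to be low, else leave it unchanged."""
    if predicted_load > high:
        return int(base_cap * 1.5)          # busy period: raise the cap
    if predicted_load < low:
        return max(1, int(base_cap * 0.5))  # quiet period: lower the cap
    return base_cap
```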
  • Note also that, in one embodiment, when determining if a hard/soft cap has been reached, consideration is given as to whether the current TCP/IP connections are persistent (exist until expressly terminated) or transient/temporary (automatically terminate before some predetermined length of time such as 30 seconds). Any TCP/IP connections, to a particular IP port, which are identified as being temporary are thus ignored when determining if that particular IP port has reached its soft/hard cap, since the temporary TCP/IP connections will “go away” soon anyway.
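Counting only persistent connections toward the cap might look like the following (the connection representation as a dict with a `persistent` flag is an assumption for the sketch):

```python
def capped_connection_count(connections: list[dict]) -> int:
    """Count only persistent TCP/IP connections toward the soft/hard cap;
    transient connections will terminate on their own and are ignored."""
    return sum(1 for c in connections if c.get("persistent", False))
```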
  • In one embodiment, as soon as a particular IP port (from IP ports 204 a-n) has cleared out its queue of pending transactions (i.e., has fallen below its soft or hard cap), then that particular IP port is reopened. A message is then sent out (e.g., to the port sharing mechanism 210 and/or the client 206) from the particular IP port (e.g., IP port 204 a) indicating that IP port 204 a has been reopened to accept new requests. The reopening of IP port 204 a, and the sending of notifications of that reopening, may be under the control of the HTTP server 202 a and/or the port sharing mechanism 210.
  • With reference now to FIG. 3, a high level flow chart of one or more processes performed by a processor for handling service requests for Hypertext Transfer Protocol (HTTP) based services via Internet Protocol (IP) ports on HTTP servers is presented. After initiator block 302, a port sharing mechanism is logically coupled to multiple HTTP servers (block 304). Each of the multiple HTTP servers provides a same HTTP-based service, and each of the multiple HTTP servers has a unique IP port that is not shared with any other HTTP server from the multiple HTTP servers. As described herein, the port sharing mechanism directs requests for HTTP-based services from a client to one of the multiple HTTP servers.
  • As depicted in block 306, a first request for the same HTTP-based service is transmitted to a first IP port in a first HTTP server from the multiple HTTP servers. However, the first IP port has a current number of active IP connections that exceeds a soft cap (which has been set to a predetermined value). Nonetheless, as described in block 308, the first IP port and the first HTTP server are instructed (e.g., by a processor in the port sharing mechanism or within the first HTTP server itself) to accept and execute the request for the same HTTP-based service, despite the current number of active IP connections in the first IP port exceeding the soft cap. The first HTTP server is also instructed to subsequently terminate a connection between the first IP port and the client after executing the request for the same HTTP-based service.
  • As suggested in query block 310, the port sharing mechanism may receive a subsequent request for the same HTTP-based service from the client after the connection between the first IP port and the client has been terminated. If so, then the port sharing mechanism transmits that subsequent request to another HTTP server, from the multiple HTTP servers, which is other than the first HTTP server. The process ends at terminator block 314.
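The flow of FIG. 3 can be exercised end to end with a toy model (the server names, the load table, and the simple "next server" retry choice are all assumptions for the sketch): the first request lands on a soft-capped port, is served anyway, the connection is closed, and the retry is routed to a different server.

```python
def handle_flow(servers: list[str], pending: dict[str, int],
                soft_cap: int = 5) -> list[tuple[str, str]]:
    """Simulate blocks 306-312: send to the first server; if its port is
    over the soft cap, serve-then-close and reroute the retry elsewhere.
    Assumes at least two servers when the first is overloaded."""
    first = servers[0]
    trace = []
    if pending[first] > soft_cap:
        trace.append((first, "served_then_closed"))  # accept, execute, FIN
        trace.append((servers[1], "served"))         # rerouted retry
    else:
        trace.append((first, "served"))              # normal handling
    return trace
```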
  • As described herein, the present invention presents a novel and useful improvement over the prior art. More specifically, instead of refusing any connection attempt above a preset threshold level as in the prior art, the present invention instructs a back end system (e.g., HTTP servers) to nonetheless accept, process and respond to a request, even though the threshold level has been exceeded. As detailed above, when the response to the service request is sent back to the client, a message is also sent to indicate to the client that the connection is hereafter being terminated by the server. This feature forces a client, which wants to make further requests to the back end system, to re-establish a connection through the port sharing mechanism. The port sharing mechanism then connects the client to another back end system that is not above the threshold level.
  • The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
  • The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
  • The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of various embodiments of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
  • Note further that any methods described in the present disclosure may be implemented through the use of a VHDL (VHSIC Hardware Description Language) program and a VHDL chip. VHDL is an exemplary design-entry language for Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), and other similar electronic devices. Thus, any software-implemented method described herein may be emulated by a hardware-based VHDL program, which is then applied to a VHDL chip, such as a FPGA.
  • Having thus described embodiments of the invention of the present application in detail and by reference to illustrative embodiments thereof, it will be apparent that modifications and variations are possible without departing from the scope of the invention defined in the appended claims.

Claims (14)

1-7. (canceled)
8. A computer program product for handling service requests for Hypertext Transfer Protocol (HTTP) based services via Internet Protocol (IP) ports on HTTP servers, the computer program product comprising:
a computer readable storage media;
first program instructions to transmit a first request for a same HTTP-based service to a first IP port in a first HTTP server from multiple HTTP servers, wherein the first IP port is logically coupled to a port sharing mechanism, wherein each of the multiple HTTP servers provides the same HTTP-based service, wherein each of the multiple HTTP servers has a unique IP port that is not shared with any other HTTP server from the multiple HTTP servers, wherein the port sharing mechanism directs requests for HTTP-based services from a client to one of the multiple HTTP servers, and wherein the first IP port has a current number of active IP connections that exceeds a soft cap of a predetermined value;
second program instructions to direct the first IP port and the first HTTP server to accept and execute said request for said same HTTP-based service, despite the current number of active IP connections in the first IP port exceeding the soft cap, and to subsequently transmit a message to terminate a connection between the first IP port and the client after executing said request for said same HTTP-based service;
third program instructions to receive a subsequent request for said same HTTP-based service from the client after said connection between the first IP port and the client has been terminated; and
fourth program instructions to transmit the subsequent request for said same HTTP-based service to an HTTP server, from the multiple HTTP servers, other than the first HTTP server; and wherein
the first, second, third, and fourth program instructions are stored on the computer readable storage media.
9. The computer program product of claim 8, further comprising:
fifth program instructions to set a hard cap for the first IP port, wherein the hard cap is a predefined limit for IP connections to the first IP port, and wherein the hard cap is only in effect during a predefined peak traffic time; and
sixth program instructions to, in response to determining that the hard cap has been exceeded during the predefined peak traffic time, override said directing of said first IP port and said first HTTP server to accept and execute said request for said same HTTP-based service and to immediately terminate the connection between the first IP port and the client; and wherein the fifth and sixth program instructions are stored on the computer readable storage media.
10. The computer program product of claim 8, further comprising:
fifth program instructions to identify which IP connections with the first IP port are temporary connections, wherein a temporary connection is scheduled to automatically terminate before a predetermined length of time; and
sixth program instructions to ignore the temporary connections when determining if said current number of active IP connections to said first IP port has exceeded said predetermined value; and wherein
the fifth and sixth program instructions are stored on the computer readable storage media.
11. The computer program product of claim 8, further comprising:
fifth program instructions to adjust a size of the soft cap according to a predicted level of request activity for said same HTTP-based service during different times of a day; and wherein the fifth program instructions are stored on the computer readable storage media.
12. The computer program product of claim 8, further comprising:
fifth program instructions to adjust a size of the soft cap according to a predicted level of request activity for said same HTTP-based service during different days of a week; and wherein the fifth program instructions are stored on the computer readable storage media.
13. The computer program product of claim 8, further comprising:
fifth program instructions to transmit an instruction to the client to include an exclusionary message in the subsequent request for said same HTTP-based service, wherein said exclusionary message directs said port sharing mechanism to send any subsequent request for said same HTTP-based service to said HTTP server, from the multiple HTTP servers, other than the first HTTP server; and wherein
the fifth program instructions are stored on the computer readable storage media.
14. The computer program product of claim 8, further comprising:
fifth program instructions to transmit a first instruction to the first HTTP server to reopen said first IP port in response to the current number of active IP connections in the first IP port falling below the soft cap; and
sixth program instructions to transmit a second instruction to the first HTTP server to transmit a reopen message indicating that the first IP port has reopened; and wherein the fifth and sixth program instructions are stored on the computer readable storage media.
15. A computer system comprising:
a processor, a computer readable memory, and a computer readable storage media;
first program instructions to transmit a first request for a same HTTP-based service to a first IP port in a first HTTP server from multiple HTTP servers, wherein the first IP port is logically coupled to a port sharing mechanism, wherein each of the multiple HTTP servers provides the same HTTP-based service, wherein each of the multiple HTTP servers has a unique IP port that is not shared with any other HTTP server from the multiple HTTP servers, wherein the port sharing mechanism directs requests for HTTP-based services from a client to one of the multiple HTTP servers, and wherein the first IP port has a current number of active IP connections that exceeds a soft cap of a predetermined value;
second program instructions to direct the first IP port and the first HTTP server to accept and execute said request for said same HTTP-based service, despite the current number of active IP connections in the first IP port exceeding the soft cap, and to subsequently terminate a connection between the first IP port and the client after executing said request for said same HTTP-based service;
third program instructions to receive a subsequent request for said same HTTP-based service from the client after said connection between the first IP port and the client has been terminated; and
fourth program instructions to transmit the subsequent request for said same HTTP-based service to an HTTP server, from the multiple HTTP servers, other than the first HTTP server; and wherein
the first, second, third, and fourth program instructions are stored on the computer readable storage media for execution by the processor via the computer readable memory.
16. The computer system of claim 15, further comprising:
fifth program instructions to set a hard cap for the first IP port, wherein the hard cap is a predefined limit for IP connections to the first IP port, and wherein the hard cap is only in effect during a predefined peak traffic time; and
sixth program instructions to, in response to determining that the hard cap has been exceeded during the predefined peak traffic time, override said directing of said first IP port and said first HTTP server to accept and execute said request for said same HTTP-based service and to immediately terminate the connection between the first IP port and the client; and wherein the fifth and sixth program instructions are stored on the computer readable storage media for execution by the processor via the computer readable memory.
17. The computer system of claim 15, further comprising:
fifth program instructions to identify which IP connections with the first IP port are temporary connections, wherein a temporary connection is scheduled to automatically terminate before a predetermined length of time; and
sixth program instructions to ignore the temporary connections when determining if said current number of active IP connections to said first IP port has exceeded said predetermined value; and wherein
the fifth and sixth program instructions are stored on the computer readable storage media for execution by the processor via the computer readable memory.
18. The computer system of claim 15, further comprising:
fifth program instructions to adjust a size of the soft cap according to a predicted level of request activity for said same HTTP-based service during different times of a day; and wherein the fifth program instructions are stored on the computer readable storage media for execution by the processor via the computer readable memory.
19. The computer system of claim 15, further comprising:
fifth program instructions to transmit an instruction to the client to include an exclusionary message in the subsequent request for said same HTTP-based service, wherein said exclusionary message directs said port sharing mechanism to send any subsequent request for said same HTTP-based service to said HTTP server, from the multiple HTTP servers, other than the first HTTP server; and wherein
the fifth program instructions are stored on the computer readable storage media for execution by the processor via the computer readable memory.
20. The computer system of claim 15, further comprising:
fifth program instructions to transmit a first instruction to the first HTTP server to reopen said first IP port in response to the current number of active IP connections in the first IP port falling below the soft cap; and
sixth program instructions to transmit a second instruction to the first HTTP server to transmit a reopen message indicating that the first IP port has reopened; and wherein the fifth and sixth program instructions are stored on the computer readable storage media for execution by the processor via the computer readable memory.
US13/226,849 2011-09-07 2011-09-07 Handling http-based service requests via ip ports Abandoned US20130060907A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/226,849 US20130060907A1 (en) 2011-09-07 2011-09-07 Handling http-based service requests via ip ports
US13/416,961 US9118682B2 (en) 2011-09-07 2012-03-09 Handling HTTP-based service requests via IP ports

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/226,849 US20130060907A1 (en) 2011-09-07 2011-09-07 Handling http-based service requests via ip ports

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/416,961 Continuation US9118682B2 (en) 2011-09-07 2012-03-09 Handling HTTP-based service requests via IP ports

Publications (1)

Publication Number Publication Date
US20130060907A1 true US20130060907A1 (en) 2013-03-07

Family

ID=47754004

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/226,849 Abandoned US20130060907A1 (en) 2011-09-07 2011-09-07 Handling http-based service requests via ip ports
US13/416,961 Expired - Fee Related US9118682B2 (en) 2011-09-07 2012-03-09 Handling HTTP-based service requests via IP ports

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/416,961 Expired - Fee Related US9118682B2 (en) 2011-09-07 2012-03-09 Handling HTTP-based service requests via IP ports

Country Status (1)

Country Link
US (2) US20130060907A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11249986B2 (en) * 2019-12-17 2022-02-15 Paypal, Inc. Managing stale connections in a distributed system

Citations (3)

Publication number Priority date Publication date Assignee Title
US20050055435A1 (en) * 2003-06-30 2005-03-10 Abolade Gbadegesin Network load balancing with connection manipulation
US20070288619A1 (en) * 2004-08-25 2007-12-13 Sun-Mi Jun Terminal Apparatus For Wireless Connection And A Wireless Connection Administration Method Using The Same
US7567504B2 (en) * 2003-06-30 2009-07-28 Microsoft Corporation Network load balancing with traffic routing

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
EP0733971A3 (en) 1995-03-22 1999-07-07 Sun Microsystems, Inc. Method and apparatus for managing connections for communication among objects in a distributed object system
US7231445B1 (en) 2000-11-16 2007-06-12 Nortel Networks Limited Technique for adaptively distributing web server requests
US7009938B2 (en) 2001-06-27 2006-03-07 International Business Machines Corporation Reduction of server overload
US20030021260A1 (en) * 2001-07-25 2003-01-30 Daley Robert S. System and method for frame selection in IP-based CDMA network
US7734726B2 (en) 2001-11-27 2010-06-08 International Business Machines Corporation System and method for dynamically allocating processing on a network amongst multiple network servers
US7752629B2 (en) 2004-05-21 2010-07-06 Bea Systems Inc. System and method for application server with overload protection


Also Published As

Publication number Publication date
US9118682B2 (en) 2015-08-25
US20130060909A1 (en) 2013-03-07

Similar Documents

Publication Publication Date Title
US9148496B2 (en) Dynamic runtime choosing of processing communication methods
JP7018498B2 (en) Transporting control data in proxy-based network communication
US8745204B2 (en) Minimizing latency in live virtual server migration
US9578132B2 (en) Zero copy data transfers without modifying host side protocol stack parameters
JP2018509674A (en) Clustering host-based non-volatile memory using network-mapped storage
US20150281127A1 (en) Data packet processing in sdn
US9390036B2 (en) Processing data packets from a receive queue in a remote direct memory access device
EP2840576A1 (en) Hard disk and data processing method
US11249826B2 (en) Link optimization for callout request messages
US11212368B2 (en) Fire-and-forget offload mechanism for network-based services
US11330047B2 (en) Work-load management in a client-server infrastructure
US20160248836A1 (en) Scalable self-healing architecture for client-server operations in transient connectivity conditions
US9686202B2 (en) Network-specific data downloading to a mobile device
US9118682B2 (en) Handling HTTP-based service requests Via IP ports
US10003542B2 (en) Data streaming scheduler for dual chipset architectures that includes a high performance chipset and a low performance chipset
EP3886396B1 (en) Methods for dynamically controlling transmission control protocol push functionality and devices thereof
US10104001B2 (en) Systems and methods to early detect link status of multiple paths through an explicit congestion notification based proxy
US20230246966A1 (en) Flexible load balancing on multipath networks
US11044350B1 (en) Methods for dynamically managing utilization of Nagle's algorithm in transmission control protocol (TCP) connections and devices thereof
US8806056B1 (en) Method for optimizing remote file saves in a failsafe way
US9232002B1 (en) Migrating connection flows
JP6304739B2 (en) USB relay device, USB relay method, and USB relay program
US10863526B2 (en) System and method for prioritizing data traffic
US8051192B2 (en) Methods and systems for presentation layer redirection for network optimization
US8230078B2 (en) Accept and receive enhancements

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOHM, FRASER P.;COCKS, MARTIN W. J.;REEL/FRAME:026865/0894

Effective date: 20110905

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE