WO2000072167A1 - Method and apparatus for implementing a workgroup server array - Google Patents

Method and apparatus for implementing a workgroup server array

Info

Publication number
WO2000072167A1
Authority
WO
WIPO (PCT)
Prior art keywords
workgroup
teamprocessors
server
teamprocessor
system recited
Prior art date
Application number
PCT/US2000/013595
Other languages
English (en)
Inventor
Ivan Chung-Shung Hwang
Original Assignee
Hwang Ivan Chung Shung
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hwang Ivan Chung Shung filed Critical Hwang Ivan Chung Shung
Priority to JP2000620492A priority Critical patent/JP4864210B2/ja
Priority to KR1020017000934A priority patent/KR20010074733A/ko
Priority to CA002338025A priority patent/CA2338025C/fr
Priority to AU52738/00A priority patent/AU5273800A/en
Priority to EP00937591A priority patent/EP1114372A4/fr
Priority to US09/744,194 priority patent/US6715100B1/en
Publication of WO2000072167A1 publication Critical patent/WO2000072167A1/fr

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1017 Server selection for load balancing based on a round robin mechanism
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1034 Reaction to server failures by a load balancer

Definitions

  • the present invention generally relates to a server cluster, and more particularly to a method and apparatus for implementing a workgroup server array and its architecture for building various server clusters to accommodate scalable web-based Intranet, Extranet and Internet mission-critical applications.
  • the inventive server array comprises team/workgroup computers equipped with workgroup-based direct-access servers and controlling devices, as described in Applicant's Patent No. 5,802,391 entitled "DIRECT-ACCESS TEAM/WORKGROUP SERVER SHARED BY TEAM/WORKGROUPED COMPUTERS WITHOUT USING A NETWORK OPERATING SYSTEM". Furthermore, this inventive server array creates a workgroup-server-array-based architecture, which can be employed to construct various highly available, scalable and mission-critical server clusters.
  • In order to achieve high scalability and availability, the trend is toward systems that involve many servers working together, i.e., server clusters, to deliver the applications that end users request. Furthermore, a large-scale web-based service requires an architecture for building server clusters, so that availability, scalability, reliability, performance, management and security issues can be accommodated.
  • The conventional single-server-based 3-tier architecture (SS-3 architecture) comprises:
  • first-tier components, which are load balancers;
  • second-tier components, which are application servers; and
  • third-tier components, which are database and file servers.
  • Each individual server, which can be PC-based, super-micro-based or mini-computer-based, comprises multiple CPUs with parallel processing capabilities using an operating system such as WinNT, Solaris, Linux or Unix.
  • As for each tiered component: a) Load balancers - Analyze all the incoming traffic and re-direct each individual web-based query/request to one of the available second-tier application servers that are attached.
  • the load balancer distributes requests to specific second-tier web-based application servers based on the nature of the request and the availability and capability of the load-balanced web application server.
  • b) Application servers - Receive the assignment from the first-tier load balancer, carry out the web-based applications and interface with the third-tier database and file servers for application-oriented data retrieval.
  • each application server may differ from the others in hardware and software configuration, creating management complexity for the load balancer.
  • each application server handles both loyalty-based and non-loyalty-based queries, creating non-coherent program groups with different levels of security measures.
  • each application server does not have remote boot capability unless a network-access-based secondary processor is included, so that if the primary processor of the server fails, the secondary processor, accessed by other network-based management servers, can be triggered to reboot the primary processor.
  • c) Database/file servers - Client-server-based servers that process database/file queries from all the second-tier application servers, which are deemed clients.
  • d) Inter-tier communication switches - Required between the first-tier load balancer and the second-tier application servers, and between the application servers and the third-tier file and database servers. Since every component is network-based, all communication between servers is handled through these two switches, creating unnecessary inter-tier traffic bottlenecks and management overhead.
  • 1. More tiers means more components, which create more single points of failure. All the load balancers, application servers, file and database servers, routers and switches should have a fail-over scheme, so that mission-critical applications can be maintained without failure. Even though an overall fail-over scheme can be developed, it is not efficient or cost-effective, because too many hardware configurations and software programs are involved.
  • 2. Server cluster management: a) The monitoring and management of single-server-based server clusters becomes complicated because of the complexity of each component with regard to inter-tier communication. A single software upgrade tends to create software incompatibility, because too many software programs from various vendors are involved and may also need to be upgraded. b) The overall performance is not easily optimized.
  • Once a server cluster is built based on the SS-3 architecture, it has to meet the criteria of at least handling steady-state operation smoothly and accommodating peak-time operation without glitches. However, there are no distributed small-scale optimal points that can be gauged, thereby adding uncertainty in controlling the steady-state operation and restricting the measures available for dealing with peak-time operation. c) High availability and cost-effective linear scalability are difficult to maintain if too many database-centric requests must be serviced concurrently once high-speed web access is prevalent. Currently, web-based queries are based on a 56 kbps narrow-band transfer rate, and the related services are centered on web-page delivery.
  • Once high-speed web access becomes prevalent, the SS-3 architecture will have difficulty maintaining high availability, because 20 times more traffic is generated within the server cluster, stressing the capability of the fail-over load balancers, creating bottlenecks in inter-tier communications and severely diminishing the return on SS-3-based scalability.
  • the aforementioned server cluster, which is based on the single-server-based architecture, cannot adequately provide highly available and scalable solutions for large-scale web-based mission-critical applications efficiently and cost-effectively.
  • the objects of this invention are accomplished not only by resolving the above-mentioned deficiencies, but also by devising technological breakthroughs in building a workgroup-based server array and its architecture, so that highly available and scalable solutions for large-scale web-based mission-critical applications can be accommodated efficiently and cost-effectively.
  • the present invention employs a plurality of team/workgroup computers, hereinafter referred to as TeamProcessors, housed in workgroup-computer chassis, hereinafter referred to as TeamChassis, together with a plurality of workgroup-based direct-access servers, hereinafter referred to as TeamServers, as described in Applicant's Patent No. 5,802,391. Based on these building blocks, various workgroup server array configurations can be implemented.
  • the present invention further comprises a unique modular workgroup-based controlling and monitoring device, hereinafter referred to as TeamPanel, which provides local and remote monitoring and reboot management, task switching, load balancing and fail-over control functions.
  • any particularly configured workgroup server array can be accommodated either by a single or by multiple TeamPanels cascaded together.
  • the present invention further comprises a plurality of the above-mentioned Team building blocks, so that preferred workgroup server arrays for various configurations can be built to provide a number of unique underlying functions. Based on the preferred data structure and data flow, these underlying functions include, but are not limited to, internal/external controlled task switching, workgroup-based device sharing, load balancing, fail-over, monitoring and management, security and performance measurements.
  • the present invention and its related architecture resolve the deficiencies inherent in the conventional single-server-based architecture by eliminating unnecessary network-access-based components and replacing them with workgroup-based direct-access components, thus reducing unnecessary network traffic and decreasing the number of single-point failures.
  • a plurality of workgroup server arrays based on a specific application can be formed as a workgroup server cluster, so that highly available and scalable mission critical web services based on that particular application can be accommodated.
  • a plurality of various application-based workgroup server clusters can be constructed in both serial and parallel manners to provide large scale multi-application web-based solutions for accommodating thousands of users concurrently even with broadband Quality of Service (QOS) intact.
  • FIG. 1A is a functional block diagram illustrating the preferred workgroup processor, i.e., TeamProcessor, as one of the apparatuses for building a preferred workgroup server array.
  • FIG. 1B is a functional block diagram illustrating the preferred workgroup computer chassis, i.e., TeamChassis, which can house multiple TeamProcessors, as one of the apparatuses for building a preferred workgroup server array.
  • FIG. 1C is a functional block diagram illustrating one of the preferred integrated configurations, which comprises eight (8) preferred TeamProcessors networked and workgrouped together via multiple links, as well as four (4) preferred TeamServers, as one of the embodiments of the present invention.
  • FIG. 1D is a functional block diagram illustrating the preferred modular workgroup-based monitoring and management device, i.e., TeamPanel, which comprises four (4) basic control units and one (1) main control unit with dual processors for connecting up to four (4) TeamProcessors, and can be enclosed in a TeamChassis with a built-in front panel.
  • FIG. 1E is a functional block diagram illustrating a modular cascading of a primary TeamPanel and a secondary TeamPanel, accommodating an eight (8) TeamProcessor configuration.
  • FIG. 2A is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising eight (8) TeamProcessors, four (4) SCSI-disk-based TeamServers and two (2) cascaded TeamPanels, all evenly enclosed in two (2) TeamChassis.
  • FIG. 2B is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising four (4) TeamProcessors, two (2) SCSI-disk-based TeamServers and one (1) TeamPanel, all enclosed in one TeamChassis.
  • FIG. 2C is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising twelve (12) TeamProcessors, six (6) SCSI-disk-based TeamServers linked using dual SCSI channels and three (3) cascaded TeamPanels, all evenly enclosed in three (3) TeamChassis.
  • FIG. 3A is a functional block diagram illustrating a methodical implementation of a preferred data structure and data flow onto a preferred eight (8) TeamProcessor server array in which a plurality of underlying functions for use with internal operations, fail-over, load balance, security, management and optimal performance measurements can all be installed.
  • FIG. 3B is a functional block diagram illustrating a workgroup server cluster comprising a plurality of single-application workgroup server arrays, each providing a mutually exclusive database segment based on the optimal performance measurement, so that inter-workgroup-based underlying functions, such as high availability and scalability, can be installed.
  • FIG. 4 is a functional block diagram illustrating a preferred integration of various security-zone-based application-oriented workgroup server clusters and backend database servers using FC-AL hubs or FC switches, creating a preferred data center/warehouse configuration in a distributed computing environment for web-based mission-critical applications.
  • Reference will be made to the preferred embodiments of the invention illustrated in FIGs. 1-4, which are based on team/workgroup computers used as the preferred building blocks of a workgroup server array.
  • a team/workgroup computer system is a group of computers that are workgrouped together via a workgroup peer-to-peer link and can all be connected to a number of direct-access workgroup servers via a workgroup server link.
  • the details are described in Applicant's Patent No. 5,530,892 entitled "SINGLE CHASSIS MULTIPLE COMPUTER SYSTEM HAVING SEPARATE DISPLAYS AND KEYBOARDS WITH CROSS INTERCONNECT SWITCHING FOR WORK GROUP COORDINATOR" and in Applicant's Patent No. 5,802,391.
  • Each TeamProcessor, based on a particular OS, is installed with that particular OS-centric workgroup server link interface card, i.e., a TeamServer card, to recognize all the TeamServers as direct-access local drives.
  • each TeamServer has only one primary TeamProcessor that has the absolute privilege to read, write and create files.
  • one physical hard disk drive, as well as a fault-tolerant disk array, can be partitioned and formatted into multiple logical drives, each logical drive being controlled by a different TeamProcessor as its primary processor. Even though all of these TeamProcessors are connected on the internal network link and installed with a network operating system, these TeamServers are not mapped as network-accessible drives across TeamProcessors.
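To make the primary/secondary privilege scheme concrete, here is a minimal Python sketch (illustrative only, not code from the patent; the class and method names are hypothetical) of a TeamServer logical drive with exactly one read-write primary TeamProcessor and read-only direct access for the rest of the workgroup:

```python
# Illustrative sketch of the TeamServer privilege model (hypothetical names):
# each logical drive has exactly one primary TeamProcessor with read/write/create
# privileges; every other TeamProcessor sees it as a read-only direct-access drive.

class TeamServerVolume:
    def __init__(self, name, primary_teamprocessor):
        self.name = name
        self.primary = primary_teamprocessor   # sole read-write owner
        self.files = {}

    def write(self, teamprocessor, filename, data):
        if teamprocessor != self.primary:
            raise PermissionError(
                f"{teamprocessor} is not the primary of {self.name}: read-only access")
        self.files[filename] = data

    def read(self, teamprocessor, filename):
        # any workgrouped TeamProcessor may read directly, without NOS drive mapping
        return self.files[filename]

ts1 = TeamServerVolume("TeamServer1", "TP1")
ts1.write("TP1", "batch.txt", "transactions")    # allowed: TP1 is the primary
print(ts1.read("TP5", "batch.txt"))              # allowed: read-only direct access
# ts1.write("TP5", "batch.txt", "x") would raise PermissionError
```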
  • a TeamPro computer contains multiple TeamProcessors, all enclosed in one workgroup TeamChassis as described in Applicant's Patent No. 5,577,205 entitled "CHASSIS FOR A MULTIPLE COMPUTER SYSTEM".
  • the TeamPro computer is further equipped with a monitoring and management device, i.e., TeamPanel, as a means to control and interface with each TeamProcessor through one console monitor and one RAP (remote-access-port)-based device, which is comprised of two (2) serial ports, one (1) keyboard, one (1) system LED, one (1) buzzer and one (1) reset button as described in Patent No.
  • the preferred team/workgroup-computer-based TeamProcessor, based on a PC computing platform, generally contains a one-way, two-way or four-way Intel Pentium CPU WinNT PCI-based motherboard with 128 MB RAM, a floppy disk interface module, an IDE interface module, a VGA card module, a sound card module, a USB module, a parallel interface module, a RAP module, a network link LAN module using Ethernet, a workgroup peer-to-peer link module using Ethernet, a workgroup peer-to-peer link module using SCSI and a workgroup server link module using SCSI.
  • a TeamProcessor is further equipped with module-based external peripheral drives and devices, such as floppy disk, IDE disk and optical drives, a VGA monitor, a USB-based digital camera, a mouse, network Ethernet-based hubs and switches, SCSI disk and tape drives, a printer and a set of speakers.
  • the preferred workgroup computer chassis, i.e., TeamChassis, can enclose two (2) motherboard-based TeamProcessors with various module-based drives and devices.
  • a TeamChassis can further be equipped with internal redundant power supplies, smart power management, hot-swappable disks and fans, and an external UPS.
  • the maximum number of individual TeamProcessors that can be workgrouped together to form a workgroup server array is constrained by the internal workgroup server link. If the workgroup server link uses SCSI-II, the effective length to ensure proper data transmission is six (6) meters and the number of nodes that can be attached is sixteen (16). That is why a TeamChassis, which can enclose at least two (2) TeamProcessors, is used to support a better workgroup peer-to-peer link-based SCSI cabling scheme: the first TeamProcessor brings the cable in from outside and the second extends the cable back out for external connection. The same TeamChassis can also house four (4) CPU-card-based TeamProcessors, allowing the SCSI cable to be even shorter.
  • another option is Ultra-wide LVD SCSI, which has a maximum data rate of 160 MB/sec with a cable length of up to twelve (12) meters.
  • FIG. 1C shows a preferred workgroup link integration, in which eight (8) preferred TeamProcessors are linked by a workgroup peer-to-peer link using SCSI and four (4) SCSI hard-disk-based TeamServers are linked by a workgroup server link using SCSI. These TeamProcessors and TeamServers are connected together using the same SCSI cable. By doing so, every TeamProcessor can directly access each TeamServer without involving any other TeamProcessor, even the primary TeamProcessor that has the absolute privileges. As illustrated in FIG. 1C, each SCSI-disk-based TeamServer has two (2) logical drives, and each TeamProcessor is allocated one logical drive over which it is enabled with absolute privilege. A TeamServer can only be accessed in a read-only fashion by the other, non-primary TeamProcessors.
  • FIG. 1C also illustrates the workgroup peer-to-peer link using Ethernet via TeamLink cards with an Ethernet hub, so that if the workgroup peer-to-peer link using SCSI is faulty, the workgroup peer-to-peer link using Ethernet can serve as the alternative communication link, or vice versa.
  • the major benefit of implementing the workgroup peer-to-peer link using Ethernet is that inter-TeamProcessor communications within the workgroup won't adversely affect the network traffic, or other workgroups' inter-TeamProcessor communications.
  • the workgroup peer-to-peer link using Ethernet can accommodate various inter-TeamProcessor communications, such as mapped-drive-based, socket-based, and security-encryption/decryption-based.
  • peripheral buses besides SCSI can also be adopted as the de facto link that merges the workgroup peer-to-peer link and the workgroup server link together, as long as their data-link layer is capable of implementing storage-based and communication-based protocols, either standardized or proprietary.
  • the workgroup peer-to-peer link based on any of the applicable peripheral buses may not be necessary, as long as the workgroup server link and the workgroup peer-to-peer link using Ethernet are established.
  • FIG. 1D illustrates the preferred version of the TeamPanel, which comprises four (4) basic control units and one main control unit, and connects up to four (4) TeamProcessors via the RAP, VGA, USB and audio ports.
  • each basic control unit contains a microprocessor and three (3) switches controlled by that microprocessor, allowing the VGA, audio and USB signals to flow onto the common VGA, audio and USB buses that link to the other basic control units and the main control unit.
  • there is also an I2C link, which connects to the other basic control units and the main control unit, and a set of ten (10) interface signals, which connect to the front panel.
  • the preferred main control unit may contain dual microprocessors for fault-tolerance, which provide the physical layer interfaces to hook up with a keyboard, serial-based devices and a printer, categorized as the workgroup sharable devices among workgrouped TeamProcessors.
  • the main control unit also keeps various status tables for tracking each workgrouped TeamProcessor's vital signs, CPU load and activities, as well as usage tables for supervising the common buses and peripheral devices, so that after checking the tables for no conflicting usage, it can allow requests from TeamProcessors to be carried out sequentially.
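A rough sketch of this table-driven supervision, assuming only that requests are granted or refused against per-resource usage tables (all names here are hypothetical, since the patent does not prescribe an implementation):

```python
# Hedged sketch of the main control unit's usage-table arbitration: a request to
# use a shared bus or device is granted only when no conflicting usage is
# recorded, so granted requests proceed one at a time per resource.

class MainControlUnit:
    def __init__(self):
        self.usage = {}            # resource -> TeamProcessor currently using it
        self.status = {}           # TeamProcessor -> latest reported vital signs

    def report_status(self, tp, vitals):
        self.status[tp] = vitals   # e.g. {"cpu": 0.42, "alive": True}

    def request(self, tp, resource):
        if self.usage.get(resource) is None:
            self.usage[resource] = tp      # grant: no conflicting usage
            return True
        return False                       # busy: caller retries later

    def release(self, tp, resource):
        if self.usage.get(resource) == tp:
            self.usage[resource] = None

mcu = MainControlUnit()
assert mcu.request("TP3", "usb_bus")       # granted
assert not mcu.request("TP7", "usb_bus")   # denied while TP3 holds the bus
mcu.release("TP3", "usb_bus")
assert mcu.request("TP7", "usb_bus")       # granted after release
```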
  • the preferred front panel contains two interactive push-buttons: one for selecting the chosen TeamProcessor for the external VGA-based monitor to display and for the external keyboard and mouse to control, the other for resetting the chosen TeamProcessor.
  • Both the TeamPanel functional board and the front-panel are enclosed in a TeamChassis so that the cabling scheme is easier to arrange.
  • the default TeamProcessor that controls the TeamPanel is called TeamManager.
  • any TeamProcessor can first transfer the message to its attached control unit via COM2 of RAP, and then the control unit repacks the message with an I2C protocol header and notifies the main control unit via the TeamPanel internal link using I2C.
  • the basic control unit can communicate directly with the TeamManager through the TeamPanel internal I2C link, thereby, for instance, reporting the current status of its attached TeamProcessor.
  • the TeamPanel internal link can be used as an alternative communication link to workgroup peer-to-peer links using SCSI and Ethernet.
  • one preferred enhancement is to replace the COM1-based mouse device with a USB-based mouse. Therefore, if COM2 of RAP should fail, COM1 of RAP can take over and provide the data communication between the TeamProcessor and its attached basic control unit.
  • FIG. 1E shows two (2) TeamPanels cascaded together to connect eight (8) preferred workgrouped TeamProcessors.
  • the first TeamPanel, i.e., TP-408M, and the second TeamPanel, i.e., TP-408C, are connected via the common VGA, audio, USB and I2C buses; TP-408C doesn't have a main control unit, so the main control unit in TP-408M will supervise all the basic control units in TP-408C.
  • the TeamManager which controls the first TeamPanel will also be the TeamManager of the second TeamPanel.
  • any TeamProcessor of the second TeamPanel will first transfer the message to its attached control unit via COM2 of RAP, and then the control unit repacks the message with an I2C protocol header and notifies the main control unit in the first TeamPanel via the internal I2C link. Once the main control unit allows the linkage to take place, the basic control unit of the second TeamPanel can communicate directly with the TeamManager of the first TeamPanel through the TeamPanel internal I2C link. Based on the same scenario, any particularly configured workgroup server array can be accommodated either by a single TeamPanel or by multiple TeamPanels cascaded together.
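The repack-and-relay pattern can be illustrated as follows. The 3-byte header layout is an assumption made for the example; the patent does not specify the actual I2C frame format used between control units:

```python
# Illustrative sketch (assumed wire format, not the patent's) of how a basic
# control unit might repack a COM2/RAP message with an I2C-style header before
# relaying it to the main control unit across cascaded TeamPanels.

def pack_i2c(src_unit: int, dst_unit: int, payload: bytes) -> bytes:
    # hypothetical 3-byte header: source address, destination address, length
    if len(payload) > 255:
        raise ValueError("payload too long for a single frame")
    return bytes([src_unit, dst_unit, len(payload)]) + payload

def unpack_i2c(frame: bytes):
    src, dst, length = frame[0], frame[1], frame[2]
    return src, dst, frame[3:3 + length]

frame = pack_i2c(src_unit=0x25, dst_unit=0x10, payload=b"STATUS TP5 OK")
print(unpack_i2c(frame))   # (37, 16, b'STATUS TP5 OK')
```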
  • each TeamPanel can be enclosed in each TeamChassis, or can be extended to an external box for easy monitoring and control of multiple TeamPanels.
  • Multiple TeamChassis that contain all the workgroup server array's TeamProcessors can be housed in a TeamRack, which can also house additional TeamServers in additional TeamChassis and is further equipped with a cable distribution box that houses all the inter-TeamChassis cables, as well as all the incoming and outgoing cables.
  • FIG. 2A is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising eight TeamProcessors, four SCSI-disk-based TeamServers and two cascaded TeamPanels, enclosed in two TeamChassis that can be further housed in a TeamRack.
  • FIG. 2B is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising four TeamProcessors, two SCSI-disk-based TeamServers and one TeamPanel, enclosed in one TeamChassis that can be further housed in a TeamRack.
  • FIG. 2C is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising twelve TeamProcessors, six SCSI-disk-based TeamServers linked by two workgroup server links using dual SCSI channels, and three cascaded TeamPanels, enclosed in three TeamChassis that can be further housed in a TeamRack.
  • FIG. 3A illustrates a preferred configuration with defined data flows, which are designed to carry out various underlying functions using an eight (8)-TeamProcessor workgroup server array as shown in FIG. 2A.
  • the eight (8) TeamProcessors can be functionally classified into two groups: 1) application/file service processors (TP1-TP4), and 2) database/file service/load balance/firewall processors (TP5-TP8).
  • Each TeamProcessor has its primary SCSI-disk-based TeamServer, which can be operated as a read-only TeamServer, hereinafter referred to as a secondary TeamServer, by the other seven TeamProcessors.
  • each TeamProcessor will recognize one IDE-based system drive, together with one primary TeamServer and seven secondary TeamServers, functioning as workgroup direct-access servers without using the NOS mapping scheme.
  • the above primary and secondary TeamServers accessed by all the workgrouped TeamProcessors can also be implemented with multiple fault-tolerant disk arrays and with dual-channel TeamServer cards to distribute traffic on two SCSI channels.
  • Application/file-service-based TeamProcessors TP1-TP4 are each capable of handling HTTP-based application-oriented web queries from the Internet and generating transaction batch files that are written onto both the system IDE drive and the TeamProcessor's primary TeamServer.
  • TeamProcessors TP5 and TP7 each maintain an application-specific workgroup database installed on their respective primary TeamServers. These two databases are basically the same at the end of the day.
  • the database controlled by TP5 is updated during the day, based on each batch transaction file generated on TeamServer1-TeamServer4, within a defined time period (t).
  • the database controlled by TP7 is updated at the end of the day, based on all the batches generated on TeamServer1-TeamServer4 during the day.
  • TP6 will handle mostly FTP-based database-oriented web queries from the Intranet, so that TP5 can retrieve from TeamServer6 and update the database every t period.
  • TP5 will also update the database instantly in response to proprietary real-time socket-port-based database queries from the Intranet.
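The division of labor between TP5 (folding in batches every t period) and TP7 (replaying the whole day's batches at end of day) can be sketched as below. This illustrates the policy only, with hypothetical names, and assumes batches are applied in order:

```python
# Minimal sketch of the dual-database update policy: TP5 applies each batch as
# it is collected during the day, while TP7 replays all of the day's batches
# once at end of day, so the two databases converge on the same state.

class WorkgroupDatabase:
    def __init__(self, name):
        self.name, self.applied = name, []

    def apply_batch(self, batch):
        self.applied.append(batch)

db_tp5 = WorkgroupDatabase("TeamServer5-db")   # updated every t period by TP5
db_tp7 = WorkgroupDatabase("TeamServer7-db")   # updated at end of day by TP7
day_batches = []

def every_t_period(new_batches):
    """TP5's duty: fold freshly collected batches from TeamServer1-4 into its DB."""
    for b in new_batches:
        db_tp5.apply_batch(b)
    day_batches.extend(new_batches)

def end_of_day():
    """TP7's duty: replay the whole day's batches in order."""
    for b in day_batches:
        db_tp7.apply_batch(b)

every_t_period(["batch-TS1-0900", "batch-TS2-0900"])
every_t_period(["batch-TS3-1000"])
end_of_day()
assert db_tp5.applied == db_tp7.applied   # both databases end the day identical
```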
  • TP8 will be the default TeamProcessor, i.e., the TeamManager that controls the two TeamPanels. Based on the preferred server-pair configuration, a number of unique functional services can be established for the inventive workgroup server array, hereinafter referred to as WSA.
  • TeamManager TP8 coordinates all workgrouped TeamProcessors and generates management-based activities.
  • the activities include the monitoring of each TeamProcessor's inventory, disk space and CPU usage, which can be generated by the installed OS on each TeamProcessor, as well as alerts of intrusion, removal and failure that may take place on each workgrouped TeamProcessor.
  • Each TeamProcessor will routinely pack its management-based status information and send it via COM2 of RAP to its control unit, which notifies the main control unit and waits for the OK-to-send instruction from the main control unit via the TeamPanel internal I2C link.
  • the TeamProcessor's control unit can then communicate directly with the control unit of the TeamManager, which subsequently sends the status information via COM2 of RAP to the TeamManager.
  • TeamManager will always keep a management-based status table regarding all the workgrouped TeamProcessors.
  • One of the preferred methods regarding WSA internal front-panel switching services can be implemented such that, upon a request from itself or any TeamProcessor to check whether a particular TeamProcessor is still functioning, the TeamManager will send the request to the main control unit, which will further send a diagnostic request to the control unit of that particular TeamProcessor. If there is no response, the main control unit will send a notice to the control unit of the TeamManager, which relays the notification to the TeamManager via COM2 of RAP. The TeamManager can then send an alarm message to the LAN-based management console via the network link and wait for the response from the operator. The operator can take over control of the TeamManager via the management console computer by running Carbon Copy or similar software.
  • the TeamManager is equipped with a video capture card, and the common VGA bus is also hooked up to an NTSC converter, so that any TeamProcessor's VGA display can be recaptured into the TeamManager's VGA display. Therefore, the TeamManager can be instructed to capture the screen display of the failed TeamProcessor by sending a "select" request to the main control unit, which will also allow the subsequent communication from the control unit of the TeamManager to the control unit of the failed TeamProcessor. The operator can also send keyboard strokes to that failed TeamProcessor, act accordingly, and save a diagnosis file on the TeamManager for further analysis. If the operator should decide to reset the failed TeamProcessor, the TeamManager will be instructed to send a "Reset" command to the control unit of the failed TeamProcessor.
  • That particular control unit will trigger the reset line that links directly to that failed TeamProcessor and reset it.
  • the booting up process can be captured, displayed and saved on TeamManager, so that the operator at the remote management console computer can watch and interact step-by-step with the boot-up process.
  • technical personnel can further analyze the saved diagnosis files to determine the location of the problem and derive the solution.
  • One of the preferred methods regarding WSA onsite front-panel switching services can be implemented such that a local onsite operator can use the front panel on the TeamChassis to view, control and reset any of the TeamProcessors using the TeamPanel-based workgroup devices, such as a VGA monitor, a set of speakers, a keyboard and a mouse.
  • Upon any push-button request on the panel for "select" or "reset", whose signals link directly to the main control unit, the main control unit will first check the usage table, if applicable, for no conflicting usage and then set the related LED blinking. If the push-button activation is intended, the local operator will push the button one more time to trigger the action, and the related LED will be set on.
  • WSA remote front-panel switching services can be implemented such that any remote computer can take control of the TeamManager or any of the TeamProcessors via an external modem attached to the workgroup-based serial link, based on encrypted proprietary access codes. Once the communication is established, the remote computer can perform all the same functions as a LAN-based management console computer.
  • WSA device-sharing services can be implemented such that peripheral devices in a WSA can be accessed by the TeamManager and any of the other TeamProcessors.
  • the TeamProcessor sends a request message through COM2 of RAP to its control unit, and the control unit will send a request to the main control unit via the internal I2C link.
  • the main control unit will allow the subsequent communication from that particular control unit to the main control unit and will relay the data to the attached printer via the built-in parallel interface. Similar processes can be implemented for other serial-port devices.
  • a particular TeamProcessor sends a request through COM2 of RAP to its control unit, and the control unit will send it to the main control unit. If the USB device is available after checking its usage table, the main control unit will send an OK signal back to that control unit, which then turns on the USB switch on board. In so doing, the USB interface on that particular TeamProcessor can directly hook up with the workgroup-based USB device, such as a camcorder, via the common USB bus.
  • WSA fail-over scheme-based services can be implemented such that mission-critical components in a WSA, such as TeamChassis, TeamPanel, TeamProcessor and TeamServer, are either fault-tolerant or fail-over capable, so that mission-critical applications won't be disrupted.
  • for the TeamPanel, the mission-critical capability is related to its main control unit, which has dual microprocessors, so if the first one should fail, the second one can take over and send an alarm to the TeamManager, which can further notify the management console.
  • the TeamChassis is fault-tolerant due to the fact that it is equipped with dual power supplies and an external UPS.
  • there are eight (8) fail-over groups, each pairing a TeamProcessor's IDE system drive with its primary TeamServer: IDE1 in TeamProcessor1 and TeamServer1; IDE2 and TeamServer2; IDE3 and TeamServer3; IDE4 and TeamServer4; IDE5 and TeamServer5; IDE6 and TeamServer6; IDE7 and TeamServer7; and IDE8 and TeamServer8. Therefore, if TeamServer1 should fail, the other TeamProcessors can still get the information from TeamProcessor1 on IDE1. If IDE1 should fail, the other TeamProcessors can get the information directly from TeamServer1.
  • the same scenario applies to the other seven (7) fail-over groups.
  • the database on TeamServer5 is controlled by TP5 and the database on TeamServer7 is controlled by TP7, and they are basically the same application-specific databases, as discussed earlier. However, if Database-TP5 should fail, Database-TP7 will immediately be updated by TeamProcessor7, based on all the related batch files collected from TeamServer1 to TeamServer6, and instantly become ready for service.
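A small sketch of the pairing logic under illustrative assumptions: because each transaction batch is written to both members of an IDE/TeamServer fail-over group, a read can fall back to whichever member survives:

```python
# Hedged sketch (hypothetical names) of the IDE/TeamServer fail-over pairing:
# each batch exists on both members of a pair, so reads fall back to the
# surviving member when one store fails.

FAILOVER_PAIRS = {f"IDE{i}": f"TeamServer{i}" for i in range(1, 9)}

def read_batch(store, failed, fetch):
    """Read from `store`; if it has failed, fall back to its fail-over partner."""
    if store not in failed:
        return fetch(store)
    partner = FAILOVER_PAIRS.get(store) or next(
        ide for ide, ts in FAILOVER_PAIRS.items() if ts == store)
    if partner in failed:
        raise RuntimeError(f"both {store} and {partner} have failed")
    return fetch(partner)

fetch = lambda s: f"batch data from {s}"          # stand-in for a real disk read
print(read_batch("TeamServer1", {"TeamServer1"}, fetch))   # falls back to IDE1
```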
  • One of the preferred methods regarding WSA application-based load balancing services can be implemented such that application-based TeamProcessors in a WSA can be load-balanced by using TeamPanels.
  • application-based query requests come from the Internet using the HTTP protocol. The incoming query-based traffic will first go through the routers.
  • the router then sends all the requests to TeamManager TP8.
  • the TeamManager can then distribute incoming traffic loads to TP1, TP2, TP3 and TP4 via internal FTP ports or proprietary ports over the workgroup peer-to-peer link using Ethernet.
  • TeamManager (TP8) maintains a round-robin-based load-balance status table and the main control unit of the TeamPanel maintains various vital sign status tables, based on each application-based TeamProcessor's CPU usage and response time.
  • since any workgrouped TeamProcessor will routinely transfer vital signs and the like to its attached control unit via COM2 of RAP, the control unit will repack the data and notify the main control unit. Once the main control unit allows the linkage to take place, the basic control unit can download the data to the main control unit's memory buffers, which can be allocated for various vital-sign status tables. Based on these real-time status tables, the main control unit can detect which TeamProcessor may have failed or become overloaded. When either situation happens, the main control unit will report it to the TeamManager.
  • the TeamManager will immediately take the TeamProcessor in question out of the round-robin sequence, until notice from the main control unit is again received as to returning that particular TeamProcessor to the round-robin sequence. If it is a failure situation, the TeamManager will try to establish communication with that particular TeamProcessor via the workgroup peer-to-peer link. If there is no response, the TeamManager will notify the main control unit to reset the TeamProcessor via the "reset" line of RAP, resulting in partial or full recovery, and act accordingly.
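The TeamManager's round-robin bookkeeping might look roughly like the following sketch (hypothetical class and method names; the patent describes the behavior, not an implementation):

```python
# Illustrative round-robin balancer in the spirit of the TeamManager's table:
# failed or overloaded TeamProcessors are taken out of the rotation and put
# back when the main control unit reports them healthy again.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, processors):
        self.active = list(processors)
        self._cycle = cycle(self.active)

    def _rebuild(self):
        self._cycle = cycle(self.active)

    def remove(self, tp):                  # on a failure or overload report
        if tp in self.active:
            self.active.remove(tp)
            self._rebuild()

    def restore(self, tp):                 # when vital signs recover
        if tp not in self.active:
            self.active.append(tp)
            self._rebuild()

    def next_target(self):
        if not self.active:
            raise RuntimeError("no application TeamProcessors available")
        return next(self._cycle)

lb = RoundRobinBalancer(["TP1", "TP2", "TP3", "TP4"])
lb.remove("TP2")                           # main control unit reported a failure
print([lb.next_target() for _ in range(4)])   # TP2 is skipped until restored
```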
  • WSA file and database services can be implemented such that the file and database on any particular TeamServer can be directly accessed and shared among TeamProcessors. This is done by installing read-only database engines on all the TeamProcessors for their direct-access-based secondary TeamServers, while the primary TeamProcessor is installed with the full-fledged database engine, which has the absolute privileges applied to the database on its primary TeamServer.
  • the TeamManager keeps a series of status and usage tables for all the attached facilities. One of the tables keeps a concurrent listing of every TeamServer's primary TeamProcessor, so that no double-write data-integrity breakdown can occur on any of the TeamServers.
  • TeamManager will always ensure that there is only one TeamProcessor that can update a particular TeamServer at any given time.
  • One of the preferred methods regarding WSA security services can be implemented such that any unauthorized intrusion into a WSA will be detected. Since TeamManager TP8 will be receiving all the incoming requests and distributing the load among TeamProcessors, it is imperative that the TeamManager be installed with security enhancement and firewall capability to ward off any possible external attacks.
  • TeamManager TP8 can filter out any questionable incoming request by implementing SSL-based, OS-based or higher-level application-based access-encrypted security measures, and redirect the legitimate requests to the application-based TeamProcessors via the workgroup peer-to-peer link using Ethernet, segregating them into two different security-based zones.
  • Each application-based TeamProcessor comes up with the reply, which may involve accessing the application-specific database, and sends it back to the requester, including the correct internal IP address, with content-encrypted security measures.
  • the TeamManager can decrypt the content and redirect it to the right TeamProcessor, which handled the previous request.
  • This type of sticky-port approach, known as persistent sessions and based on factors such as the source IP address and special information contained in the user-authentication-device request protocol or in returned cookies, can also be securely implemented, which is essential for running web-based e-commerce application services efficiently.
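A minimal sketch of persistent-session ("sticky") routing keyed by source IP address or a returned cookie. Hashing the key on first contact is an assumption made for the example; the patent only names the factors on which the pinning is based:

```python
# Hedged sketch of persistent-session routing: repeat requests are pinned to
# the TeamProcessor that served the first request, keyed by source IP or a
# returned cookie token. Purely illustrative, hypothetical names.

import hashlib

class StickyRouter:
    def __init__(self, processors):
        self.processors = processors
        self.sessions = {}                 # session key -> pinned TeamProcessor

    def route(self, source_ip, cookie=None):
        key = cookie or source_ip          # prefer the cookie when one was returned
        if key not in self.sessions:
            # first contact: pick deterministically by hashing the session key
            idx = int(hashlib.sha256(key.encode()).hexdigest(), 16)
            self.sessions[key] = self.processors[idx % len(self.processors)]
        return self.sessions[key]

router = StickyRouter(["TP1", "TP2", "TP3", "TP4"])
first = router.route("203.0.113.7")
assert router.route("203.0.113.7") == first   # same client sticks to the same TP
```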
  • One of the preferred methods regarding WSA fail-over services can be implemented such that a number of agent-based management software programs, i.e., TeamSoft, are devised to be incorporated with all the above functional services based on the defined data structure and data flow of the preferred configuration. Only the current TeamManager will be installed with the server portion of TeamSoft, while the rest of the TeamProcessors will be installed with the client portion of TeamSoft. As long as there is one TeamProcessor active, the remote management console computer can take control of that TeamProcessor and make it serve as the TeamManager, so that it can reboot any failed TeamProcessor and bring the inventive workgroup server array back to normal functioning.
  • each TeamProcessor can initiate the detection of whether its fail-over counterpart is alive via the TeamManager. If it is not alive, that TeamProcessor will assume the tasks that its failed counterpart was servicing. For example, if TP5 should fail, the TeamManager will assign TP6 the privilege over TeamServer5 and the task of updating the database. If TP6 should fail, the TeamManager will assign TP5 the privilege over TeamServer6 and redirect TP6 traffic to TP5 by answering incoming requests with TP5's IP address instead of TP6's.
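The TP5/TP6 example can be condensed into a short sketch that combines the single-writer table kept by the TeamManager with counterpart takeover (the table layout and function names are illustrative assumptions):

```python
# Sketch of the single-writer invariant plus fail-over takeover: the table
# records exactly one primary TeamProcessor per TeamServer, and a failed
# primary's privilege is reassigned to its designated counterpart.

primary_of = {"TeamServer5": "TP5", "TeamServer6": "TP6"}
counterpart = {"TP5": "TP6", "TP6": "TP5"}

def can_write(tp, teamserver):
    # only the recorded primary may update, preventing double-write corruption
    return primary_of.get(teamserver) == tp

def handle_failure(failed_tp):
    # reassign every TeamServer owned by the failed TeamProcessor to its counterpart
    for ts, owner in primary_of.items():
        if owner == failed_tp:
            primary_of[ts] = counterpart[failed_tp]

assert can_write("TP5", "TeamServer5") and not can_write("TP6", "TeamServer5")
handle_failure("TP5")
assert can_write("TP6", "TeamServer5")    # TP6 now holds TeamServer5's privilege
```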
  • the TeamSoft also includes workgroup diagnosis of problems, with automatic corrective action built in.
  • WSA performance-gauging services can be implemented such that the optimal performance in a WSA can be obtained by adjusting the values of some key parameters.
  • the inventive workgroup server array's performance hinges on the following three factors: 1) the TeamManager firewall operation, 2) the number of application-based TeamProcessors, and 3) the size of the application-specific database. If the firewall operation installed in TeamManager TP8 takes too much time fulfilling content-decryption security and upper-layer access-based security, it will decrease the number of incoming requests served per minute. However, this issue can be resolved by attaching firewall-based routers, which can perform network-layer filtering as well as upper-layer filtering.
  • if the number of application-based TeamProcessors decreases, the number of outgoing replies per minute will decrease.
  • if the application-centric database is constructed for non-loyalty traffic, it tends to render only ready-made information, and may grow occasionally to keep satisfying non-loyalty-based traffic; if it is constructed for loyalty-based traffic, the database is going to grow considerably.
  • the time needed to retrieve data from the database to form a reply page is not an issue, because the database on a TeamServer can be readily accessed without depending on any other TeamProcessor.
  • there are two scenarios: 1) non-loyalty application-based and 2) loyalty application-based.
  • in the non-loyalty scenario, the optimal performance of the inventive workgroup server array depends on the number of application-based TeamProcessors. Based on the computing power and the degree of complexity of the service, one TeamProcessor can handle X incoming requests and produce X outgoing replies in one minute without degrading the service, which is considered the acceptable quality of service (QOS). Therefore, four TeamProcessors can accommodate 4X incoming requests in steady-state operation.
  • the inventive workgroup server array can still accommodate peak-time operation by assigning TP6 and TP7 as application-based TeamProcessors and joining them into the round-robin load-balancing algorithm operated by the TeamManager.
  • a 12-TeamProcessor-based workgroup server array, in which eight (8) out of the twelve (12) are application-based TeamProcessors, can accommodate 8X non-loyalty traffic in steady-state operation and 10X traffic in peak-time operation. If the incoming traffic is more than 10X, then a second workgroup server array is needed.
  • in the loyalty scenario, the optimal performance of the inventive workgroup server array depends on the number of application-based TeamProcessors and the size of the loyalty-based database. If the size of the database is too large and the number of incoming requests generated is more than all the TeamProcessors can handle, then the database needs to be downsized to satisfy steady-state operation, and the excess should move to a second workgroup server array.
  • a 12-TeamProcessor-based workgroup server array can accommodate 8X loyalty-based traffic, which can be converted into Y loyalty-based users that can be installed on the application-centric database. In the peak-time situation, Y users will generate 10X loyalty-based traffic, which still meets the acceptable QOS.
  • the inventive workgroup server array can always re-adjust the X and Y numbers to ensure the acceptable quality of service, based on the information gathered by the TeamManager. Therefore, the performance measurements for the inventive workgroup server array are the parameters X and Y, from which the optimal operating point, as well as the prediction of problems requiring increased resources, can be derived.
  • even if the degree of service is higher, which may lower the X and Y numbers, the QOS of the inventive workgroup server array will still be intact.
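The X and Y arithmetic above can be captured in a few lines. The numeric value of X below is a placeholder, since the patent leaves X and Y to be measured for each deployment:

```python
# A small calculator following the patent's own arithmetic: each application
# TeamProcessor sustains X requests per minute at acceptable QOS, and spare
# TeamProcessors join the rotation at peak time.

def steady_state_capacity(app_processors, x):
    return app_processors * x              # e.g. 4 processors -> 4X

def peak_capacity(app_processors, spare_processors, x):
    return (app_processors + spare_processors) * x

X = 1000                                   # hypothetical requests/min per TP
print(steady_state_capacity(4, X))         # 4X: the 8-TP array, TP1-TP4
print(peak_capacity(4, 2, X))              # 6X: TP6 and TP7 join at peak time
print(steady_state_capacity(8, X))         # 8X: the 12-TP array, steady state
print(peak_capacity(8, 2, X))              # 10X: the 12-TP array at peak time
```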
  • FIG. 3B illustrates a workgroup server cluster comprising a plurality of single-application-based workgroup server arrays, each having a mutually exclusive database segment. Since each workgroup server array is QOS capable, the overall workgroup server cluster is also QOS capable.
  • a highly available and scalable mission-critical web-based application can be accommodated by a workgroup server cluster, which contains the first workgroup server array up to the nth workgroup server array. Since it is loyalty-based, the router can immediately distribute the right incoming traffic to the right TeamManager based on the right IP address, because this information is installed either in the "cookies" of the users' browsers or in the chip-based smart cards that can be used for network access and user authentication. For the non-loyalty-based situation, the router, together with the Domain Name Server (DNS), which converts the URL into IP addresses, can distribute the incoming load to a non-loyalty-based workgroup server cluster's multiple TeamManagers by using the built-in round-robin capability.
  • the database server program should be fast and simple to run, without needing complicated built-in intelligence, because the web-based application is well defined and the database associated with it should also be well defined.
  • the time spent for data retrieval should be as short as possible, so that X and Y can be larger numbers yielding better performance.
  • Shown in FIG. 4 is a preferred embodiment of an overall web-server system for highly available and scalable mission-critical Intranet, Extranet and Internet applications, integrating multiple serial-chained and parallel-chained workgroup server clusters and creating an ideal and secure distributed computing environment.
  • the inter-communication among different workgroup server clusters can be implemented securely by using proprietary ports with SSL-based, OS-based or application-based content and access security measures, so that no foreign communication is allowed to access any workgroup server cluster.
  • each workgroup server array's TeamServers, whether hard-disk-based, tape-based or optical-disk-based, can all be converted into FC devices, which can then be accessed and maintained by any of the SAN-based (Storage Area Network) backend database processors.
  • in this way, every workgroup server array's application-centric file and database servers can serve as data-caching servers for the backend data center, whose SAN-based sophisticated file and database servers are equipped with more intelligent database engines.
  • the present invention incorporates a number of unique components: 1) TeamProcessors, 2) TeamServers and TeamServer cards, 3) TeamPanels, 4) TeamLink cards, 5) TeamChassis, and 6) TeamRack. Based on these unique components, the present invention also employs a number of unique methods to build the preferred workgroup server arrays.
  • They are: 1) WSA server-pair method, 2) WSA multi-workgroup link method, 3) WSA server coordination and supervisory method, 4) WSA internal, onsite and remote "front-panel" switching method, 5) WSA device sharing method, 6) WSA fail-safe and recovery method, 7) WSA load balancing method, 8) WSA file/database sharing method, 9) WSA security-based method, 10) WSA TeamSoft-based management method, and 11) WSA optimal performance-gauging method.
  • WSC: workgroup server cluster.
  • the present invention employs a number of unique methods to build the preferred "Front-Office" web-based server farms. They are: 1) the multiple-WSCs serial-chained method, 2) the multiple-WSCs parallel-chained method, and 3) the multiple serial-chained and parallel-chained WSCs linked with storage area network (SAN) method.
  • the present invention provides a workgroup server array and its related architecture for building various highly available, scalable and mission- critical server clusters in a secure distributed computing environment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)
  • Multi Processors (AREA)

Abstract

The invention relates to a method and apparatus for implementing a workgroup server array, ideal for web-access Intranet, Extranet and Internet applications. The inventive server array comprises a plurality of team/workgroup computers (408), which are equipped with workgroup-based direct-access servers and modular controlling devices (1). The implementation method consists of creating workgroup-based fault-tolerant fail-over capabilities, providing console-based monitoring and management support, and accommodating highly available and scalable web-access applications with optimal performance. The workgroup server arrays can serve as basic building blocks for constructing large-scale server clusters, so that more users can be served concurrently. Furthermore, an architecture based on the workgroup server array is created for building various highly available, scalable and mission-critical server clusters, which deliver distributed computing services for mission-critical Internet and Extranet applications as well as enterprise-specific Intranet applications.
PCT/US2000/013595 1996-11-01 2000-05-17 Method and apparatus for implementing a workgroup server array WO2000072167A1 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
JP2000620492A JP4864210B2 (ja) 1999-05-20 2000-05-17 Method and apparatus for implementing a workgroup server
KR1020017000934A KR20010074733A (ko) 1999-05-20 2000-05-17 Method and apparatus for implementing a workgroup server array
CA002338025A CA2338025C (fr) 1999-05-20 2000-05-17 Method and apparatus for implementing a workgroup server array
AU52738/00A AU5273800A (en) 1999-05-20 2000-05-17 A method and apparatus for implementing a workgroup server array
EP00937591A EP1114372A4 (fr) 1999-05-20 2000-05-17 Method and apparatus for implementing a workgroup server array
US09/744,194 US6715100B1 (en) 1996-11-01 2000-05-17 Method and apparatus for implementing a workgroup server array

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13531899P 1999-05-20 1999-05-20
US60/135,318 1999-05-20

Publications (1)

Publication Number Publication Date
WO2000072167A1 true WO2000072167A1 (fr) 2000-11-30

Family

ID=22467552

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/013595 WO2000072167A1 (fr) 1996-11-01 2000-05-17 Method and apparatus for implementing a workgroup server array

Country Status (7)

Country Link
EP (1) EP1114372A4 (fr)
JP (1) JP4864210B2 (fr)
KR (1) KR20010074733A (fr)
CN (1) CN1173281C (fr)
AU (1) AU5273800A (fr)
CA (1) CA2338025C (fr)
WO (1) WO2000072167A1 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002288058A (ja) * 2001-01-25 2002-10-04 Yahoo Inc High-performance client-server communication system
FR2832235A1 (fr) * 2001-11-09 2003-05-16 Dell Products Lp System and method for using system configurations in a modular computer system
EP1428149A1 (fr) * 2001-09-21 2004-06-16 Polyserve, Inc. System and method for a multi-node environment with shared storage
WO2003014893A3 (fr) * 2001-08-10 2004-07-29 Sun Microsystems Inc Computer systems
WO2008157667A1 (fr) * 2007-06-21 2008-12-24 Microsoft Corporation Meter for computing hardware
WO2008157746A1 (fr) * 2007-06-21 2008-12-24 Microsoft Corporation Metered pay-as-you-go computing experience
US7865326B2 (en) 2004-04-20 2011-01-04 National Instruments Corporation Compact input measurement module
US9247079B2 (en) 2013-01-28 2016-01-26 Kyocera Document Solutions Inc. Information processing apparatus

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100334546C (zh) * 2003-07-08 2007-08-29 Lenovo (Beijing) Co., Ltd. Method and apparatus for enabling a cluster monitoring system to use multiple database systems
KR100609082B1 (ko) * 2004-07-16 2006-08-08 Semiline Co., Ltd. Mission-critical production facility management apparatus
US7373433B2 (en) * 2004-10-22 2008-05-13 International Business Machines Corporation Apparatus and method to provide failover protection in an information storage and retrieval system
US8332925B2 (en) * 2006-08-08 2012-12-11 A10 Networks, Inc. System and method for distributed multi-processing security gateway

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5283897A (en) * 1990-04-30 1994-02-01 International Business Machines Corporation Semi-dynamic load balancer for periodically reassigning new transactions of a transaction type from an overload processor to an under-utilized processor based on the predicted load thereof
JPH04148363A (ja) * 1990-10-11 1992-05-21 Toshiba Corp Multicomputer system
JPH0756838A (ja) * 1993-08-11 1995-03-03 Toshiba Corp Distributed server control device
US5768623A (en) * 1995-09-19 1998-06-16 International Business Machines Corporation System and method for sharing multiple storage arrays by dedicating adapters as primary controller and secondary controller for arrays reside in different host computers
JPH09160885A (ja) * 1995-12-05 1997-06-20 Hitachi Ltd Load balancing method for a cluster-type computer system
US5704032A (en) * 1996-04-30 1997-12-30 International Business Machines Corporation Method for group leader recovery in a distributed computing environment
US5875290A (en) * 1997-03-27 1999-02-23 International Business Machines Corporation Method and program product for synchronizing operator initiated commands with a failover process in a distributed processing system
JPH1165862A (ja) * 1997-08-14 1999-03-09 Nec Corp Multiprocessor resource partition management scheme

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5530892A (en) * 1993-03-16 1996-06-25 Ht Research, Inc. Single chassis multiple computer system having separate displays and keyboards with cross interconnect switching for work group coordinator
US5802391A (en) * 1993-03-16 1998-09-01 Ht Research, Inc. Direct-access team/workgroup server shared by team/workgrouped computers without using a network operating system
US5612865A (en) 1995-06-01 1997-03-18 Ncr Corporation Dynamic hashing method for optimal distribution of locks within a clustered system
US6049823A (en) * 1995-10-04 2000-04-11 Hwang; Ivan Chung-Shung Multi server, interactive, video-on-demand television system utilizing a direct-access-on-demand workgroup
US5748897A (en) * 1996-07-02 1998-05-05 Sun Microsystems, Inc. Apparatus and method for operating an aggregation of server computers using a dual-role proxy server computer
US5822531A (en) 1996-07-22 1998-10-13 International Business Machines Corporation Method and system for dynamically reconfiguring a cluster of computer systems
US5933596A (en) 1997-02-19 1999-08-03 International Business Machines Corporation Multiple server dynamic page link retargeting
US6070191A (en) * 1997-10-17 2000-05-30 Lucent Technologies Inc. Data distribution techniques for load-balanced fault-tolerant web access

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHUN-HSING WU ET AL.: "A World-Wide Web server on a multicomputer system", IEEE COMPUT. SOC., US, 12 June 1996 (1996-06-12), pages 522 - 528
See also references of EP1114372A4

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002288058A (ja) * 2001-01-25 2002-10-04 Yahoo Inc High-performance client-server communication system
JP4504609B2 (ja) * 2001-01-25 2010-07-14 Yahoo! Inc. High-performance client-server communication system
WO2003014893A3 (fr) * 2001-08-10 2004-07-29 Sun Microsystems Inc Computer systems
EP1428149A1 (fr) * 2001-09-21 2004-06-16 Polyserve, Inc. System and method for a multi-node environment with shared storage
EP1428149B1 (fr) * 2001-09-21 2012-11-07 Hewlett-Packard Development Company, L.P. System and method for a multi-node environment with shared storage
FR2832235A1 (fr) * 2001-11-09 2003-05-16 Dell Products Lp System and method for utilizing system configurations in a modular computer system
US6567272B1 (en) 2001-11-09 2003-05-20 Dell Products L.P. System and method for utilizing system configurations in a modular computer system
US7865326B2 (en) 2004-04-20 2011-01-04 National Instruments Corporation Compact input measurement module
WO2008157667A1 (fr) * 2007-06-21 2008-12-24 Microsoft Corporation Meter for computing hardware
WO2008157746A1 (fr) * 2007-06-21 2008-12-24 Microsoft Corporation Metered pay-per-use computing experience
RU2456668C2 (ru) * 2007-06-21 2012-07-20 Microsoft Corporation Calculation of a metered usage charge
US9247079B2 (en) 2013-01-28 2016-01-26 Kyocera Document Solutions Inc. Information processing apparatus

Also Published As

Publication number Publication date
CN1173281C (zh) 2004-10-27
AU5273800A (en) 2000-12-12
JP4864210B2 (ja) 2012-02-01
CN1310821A (zh) 2001-08-29
JP2003500742A (ja) 2003-01-07
EP1114372A1 (fr) 2001-07-11
EP1114372A4 (fr) 2009-09-16
CA2338025C (fr) 2004-06-22
CA2338025A1 (fr) 2000-11-30
KR20010074733A (ko) 2001-08-09

Similar Documents

Publication Publication Date Title
US6715100B1 (en) Method and apparatus for implementing a workgroup server array
US6816905B1 (en) Method and system for providing dynamic hosted service management across disparate accounts/sites
US20110093740A1 (en) Distributed Intelligent Virtual Server
CN100544342C (zh) Storage system
US7711845B2 (en) Apparatus, method and system for improving application performance across a communications network
US7296268B2 (en) Dynamic monitor and controller of availability of a load-balancing cluster
US7225356B2 (en) System for managing operational failure occurrences in processing devices
US8499086B2 (en) Client load distribution
US20050108593A1 (en) Cluster failover from physical node to virtual node
US20050080891A1 (en) Maintenance unit architecture for a scalable internet engine
US20030237016A1 (en) System and apparatus for accelerating content delivery throughout networks
US20070162558A1 (en) Method, apparatus and program product for remotely restoring a non-responsive computing system
US20030142628A1 (en) Network fabric management via adjunct processor inter-fabric service link
CA2338025C (fr) Method and apparatus for implementing a workgroup server array
EP1312007A1 (fr) Method and system for dynamic hosted service management
US20070180116A1 (en) Multi-layer system for scalable hosting platform
US9848060B2 (en) Combining disparate applications into a single workload group
CA2433564C (fr) Method and apparatus for implementing a workgroup server array
WO2006121448A1 (fr) Variable-architecture distributed data processing and management system
KR200368680Y1 (ko) Remote shared distributed processing apparatus
Yang et al. Applying Linux high-availability and load balancing servers for video-on-demand (VOD) systems

Legal Events

Date Code Title Description

WWE Wipo information: entry into national phase
Ref document number: 00800947.3
Country of ref document: CN

AK Designated states
Kind code of ref document: A1
Designated state(s): AL AM AU BA BG BR CA CN CZ EE GE HR HU ID IL IS JP KE KP KR KZ LK LR LT LV MD MG MK MN MW MX NO NZ PL RO RU SD SG SI SK TJ TR UA US UZ VN YU ZW

AL Designated countries for regional patents
Kind code of ref document: A1
Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

WWE Wipo information: entry into national phase
Ref document number: 09744194
Country of ref document: US

ENP Entry into the national phase
Ref document number: 2338025
Country of ref document: CA
Kind code of ref document: A
Ref document number: 2338025
Country of ref document: CA

ENP Entry into the national phase
Ref document number: 2000 620492
Country of ref document: JP
Kind code of ref document: A

WWE Wipo information: entry into national phase
Ref document number: 1020017000934
Country of ref document: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application

REEP Request for entry into the european phase
Ref document number: 2000937591
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 2000937591
Country of ref document: EP

WWP Wipo information: published in national office
Ref document number: 2000937591
Country of ref document: EP

WWP Wipo information: published in national office
Ref document number: 1020017000934
Country of ref document: KR

WWW Wipo information: withdrawn in national office
Ref document number: 1020017000934
Country of ref document: KR