CA2338025C - A method and apparatus for implementing a workgroup server array - Google Patents

A method and apparatus for implementing a workgroup server array

Info

Publication number
CA2338025C
CA2338025C CA002338025A CA2338025A
Authority
CA
Canada
Prior art keywords
workgroup
teamprocessor
teamprocessors
teammanager
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA002338025A
Other languages
French (fr)
Other versions
CA2338025A1 (en)
Inventor
Ivan Chung-Shung Hwang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CA002433564A priority Critical patent/CA2433564C/en
Publication of CA2338025A1 publication Critical patent/CA2338025A1/en
Application granted granted Critical
Publication of CA2338025C publication Critical patent/CA2338025C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004Server selection for load balancing
    • H04L67/1017Server selection for load balancing based on a round robin mechanism
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1001Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1034Reaction to server failures by a load balancer

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer And Data Communications (AREA)
  • Multi Processors (AREA)

Abstract

A method and apparatus for implementing a workgroup server array ideal for web-based Intranet, Extranet and Internet applications. The inventive server array comprises a plurality of team/workgroup computers (408) equipped with workgroup-based direct-access servers and modular controlling devices (1), creating workgroup-based fault-tolerant and fail-over capabilities, providing console-based monitoring and management support, and accommodating highly available and scalable web-based applications with optimal performance. These workgroup server arrays can be used as the basic building blocks to construct large-scale server clusters, so that more users can be served concurrently. Furthermore, a workgroup-server-array-based architecture is created for building various highly available, scalable and mission-critical server clusters, which enable distributed computing services for enterprise-based Intranet, Extranet and Internet mission-critical applications.

Description

A METHOD AND APPARATUS FOR IMPLEMENTING A WORKGROUP
SERVER ARRAY
FIELD OF THE INVENTION
The present invention generally relates to a server cluster, and more particularly to a method and apparatus for implementing a workgroup server array and its architecture for building various server clusters to accommodate scalable web-based Intranet, Extranet and Internet mission-critical applications.
The inventive server array comprises team/workgroup computers equipped with workgroup-based direct-access servers and controlling devices, as described in Applicant's Patent No. 5,802,391 entitled "DIRECT-ACCESS TEAM/WORKGROUP SERVER SHARED BY TEAM/WORKGROUPED COMPUTERS WITHOUT USING A NETWORK OPERATING SYSTEM". Furthermore, this inventive server array creates a workgroup-server-array-based architecture, which can be employed to construct various highly available, scalable and mission-critical server clusters.
PRIOR ART
The explosion of innovative Internet technology is significantly influencing the way applications are written and deployed. The hundreds of thousands of Internet web sites that were once static "brochure-ware" are quickly becoming highly interactive Internet applications with transactional capabilities. Inside large corporations, developers are using Web technology to integrate enterprise applications into large-scale Intranets. Between corporations, business partners are building secure Extranets to streamline their supply chains and improve communication.
As web-based applications expand on the Internet, and on enterprise Intranets and Extranets, the functions they perform are becoming increasingly mission critical.
Moreover, as businesses continue to apply web-based technologies to mission-critical tasks, they will require sophisticated approaches for making their applications highly available and scalable.
In order to achieve high scalability and availability requirements, the trend is toward systems that involve many servers working together, i.e., server clusters, to deliver the applications that end users request. Furthermore, a large-scale web-based service requires an architecture for building server clusters, so that availability, scalability, reliability, performance, management and security issues can be accommodated.
However, current technologies available for building a highly scalable, highly available and mission-critical web-application-based server cluster by using a plurality of individual servers tend to create a single-server-based 3-tier architecture, hereinafter referred to as SS-3 architecture. This SS-3 architecture generally requires first-tier components, which are load balancers, second-tier components, which are application servers, and third-tier components, which are database and file servers.
Each individual server, which can be PC-based, super-micro-based or mini-computer-based, comprises multiple CPUs with parallel processing capabilities using an operating system, such as WinNT, Solaris, Linux and Unix.
Based on SS-3 architecture, a highly available and scalable server cluster for web-based applications can thus be built. However, the architecture also creates the following disadvantages:
1. Pertaining to each tiered component:
a) Load balancers - Analyze all the incoming traffic and re-direct each individual web-based query/request to one of the available second-tiered application servers that are attached. The load balancer distributes requests to specific second-tiered web-based application servers based on the nature of the request and the availability and capability of the load-balanced web application server. There are three basic types of load balancers: switches, software balancers and appliance balancers. However, the Internet connection will likely be clogged if any of the above-mentioned load balancers is stressed.
b) Application servers - Receive the assignment from the first-tiered load balancer, carry out the web-based applications and interface with the third-tier database and file servers for application-oriented data retrieval. However, each application server may be different from one another, based on different hardware and software configurations, creating management complexity for the load balancer.
In addition, each application server handles both loyalty-based and non-loyalty based queries, creating non-coherent program groups with different levels of security measures. Furthermore, each application server does not have the remote boot capability, unless a network-access-based secondary processor is included, so that if the primary processor of the server fails, the secondary processor accessed by other network-based management servers can then be triggered to reboot the primary processor.
c) Database/File servers - Are client-server-based servers that process database/file queries from all the second-tiered application servers deemed as clients.
Since there is no differentiation between loyalty-based and non-loyalty-based traffic, application-oriented data for both loyalty-based and non-loyalty-based traffic is all stored in one central file server and one database server, creating potential database/file retrieval bottlenecks if too many concurrent queries occur. Furthermore, if these file and database servers are implemented as part of a data center, which contains multiple distributed database and file servers that are linked to a plurality of SAN-enabled (storage-area-network) storage devices, the complexity of managing such a data center is high. This is due to the fact that complicated database software programs are required in both client-centric servers and server-centric servers.
However, it is not ideal to lump application-oriented data and business-sensitive data in one data center, because extra security measures, such as firewall filtering, have to be put forth to guard against any potential risk of being sabotaged by web-based browsing activities.
d) The inter-tier communication switches - Are required between the first-tiered load balancer and the second-tiered application servers and between the application servers and the third-tiered file and database servers. Since every component is network-based, all the communication between servers is handled through these two switches, creating unnecessary inter-tiered traffic bottlenecks and management overhead.
e) More tiers means more components, which create more single-point failures - Based on SS-3 architecture, all the load balancers, application servers, file and database servers, routers and switches should have a fail-over scheme, so that mission critical applications can be maintained without failure. Even though the overall fail-over scheme can be developed, it is not efficient and cost-effective, due to the fact that there are too many hardware configurations and software programs involved.

2. Server cluster management:
a) The monitoring and management of single-server-based server clusters become complicated because of the complexity of each component in regard to inter-tiered communication. Single software upgrades tend to create software incompatibility due to the fact that there are too many involved software programs that also may need to be upgraded from various vendors.
b) The overall performance is not easily optimized. Once a server cluster is built based on SS-3 architecture, it has to meet the criteria of at least handling steady-state operation smoothly and accommodating peak-time operation without glitches.
However, there are no distributed small-scale optimal points that can be gauged, thereby adding uncertain factors in controlling the steady-state operation and restricting necessary measures in dealing with the peak-time operation.
c) High availability and cost-effective linear scalability are difficult to maintain if too many database-centric requests are to be serviced concurrently as high-speed web access becomes prevalent. Currently, web-based queries are based on a 56 kbps narrow-band transfer rate and the related services are centered on web-page delivery.
However, if the prevalent data transfer rate jumps to 1 Mbps or higher by using cable modem or ADSL and the prevalent services are centered on personal database-centric web-page delivery, the SS-3 architecture will have difficulties in maintaining high availability. It is due to the fact that 20 times more traffic is generated within the server cluster, stressing the capability of the fail-over load balancers, creating bottlenecks between inter-tiered communications and severely diminishing the return on SS-3-based scalability.

SUMMARY OF THE INVENTION
The aforementioned server cluster, which is based on single-server-based architecture, cannot adequately provide highly available and scalable solutions for large-scale web-based mission-critical applications efficiently and cost-effectively.
The objects of this invention are accomplished by not only resolving the above-mentioned deficiencies, but also by devising technological breakthroughs in building a workgroup-based server array and its architecture, so that highly available and scalable solutions for large-scale web-based mission-critical applications can be accommodated efficiently and cost-effectively.
The present invention employs a plurality of team/workgroup computers, hereinafter referred to as TeamProcessors, housed in workgroup-computer chassis, hereinafter referred to as TeamChassis, together with a plurality of workgroup-based direct-access servers, hereinafter referred to as TeamServers, as described in Applicant's Patent No. 5,802,391. Based on these building blocks, various workgroup server array configurations can be implemented.
The present invention further comprises a unique modular workgroup-based controlling and monitoring device, hereinafter referred to as TeamPanel, which provides local and remote monitoring and reboot management, task switching, load balancing and fail-over control functions. In addition, any particularly configured workgroup server array can be accommodated either by a single TeamPanel or by multiple TeamPanels cascaded together.

The present invention further comprises a plurality of the above-mentioned Team-building blocks, so that preferred workgroup server arrays for various configurations can be built to provide a number of unique underlying functions. Based on the preferred data structure and data flow, these underlying functions, include, but are not limited to, internal/external controlled task switching, workgroup-based device sharing, load balancing, fail-over, monitoring and management, security and performance measurements.
In accordance with one aspect of the present invention there is provided in a multiple processor computer system having a plurality of interconnected TeamProcessors each having a multiple CPU computing platform and a workgroup server link connecting the TeamProcessor to a shared plurality of direct access team servers, a method of team server coordination and supervision, the method comprising the steps of: arbitrarily selecting a first one of said TeamProcessors as TeamManager; each remaining TeamProcessor reporting its management-based activities status including its inventory, disk space and CPU usage to the selected TeamManager through the TeamProcessor operating system; employing said selected TeamManager to monitor the status of all of the remaining TeamProcessors; said TeamManager compiling a management-based status table corresponding to status information received from said TeamProcessors.
The present invention and its related architecture, resolve the deficiencies inherent in the conventional single-server-based architecture by eliminating unnecessary network-access-based components and replacing them with workgroup-based direct-access components, thus reducing unnecessary network traffic and decreasing the number of single-point failures.
Furthermore, a plurality of workgroup server arrays based on a specific application can be formed as a workgroup server cluster, so that highly available and scalable mission-critical web services based on that particular application can be accommodated. In addition, a plurality of various application-based workgroup server clusters can be constructed in both serial and parallel manners to provide large-scale multi-application web-based solutions for accommodating thousands of users concurrently, even with broadband Quality of Service (QOS) intact.

BRIEF DESCRIPTION OF THE DRAWINGS
The aforementioned aspects and advantages of the present invention, as well as additional aspects and advantages thereof will be more fully understood hereinafter, as a result of a detailed description of a preferred embodiment thereof, when taken in conjunction with the following drawings in which:
FIG. 1A is a functional block diagram illustrating the preferred workgroup processor, i.e., TeamProcessor, as one of the apparatuses for building a preferred workgroup server array.
FIG. 1B is a functional block diagram illustrating the preferred workgroup computer chassis, i.e., TeamChassis, which can house multiple TeamProcessors, as one of the apparatuses for building a preferred workgroup server array.
FIG. 1C is a functional block diagram illustrating one of the preferred integrated configurations, which comprises eight (8) preferred TeamProcessors networked and workgrouped together via multiple links, as well as four (4) preferred TeamServers, as one of the embodiments of the present invention.
FIG. 1D is a functional block diagram illustrating the preferred modular workgroup-based monitoring and management device, i.e., TeamPanel, which comprises four (4) basic control units and one (1) main control unit with dual processors for connecting up to four (4) TeamProcessors, and can be enclosed in a TeamChassis with a built-in front panel.

FIG. 1E is a functional block diagram illustrating a modular cascading of a primary TeamPanel and a secondary TeamPanel, accommodating an eight (8) TeamProcessor configuration.
FIG. 2A is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising eight (8) TeamProcessors, four (4) SCSI-disk-based TeamServers and two (2) cascaded TeamPanels, all evenly enclosed in two (2) TeamChassis.
FIG. 2B is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising four (4) TeamProcessors, two (2) SCSI-disk-based TeamServers and one (1) TeamPanel, all enclosed in one TeamChassis.
FIG. 2C is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising twelve (12) TeamProcessors, six (6) SCSI-disk-based TeamServers linked using dual SCSI channels and three (3) cascaded TeamPanels, all evenly enclosed in three (3) TeamChassis.
FIG. 3A is a functional block diagram illustrating a methodical implementation of a preferred data structure and data flow onto a preferred eight (8) TeamProcessor server array in which a plurality of underlying functions for use with internal operations, fail-over, load balance, security, management and optimal performance measurements can all be installed.
FIG. 3B is a functional block diagram illustrating a workgroup server cluster comprising a plurality of single-application workgroup server arrays, each providing a mutually exclusive database segment based on the optimal performance measurement, so that inter workgroup-based underlying functions, such as high availability and scalability can be installed.
FIG. 4 is a functional block diagram illustrating a preferred integration of various security zone-based application-oriented workgroup server clusters and backend database servers using FC-AL hubs or FC switches, creating a preferred data center/warehouse configuration in a distributed computing environment for web-based mission-critical applications.

DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
Reference will be made to the preferred embodiment of the invention illustrated in FIGs. 1-4, based on team/workgroup computers used as the preferred building blocks of workgroup server array.
A team/workgroup computer is a group of computers, which are workgrouped together via a workgroup peer-to-peer link, and can all be connected to a number of direct-access workgroup servers via a workgroup server link. The details are described in Applicant's Patent No. 5,530,892 entitled "SINGLE CHASSIS MULTIPLE COMPUTER SYSTEM HAVING SEPARATE DISPLAYS AND KEYBOARDS WITH CROSS INTERCONNECT SWITCHING FOR WORK GROUP COORDINATOR" and in Applicant's Patent No. 5,802,391 entitled "DIRECT-ACCESS TEAM/WORKGROUP SERVER SHARED BY TEAM/WORKGROUPED COMPUTERS WITHOUT USING A NETWORK OPERATING SYSTEM". In addition, the workgroup peer-to-peer link and the workgroup server link can be connected together if they are using the same physical layer cabling, capable of running both storage-based and communication-based data link protocols, such as modified SCSI, as described in both of the aforementioned Patents. These workgrouped computers, each hereinafter referred to as TeamProcessor, are based on either the same or different CPU/OS platforms, and these direct-access workgroup servers, each hereinafter referred to as TeamServer, can be formatted with the same file system that is supported by different operating systems. TeamServers can be implemented with disk-based, tape-based and optical-based drives, as well as with fault-tolerant disk arrays.
Each TeamProcessor, based on a particular OS, is installed with that particular OS-centric workgroup server link interface card, i.e., TeamServer card, to recognize all the TeamServers as direct-access local drives. However, each TeamServer has only one primary TeamProcessor that has the absolute privilege to read, write and create files. Furthermore, one physical hard disk drive, as well as a fault-tolerant disk array, can be partitioned and formatted into multiple logical drives, each logical drive being controlled by a different TeamProcessor as the primary processor. Even though all of these TeamProcessors are connected on the internal network link and installed with a network operating system, these TeamServers are not mapped as network-accessible drives throughout the TeamProcessors.
Moreover, a highly integrated team/workgroup computer, hereinafter referred to as TeamPro computer, contains multiple TeamProcessors, all enclosed in one workgroup TeamChassis as described in Applicant's Patent No. 5,577,205 entitled "CHASSIS FOR A MULTIPLE COMPUTER SYSTEM". The TeamPro computer is further equipped with a monitoring and management device, i.e., TeamPanel, as a means to control and interface with each TeamProcessor through one console monitor and one RAP (remote-access-port)-based device, which is comprised of two (2) serial ports, one (1) keyboard, one (1) system LED, one (1) buzzer and one (1) reset button, as described in Patent No. 5,530,892 entitled "SINGLE CHASSIS MULTIPLE COMPUTER SYSTEM HAVING SEPARATE DISPLAYS AND KEYBOARDS WITH CROSS INTERCONNECT SWITCHING FOR WORK GROUP COORDINATOR".
As shown in FIG. 1A, the preferred team/workgroup computer-based TeamProcessor, based on a PC computing platform, generally contains a one-way, two-way or four-way Intel Pentium CPU WinNT PCI-based motherboard with 128 MB RAM, a floppy disk interface module, an IDE interface module, a VGA card module, a Sound card module, a USB module, a parallel interface module, a RAP module, a network link LAN module using Ethernet, a workgroup peer-to-peer link module using Ethernet, a workgroup peer-to-peer link module using SCSI and a workgroup server link module using SCSI. A TeamProcessor is further equipped with module-based external peripheral drives and devices such as floppy disk, IDE disk and optical drives, a VGA monitor, a USB-based digital camera, a mouse, a network Ethernet-based hub and switches, SCSI disk and tape drives, a printer and a set of speakers.
As shown in FIG. 1B, the preferred workgroup computer chassis, i.e., TeamChassis, encloses four (4) CPU-card-based TeamProcessors and a number of module-based drives and devices, such as IDE-based disk and optical drives, SCSI drives and a TeamPanel. The same TeamChassis can also enclose two (2) motherboard-based TeamProcessors with various module-based drives and devices. TeamChassis can further be equipped with internal redundant power supplies, smart-power management, hot-swappable disks and fans, and an external UPS.
The maximum number of individual TeamProcessors that can be workgrouped together to form a workgroup server array is constrained by the internal workgroup server link. If the workgroup server link uses SCSI-II, the effective length to ensure proper data transmission is six (6) meters and the number of nodes that can be attached is sixteen (16). That is why TeamChassis, which can enclose at least two (2) TeamProcessors, is used to support a better workgroup peer-to-peer link-based SCSI cable scheme, as the first TeamProcessor extends the cable from external and the second extends the cable for external connection. The same TeamChassis can also house four (4) CPU-card-based TeamProcessors, allowing the SCSI cable to be even shorter. Currently, there are four (4) different SCSI standards: Fast SCSI, Ultra SCSI, Ultra2 LVD SCSI and Ultra3 LVD SCSI. Each standard has both narrow (8-bit) and wide (16-bit) configurations. Therefore, the preferred SCSI implementation is to use Ultra-wide LVD SCSI, which has a maximum data rate of 160 MB/sec with a cable length up to twelve (12) meters.
FIG. 1C shows a preferred workgroup link integration, in which eight (8) preferred TeamProcessors are linked by a workgroup peer-to-peer link using SCSI and four (4) SCSI hard-disk-based TeamServers are linked by a workgroup server link using SCSI. These TeamProcessors and TeamServers are connected together by using the same SCSI cable. By doing so, every TeamProcessor can directly access each TeamServer without involving other TeamProcessors, especially the primary TeamProcessor that has the absolute privileges. As illustrated in FIG. 1C, each SCSI-disk-based TeamServer has two (2) logical drives and each TeamProcessor is allocated one logical drive and enabled with absolute privilege. A TeamServer can only be accessed in a read-only fashion by other, non-primary TeamProcessors.
FIG. 1C also illustrates the workgroup peer-to-peer link using Ethernet via TeamLink cards with an Ethernet hub, so that if the workgroup peer-to-peer link using SCSI is faulty, the workgroup peer-to-peer link using Ethernet can be the alternative communication link, or vice versa. The major benefit of implementing the workgroup peer-to-peer link using Ethernet is that the inter-TeamProcessor communications within the workgroup won't adversely affect the network traffic, as well as other workgroups' inter-TeamProcessor communications. The workgroup peer-to-peer link using Ethernet can accommodate various inter-TeamProcessor communications, such as mapped-drive-based, socket-based, and security-encryption/decryption-based. Other equivalent peripheral buses besides SCSI can also be adopted as the de facto link that can merge the workgroup peer-to-peer link and the workgroup server link together, as long as their data-link layer is capable of implementing storage-based and communication-based protocols, either standardized or proprietary. However, depending on the configuration, the workgroup peer-to-peer link based on any of the applicable peripheral buses may not be necessary, as long as the workgroup server link and the workgroup peer-to-peer link using Ethernet are established.
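The following is a minimal illustrative sketch, not part of the patent, of how a TeamProcessor might choose among the redundant inter-TeamProcessor links just described (the SCSI-based peer-to-peer link, the Ethernet-based peer-to-peer link, and the TeamPanel internal I2C link named later as a further alternative). The link names and the health-check callback are assumptions for illustration only.

# Sketch: fall back across redundant workgroup peer-to-peer links.
# Link names and the is_healthy callback are hypothetical.
from typing import Callable, List

def pick_peer_link(links: List[str], is_healthy: Callable[[str], bool]) -> str:
    for link in links:
        if is_healthy(link):
            return link
    raise RuntimeError("no inter-TeamProcessor communication link available")

# Prefer the SCSI-based link, fall back to Ethernet, then the TeamPanel I2C link.
preferred = ["peer-to-peer SCSI", "peer-to-peer Ethernet", "TeamPanel I2C"]
print(pick_peer_link(preferred, is_healthy=lambda link: link != "peer-to-peer SCSI"))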
FIG. 1D illustrates the preferred version of TeamPanel, which comprises four (4) basic control units and one main control unit, and connects up to four (4) TeamProcessors via RAP, VGA, USB and audio ports. Each basic control unit contains a micro-processor and three (3) switches controlled by the micro-processor for allowing the VGA, audio and USB signals to flow through onto the common VGA, audio and USB buses that link to the other basic control units and the main control unit. In addition, there is a TeamPanel-based communication link using I2C, which connects to the other basic control units and the main control unit, and there is a set of ten (10) interface signals, which connect to the front panel.
The preferred main control unit may contain dual microprocessors for fault-tolerance, which provide the physical layer interfaces to hook up with a keyboard, serial-based devices and a printer, categorized as the workgroup-sharable devices among workgrouped TeamProcessors. The main control unit also keeps various status tables for tracking each workgrouped TeamProcessor's vital signs, CPU load and activities, as well as usage tables for supervising common buses and peripheral devices, so that after checking the tables for no conflicting usage, it can allow requests from TeamProcessors to be carried out sequentially.
The preferred front panel contains two interactive push-buttons: one for selecting the chosen TeamProcessor for the external VGA-based monitor to display and for the external keyboard and mouse to control, the other for resetting the chosen TeamProcessor. There are also three sets of LEDs, which indicate power on/off, primary system disk activity and select enabled, respectively. Both the TeamPanel functional board and the front panel are enclosed in a TeamChassis so that the cabling scheme is easier to arrange.
The default TeamProcessor that controls the TeamPanel is called TeamManager.
For workgroup communication to TeamManager, any TeamProcessor can first transfer the message to its attached control unit via COM2 of RAP, and then the control unit re-packs the message with an I2C protocol header and notifies the main control unit via the TeamPanel internal link using I2C. Once the main control unit allows the linkage to take place, the basic control unit can communicate directly with TeamManager through the TeamPanel internal I2C link, thereby, for instance, reporting the current status of its attached TeamProcessor. Moreover, the TeamPanel internal link can be used as an alternative communication link to the workgroup peer-to-peer links using SCSI and Ethernet. Also, for fail-over purposes, the COM1-based mouse device is replaced with a USB-based mouse. Therefore, if COM2 of RAP should fail, COM1 of RAP can take over and provide the data communication between the TeamProcessor and its attached basic control unit.
FIG. 1E shows two (2) TeamPanels cascaded together to connect eight (8) preferred workgrouped TeamProcessors. The first TeamPanel, i.e., TP-408M, and the second TeamPanel, i.e., TP-408C, are connected via the common VGA, audio, USB and I2C buses, whereas TP-408C doesn't have the main control unit, so that the main control unit in TP-408M will supervise all the basic control units in TP-408C. The TeamManager which controls the first TeamPanel will also be the TeamManager of the second TeamPanel. For communication to TeamManager, any TeamProcessor of the second TeamPanel will first transfer the message to its attached control unit via COM2 of RAP, and then the control unit re-packs the message with an I2C protocol header and notifies the main control unit in the first TeamPanel via the internal I2C link. Once the main control unit allows the linkage to take place, the basic control unit of the second TeamPanel can communicate directly with the TeamManager of the first TeamPanel through the TeamPanel internal I2C link. Based on the same scenario, any particularly configured workgroup server array can be accommodated either by a single TeamPanel or by multiple TeamPanels cascaded together. The front panel of each TeamPanel can be enclosed in each TeamChassis, or can be extended to an external box for easy monitoring and control of multiple TeamPanels. Multiple TeamChassis that contain all the workgroup server array's TeamProcessors can be housed in a TeamRack, which can also house additional TeamServers in additional TeamChassis and is further equipped with a cable distribution box that houses all the inter-TeamChassis cables, as well as all the incoming and outgoing cables.
FIG. 2A is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising eight TeamProcessors, four SCSI-disk-based TeamServers and two cascaded TeamPanels, enclosed in two TeamChassis that can be further housed in a TeamRack.
FIG. 2B is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising four TeamProcessors, two SCSI-disk-based TeamServers and one TeamPanel, enclosed in one TeamChassis that can be further housed in a TeamRack.
FIG. 2C is a functional block diagram illustrating a preferred workgroup server array in accordance with one embodiment of the present inventive system, comprising twelve TeamProcessors, six SCSI-disk-based TeamServers linked by two workgroup server links using dual SCSI channels, and three cascaded TeamPanels, enclosed in three TeamChassis that can be further housed in a TeamRack.
FIG. 3A illustrates a preferred configuration with defined data flows, which are designed to carry out various underlying functions using an eight (8)-TeamProcessor workgroup server array as shown in FIG. 2A. Based on the preferred configuration, the eight (8) TeamProcessors can be functionally classified into two groups: 1) application/file service processors (TP1-TP4), and 2) database/file service/load balance/firewall processors (TP5-TP8). Each TeamProcessor has its primary SCSI-disk-based TeamServer, which can be operated as a read-only TeamServer, hereinafter referred to as a secondary TeamServer, for the other seven TeamProcessors.
Therefore, during boot-up, each TeamProcessor will recognize one IDE-based system drive, together with one primary TeamServer and seven secondary TeamServers, functioning as workgroup direct-access servers without using the NOS mapping scheme. In addition, the above primary and secondary TeamServers accessed by all the workgrouped TeamProcessors can also be implemented with multiple fault-tolerant disk arrays and with dual-channel TeamServer cards to distribute traffic on two SCSI channels.
Application/file service-based TeamProcessors TP1-TP4 are each capable of handling HTTP-based application-oriented web queries from the Internet and generating transaction batch files that are written onto both the system IDE drive and its primary TeamServer. Database/file service-based TeamProcessors TP5-TP8 are each capable of handling FTP-based or proprietary real-time socket port-based database-oriented web queries from the Intranet and Extranet and generating transaction batch files that are written onto both the system IDE drive and its primary TeamServer.
In addition, TeamProcessors TP5 and TP7 each maintain an application-specific workgroup database that is installed on its primary TeamServer. These two databases are basically the same at the end of the day. The database controlled by TP5 will update during the day based on each batch transaction file generated from TeamServer1-TeamServer4 within a defined time period (t). The database controlled by TP7 will be updated at the end of the day based on all the batches generated from TeamServer1-TeamServer4 during the day. TP6 will handle mostly FTP-based database-oriented web queries from the Intranet, so that TP5 can retrieve from TeamServer6 and update the database every t period. TP5 will also update the database instantly due to the proprietary real-time socket-port-based database queries from the Intranet. TP8 will be the default TeamProcessor, i.e., the TeamManager that controls those two TeamPanels.
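A minimal sketch of this dual-database update scheme follows, assuming a simple batch-file representation; the class names, the apply_batch helper and the example transaction records are hypothetical and are not part of the patent.

# Sketch: TP5-style intraday updates every period t versus TP7-style end-of-day updates.
from dataclasses import dataclass
from typing import List

@dataclass
class BatchFile:
    source_teamserver: int      # 1..4: the application TeamServer that wrote it
    transactions: List[str]     # application-oriented transaction records

class WorkgroupDatabase:
    def __init__(self, name: str):
        self.name = name
        self.applied: List[str] = []

    def apply_batch(self, batch: BatchFile) -> None:
        # Only the primary TeamProcessor holds write privilege on this database.
        self.applied.extend(batch.transactions)

def intraday_update(db: WorkgroupDatabase, new_batches: List[BatchFile]) -> None:
    """TP5-style update: apply only the batches produced in the last period t."""
    for batch in new_batches:
        db.apply_batch(batch)

def end_of_day_update(db: WorkgroupDatabase, all_batches: List[BatchFile]) -> None:
    """TP7-style update: apply every batch generated during the day."""
    for batch in all_batches:
        db.apply_batch(batch)

if __name__ == "__main__":
    db_tp5 = WorkgroupDatabase("TP5-intraday")
    db_tp7 = WorkgroupDatabase("TP7-end-of-day")
    day_batches = [BatchFile(i, [f"txn-{i}-{n}" for n in range(3)]) for i in range(1, 5)]
    intraday_update(db_tp5, day_batches[:2])   # first period t
    intraday_update(db_tp5, day_batches[2:])   # next period t
    end_of_day_update(db_tp7, day_batches)     # once per day
    assert db_tp5.applied == db_tp7.applied    # the two databases converge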

Based on the preferred server-pair configuration, a number of unique functional services can be established for the inventive workgroup server array, hereinafter referred to as WSA.
One of the preferred methods regarding WSA server coordination and supervisory services can be implemented such that TeamManager (TP8) coordinates all workgrouped TeamProcessors and generates management-based activities. The activities include the monitoring of each TeamProcessor's inventory, disk space and CPU usage, which can be generated by the installed OS on each TeamProcessor, as well as alerts of intrusion, removal and failure that may be taking place on each workgrouped TeamProcessor. Each TeamProcessor will routinely pack the management-based status information and send it via COM2 of RAP to its control unit, which notifies the main control unit and waits for an OK-to-send instruction from the main control unit via the TeamPanel internal I2C link. Once the OK signal is received, that particular TeamProcessor can direct communication from its control unit to the control unit of TeamManager, which subsequently sends the status information via COM2 of RAP to TeamManager. TeamManager will always keep a management-based status table regarding all the workgrouped TeamProcessors.
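The sketch below illustrates, in simplified form, the status table TeamManager could compile from these reports. The field names (inventory, disk_free_mb, cpu_usage_pct), the staleness timeout and the direct function call standing in for the COM2/I2C relay are all assumptions for illustration, not the patent's implementation.

# Sketch: TeamManager-side status table compiled from routine TeamProcessor reports.
import time
from typing import Dict

class TeamManagerStatusTable:
    """TeamManager's view of all workgrouped TeamProcessors."""
    def __init__(self):
        self.table: Dict[str, dict] = {}

    def record(self, processor_id: str, status: dict) -> None:
        self.table[processor_id] = dict(status, last_report=time.time())

    def stale(self, processor_id: str, timeout_s: float = 60.0) -> bool:
        entry = self.table.get(processor_id)
        return entry is None or (time.time() - entry["last_report"]) > timeout_s

def report_status(processor_id: str, manager: TeamManagerStatusTable) -> None:
    # In the patent this travels: TeamProcessor -> COM2 of RAP -> its basic control
    # unit -> main control unit (I2C) -> TeamManager's control unit -> TeamManager.
    # Here the relay is collapsed into a direct call for illustration.
    status = {
        "inventory": ["IDE1", "TeamServer1"],   # hypothetical example values
        "disk_free_mb": 20_000,
        "cpu_usage_pct": 35,
    }
    manager.record(processor_id, status)

manager_table = TeamManagerStatusTable()
for tp in ("TP1", "TP2", "TP3", "TP4", "TP5", "TP6", "TP7"):
    report_status(tp, manager_table)
print(manager_table.stale("TP3"))   # False: TP3 reported recently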
One of the preferred methods regarding WSA internal front-panel switching services can be implemented such that, upon a request from itself or any TeamProcessor to check whether a particular TeamProcessor is still functioning, TeamManager will send the request to the main control unit, which will further send a diagnostic request to the control unit of that particular TeamProcessor. If there is no response, the main control unit will send a notice to the control unit of TeamManager, which sends the notification to TeamManager via COM2 of RAP. Then, TeamManager can send the alarm message to the LAN-based management console via the network link and wait for the response from the operator. The operator can take over the control of TeamManager via the management console computer by running Carbon-Copy or similar software. In addition, TeamManager is equipped with a video capture card and the common VGA bus is also hooked up to an NTSC converter, so that any TeamProcessor's VGA display can be recaptured into TeamManager's VGA display. Therefore, TeamManager can be instructed to capture the screen display of the failed TeamProcessor by sending a "select" request to the main control unit, which also will allow the subsequent communication from the control unit of TeamManager to the control unit of the failed TeamProcessor. The operator can also send keyboard strokes to that failed TeamProcessor, act accordingly and save a diagnosis file on TeamManager for further analysis. If the operator should decide to reset the failed TeamProcessor, TeamManager will be instructed to send a "Reset" command to the control unit of the failed TeamProcessor. That particular control unit will trigger the reset line that links directly to that failed TeamProcessor and reset it. The booting-up process can be captured, displayed and saved on TeamManager, so that the operator at the remote management console computer can watch and interact step-by-step with the boot-up process. Moreover, technical personnel can further analyze the saved diagnosis files to determine the location of the problem and derive the solution.
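A hedged sketch of this "is it still alive?" flow is shown below: TeamManager asks the main control unit to poll a TeamProcessor's basic control unit; on no response it alerts the management console and, if the operator so decides, pulls the reset line. All function names and the callback signatures are hypothetical stand-ins.

# Sketch: diagnostic check, console alarm and optional reset of a hung TeamProcessor.
from typing import Callable, Optional

def check_teamprocessor(
    tp_id: str,
    poll_control_unit: Callable[[str], Optional[dict]],
    reset_line: Callable[[str], None],
    notify_console: Callable[[str], str],
) -> str:
    reply = poll_control_unit(tp_id)                         # diagnostic request via I2C
    if reply is not None:
        return "alive"
    decision = notify_console(f"{tp_id} is not responding")  # alarm to the LAN console
    if decision == "reset":
        reset_line(tp_id)                                    # control unit pulls the reset line
        return "reset issued"
    return "left for operator diagnosis"

# Usage with stand-in callbacks (an operator who always chooses to reset):
status = check_teamprocessor(
    "TP3",
    poll_control_unit=lambda tp: None,                       # simulate a hung TeamProcessor
    reset_line=lambda tp: print(f"reset line triggered on {tp}"),
    notify_console=lambda msg: "reset",
)
print(status)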
One of the preferred methods regarding WSA onsite front-panel switching services can be implemented such that a local onsite operator can use the front panel on the TeamChassis to view, control and reset any of the TeamProcessors using the TeamPanel-based workgroup devices, such as a VGA monitor, a set of speakers, a keyboard and a mouse. Upon any push-button request on the panel for "select" or "reset", whose signals directly link to the main control unit, the main control unit will first check the usage table, if applicable, for no conflicting usage and then set the related LED blinking. If the push-button activation is intended, the local operator will push the button one more time to trigger the action and the related LED will be set on.
Once the action is completed, the related LED will be set off.

One of the preferred methods regarding WSA remote front-panel switching services can be implemented such that any remote computer can take control of TeamManager or any of the TeamProcessors via an external modem attached to the workgroup-based serial link, based on encrypted proprietary access codes. Once the communication is established, the remote computer can perform all the same functions as a LAN-based management console computer.
One of the preferred methods regarding WSA device-sharing services can be implemented such that peripheral devices in a WSA can be accessed by TeamManager and any of the other TeamProcessors. When a particular TeamProcessor needs to access any of the peripherals, such as a printer, the TeamProcessor sends a request message through COM2 of RAP to its control unit, and the control unit sends a request to the main control unit via the internal I2C link. If available after checking the status and usage tables, the main control unit will allow the subsequent communication from that particular control unit to the main control unit, and the main control unit will relay the data to the attached printer via the built-in parallel interface.
Similar processes can be implemented for other serial-port devices. However, for a USB device, a particular TeamProcessor sends a request through COM2 of RAP to its control unit and the control unit sends it to the main control unit. If available after checking the usage table for the USB device, the main control unit will send an OK signal back to that control unit, which further turns on the USB switch on board. In so doing, the USB interface on that particular TeamProcessor can directly hook up with the workgroup-based USB device, such as a camcorder, via the common USB bus.
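A minimal sketch of the main control unit's check-then-grant arbitration over shared devices follows. The lock semantics, device names and method names are assumptions for illustration; the patent specifies only that the usage table is checked for conflicting usage before a request is allowed.

# Sketch: usage-table arbitration for workgroup-shared devices (printer, serial, USB).
from typing import Dict, Optional

class MainControlUnit:
    def __init__(self, devices=("printer", "serial1", "usb")):
        # Usage table: device -> TeamProcessor currently granted, or None.
        self.usage: Dict[str, Optional[str]] = {d: None for d in devices}

    def request(self, tp_id: str, device: str) -> bool:
        """Grant the device only if no conflicting usage is recorded."""
        if self.usage.get(device) is None:
            self.usage[device] = tp_id
            return True          # e.g. close the USB switch or relay printer data
        return False

    def release(self, tp_id: str, device: str) -> None:
        if self.usage.get(device) == tp_id:
            self.usage[device] = None

mcu = MainControlUnit()
print(mcu.request("TP2", "printer"))   # True: granted
print(mcu.request("TP5", "printer"))   # False: TP2 still holds the printer
mcu.release("TP2", "printer")
print(mcu.request("TP5", "printer"))   # True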
One of the preferred methods regarding WSA fail-over scheme-based services can be implemented such that mission-critical components in a WSA, such as the TeamChassis, TeamPanel, TeamProcessors and TeamServers, are either fault-tolerant or fail-over capable, so that mission-critical applications won't be disrupted.

As for the TeamPanel, the mission-critical capability is related to its main control unit, which has dual microprocessors, so that if the first one should fail, the second one can take over and send an alarm to TeamManager, which can further notify the management console. As for the TeamChassis, it is fault-tolerant due to the fact that it is equipped with dual power supplies and an external UPS. As for the TeamProcessors, there are four fail-over groups, i.e., TP1 and TP2, TP3 and TP4, TP5 and TP6, TP7 and TP8, because each group member has the same hardware configuration as the other. Thus, in each group, if one should fail, the other will take over, or vice versa. Therefore, if TeamManager TP8 has failed, TP7 will take over as TeamManager. Moreover, the TP1-TP2 pair and the TP3-TP4 pair are both fail-over groups, and the TP5-TP6 pair and the TP7-TP8 pair are also fail-over groups. If the TP1-TP2 pair should fail, the TP3-TP4 pair will take over, or vice versa. The same scenario also applies to the TP5-TP6 pair and the TP7-TP8 pair.
As for the file-service-based TeamServers, there are eight (8) fail-over groups, i.e., IDE1 in TeamProcessor1 and TeamServer1, IDE2 and TeamServer2, IDE3 and TeamServer3, IDE4 and TeamServer4, IDE5 and TeamServer5, IDE6 and TeamServer6, IDE7 and TeamServer7, IDE8 and TeamServer8. Therefore, if TeamServer1 should fail, the other TeamProcessors can still get the information from TeamProcessor1 on IDE1. If IDE1 should fail, the other TeamProcessors can get the information directly from TeamServer1. The same scenario applies to the other seven (7) fail-over groups. As for the database-service-based TeamServers, the database on TeamServer5 is controlled by TP5 and the database on TeamServer7 is controlled by TP7, and they are basically the same application-specific databases, as discussed earlier.
However, if Database-TP5 should fail, Database-TP7 will immediately be updated by TeamProcessor7, based on all the related batch files collected from TeamServer1 to TeamServer8, and instantly become ready for service.
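The fail-over pairings just described can be summarized as a simple lookup, sketched below under the assumption of a static mapping; the function name is hypothetical, and the real system derives the takeover target from TeamManager's tables rather than hard-coded constants.

# Sketch: fail-over group lookup (TP pairs, and IDE drive <-> primary TeamServer pairs).
FAILOVER_PAIRS = {
    "TP1": "TP2", "TP2": "TP1",
    "TP3": "TP4", "TP4": "TP3",
    "TP5": "TP6", "TP6": "TP5",
    "TP7": "TP8", "TP8": "TP7",
}
# File-service groups: each TeamProcessor's local IDE drive mirrors its primary TeamServer.
FILE_GROUPS = {f"IDE{i}": f"TeamServer{i}" for i in range(1, 9)}

def takeover_target(failed: str) -> str:
    if failed in FAILOVER_PAIRS:
        return FAILOVER_PAIRS[failed]          # the pair member assumes the tasks
    if failed in FILE_GROUPS:
        return FILE_GROUPS[failed]             # read the copy on the TeamServer
    reverse = {v: k for k, v in FILE_GROUPS.items()}
    if failed in reverse:
        return reverse[failed]                 # read the copy on the local IDE drive
    raise KeyError(f"no fail-over group defined for {failed}")

print(takeover_target("TP8"))          # TP7 takes over as TeamManager
print(takeover_target("TeamServer1"))  # IDE1 still holds the same files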

One of the preferred methods regarding WSA application-based load balancing services can be implemented such that application-based TeamProcessors in a WSA can be load-balanced by using TeamPanels. In a web-based environment, application-based query requests come from the Internet using the HTTP protocol.
The incoming query-based traffic will first go through the routers. The router then sends all the requests to TeamManager TP8. TeamManager can then distribute the incoming traffic load to TP1, TP2, TP3 and TP4 via an internal FTP port or proprietary ports over the workgroup peer-to-peer link using Ethernet. In a round-robin implementation, TeamManager (TP8) maintains a round-robin-based load-balance status table and the main control unit of the TeamPanel maintains various vital-sign status tables, based on each application-based TeamProcessor's CPU usage and response times.
Since any workgrouped TeamProcessor will routinely transfer vital signs and the like to its attached control unit via COM2 of RAP, the control unit will repack the data and notify the main control unit. Once the main control unit allows the linkage to take place, the basic control unit can download the data to the main control unit's memory buffers, which can be allocated for various vital-sign status tables. Based on these real-time status tables, the main control unit can detect which TeamProcessor may have failed or become overloaded. When either situation happens, the main control unit will report it to TeamManager. If it is an overload situation, TeamManager will immediately try to take the TeamProcessor in question out of the round-robin sequence, until notice from the main control unit is again received as to returning that particular TeamProcessor to the round-robin sequence. If it is a failure situation, TeamManager will try to establish communication with the TeamProcessor in question via the workgroup peer-to-peer link. If there is no response, then TeamManager will notify the main control unit to reset the TeamProcessor via the "reset" line of RAP, resulting in partial or full recovery, and act accordingly.

In addition to the round-robin fashion, there are other intelligent algorithms, such as "least open connections", "fastest measured response time", "content type", the number of open connections, and other statistics gathered from the application servers.
Since TeamManager TP8 can gather these types of information via the workgroup peer-to-peer link one by one and detect the failed TeamProcessor(s), various algorithms can be implemented intelligently without overloading one particular TeamProcessor and without sending load to a failed TeamProcessor. However, the round-robin algorithm will be the best choice if all the TeamProcessors are of the same kind, and TeamManager will only have to react to the instructions from the main control unit of the TeamPanel based on abnormal situations.
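A sketch of this round-robin distribution with the TeamPanel-driven exceptions follows: an overloaded TeamProcessor is taken out of the sequence until the main control unit clears it, and a failed one is never offered load. The class and method names are assumptions; in the patent the overload and recovery notices come from the main control unit's vital-sign tables rather than explicit calls.

# Sketch: round-robin load balancing with exclusion of overloaded or failed TeamProcessors.
from itertools import cycle
from typing import List

class RoundRobinBalancer:
    def __init__(self, processors: List[str]):
        self.processors = processors
        self.excluded: set = set()      # overloaded or failed TeamProcessors
        self._ring = cycle(processors)

    def mark_overloaded(self, tp: str) -> None:
        self.excluded.add(tp)

    def mark_recovered(self, tp: str) -> None:
        self.excluded.discard(tp)

    def next_target(self) -> str:
        for _ in range(len(self.processors)):
            tp = next(self._ring)
            if tp not in self.excluded:
                return tp
        raise RuntimeError("no application TeamProcessor available")

balancer = RoundRobinBalancer(["TP1", "TP2", "TP3", "TP4"])
balancer.mark_overloaded("TP3")                      # reported by the main control unit
print([balancer.next_target() for _ in range(4)])    # TP3 is skipped
balancer.mark_recovered("TP3")
print([balancer.next_target() for _ in range(4)])    # TP3 rejoins the sequence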
One of the preferred methods regarding WSA file and database services can be implemented such that the file and database on any particular TeamServer can be directly accessed and shared among TeamProcessors. This is done by installing read-only database engines on as many TeamProcessors as needed for direct access to the secondary TeamServers, while the primary TeamProcessor is installed with the full-fledged database engine, which has the absolute privileges applied to the database on its primary TeamServer. In addition, TeamManager (TP8) keeps a series of status and usage tables for all the attached facilities. One of the tables keeps a concurrent listing of every TeamServer's primary TeamProcessor, so that no double-write data-integrity breakdown can occur on any of the TeamServers.
However, any particular TeamServer's primary TeamProcessor can be changed to another TeamProcessor, due to fail-over, different operation needs in different time zones, a temporary supervisory change for an upgrade, etc. TeamManager will always ensure that there is only one TeamProcessor that can update a particular TeamServer at any given time.
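A minimal sketch of the single-writer table TeamManager keeps is shown below; the reassignment method models a fail-over or supervisory change. The class and method names are hypothetical.

# Sketch: the primary-TeamProcessor table that guarantees exactly one writer per TeamServer.
from typing import Dict

class PrimaryAssignmentTable:
    def __init__(self, initial: Dict[str, str]):
        # TeamServer -> its single primary TeamProcessor.
        self.primary = dict(initial)

    def can_write(self, tp: str, teamserver: str) -> bool:
        return self.primary.get(teamserver) == tp

    def reassign(self, teamserver: str, new_primary: str) -> None:
        # Atomic in concept: the old primary loses the privilege the moment
        # the new one gains it, so no double-write can occur.
        self.primary[teamserver] = new_primary

table = PrimaryAssignmentTable({f"TeamServer{i}": f"TP{i}" for i in range(1, 9)})
print(table.can_write("TP5", "TeamServer5"))   # True: TP5 is the primary
print(table.can_write("TP6", "TeamServer5"))   # False: read-only access
table.reassign("TeamServer5", "TP6")           # e.g. TP5 has failed
print(table.can_write("TP6", "TeamServer5"))   # True after fail-over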

One of the preferred methods regarding WSA security services can be implemented such that any unauthorized intrusion into a WSA will be detected. Since TeamManager TP8 will be receiving all the incoming requests and distributing the load among TeamProcessors, it is imperative that TeamManager be installed with security enhancement and firewall capability to ward off any possible external attacks.
Basically, TeamManager TP8 can filter out any questionable incoming request by implementing SSL-based, OS-based or higher-level application-based access-encrypted security measures, and redirecting the legitimate requests to the application-based TeamProcessors via the workgroup peer-to-peer link using Ethernet, segregating them into two different security-based zones. Each application-based TeamProcessor comes up with the reply, which may involve accessing the application-specific database, and sends it back to the requester by including the correct internal IP address with content-encrypted security measures. Thus, TeamManager can decrypt the content and redirect it to the right TeamProcessor, which handled the previous request. This type of sticky-port approach, known as a persistent session, based on factors such as the source IP address and special information contained in the user-authentication-device request protocol or in returned cookies, can also be securely implemented, which is essential for running web-based e-commerce application services efficiently.
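The sketch below illustrates persistent ("sticky") session routing in its simplest form: once a request from a given source is answered by an application TeamProcessor, follow-up requests carrying the same session key go to that same TeamProcessor. Deriving the key from the source IP address or a returned cookie follows the description above; the class, its round-robin first assignment and all names are assumptions for illustration.

# Sketch: sticky-session routing keyed by source IP address or cookie.
from typing import Dict, List, Optional

class StickySessionRouter:
    def __init__(self, application_processors: List[str]):
        self.processors = application_processors
        self.sessions: Dict[str, str] = {}   # session key -> TeamProcessor
        self._next = 0

    def route(self, source_ip: str, cookie: Optional[str] = None) -> str:
        key = cookie or source_ip
        if key not in self.sessions:
            # First contact: pick the next TeamProcessor (round robin here).
            self.sessions[key] = self.processors[self._next % len(self.processors)]
            self._next += 1
        return self.sessions[key]

router = StickySessionRouter(["TP1", "TP2", "TP3", "TP4"])
print(router.route("203.0.113.7"))                 # assigned once, e.g. TP1
print(router.route("203.0.113.7"))                 # same TeamProcessor again
print(router.route("198.51.100.9", cookie="abc"))  # a different session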
One of the preferred methods regarding WSA fail-over services can be implemented such that a number of agent-based management software programs, i.e., TeamSoft, are devised to be incorporated with all the above functional services based on the defined data structure and data flow of the preferred configuration. Only the current TeamManager will be installed with the server portion of TeamSoft, while the rest of the TeamProcessors will be installed with the client portion of TeamSoft. As long as there is one TeamProcessor active, the remote management console computer can take control of that TeamProcessor and make it serve as TeamManager, so that it can reboot any failed TeamProcessor, and the inventive workgroup server array may return to functioning normally. Based on the TeamSoft fail-over capability, each TeamProcessor can initiate detection, via TeamManager, of whether its fail-over counterpart is alive. If it is not alive, then that TeamProcessor will assume the tasks that its failed counterpart was servicing. For example, if TP5 should fail, TeamManager will assign TP6 the privilege of TeamServer5 and the task to update the database. If TP6 should fail, TeamManager will assign TP5 the privilege of TeamServer6 and redirect TP6 traffic to TP5 by notifying incoming requests with the TP5 IP address instead of TP6. TeamSoft also includes workgroup diagnosis of problems with built-in automatic corrective action.
One of the preferred methods regarding WSA performance-gauging services can be implemented such that the optimal performance in a WSA can be obtained by adjusting the values of some key parameters. The inventive workgroup server array's performance hinges upon the following three factors: 1) the TeamManager firewall operation, 2) the number of application-based TeamProcessors, and 3) the size of the application-specific database. If the firewall operation installed in TeamManager TP8 takes too much time in fulfilling content-decryption security and upper-layer access-based security, it will decrease the number of incoming requests per minute.
However, this issue can be resolved by attaching firewall-based routers, which can perform network-layer filtering and also upper-layer filtering.
If the number of application-based TeamProcessors decreases, the number of outgoing replies per minute will decrease. As for the database concern, if the application-centric database is constructed based on non-loyalty traffic, it tends to render out only ready-made information, which may grow occasionally to satisfy non-loyalty-based traffic. On the other hand, if constructed based on loyalty traffic, the database is going to grow considerably. However, the time needed to retrieve the data from the database to form a reply page is not an issue, because the database on a TeamServer can be readily accessed without depending on any other TeamProcessor.
Therefore, there are two scenarios: 1) non-loyalty application-based and 2) loyalty application-based. In the non-loyalty-based situation, the optimal performance of the inventive workgroup server array is dependent on the number of application-based TeamProcessors. Based on the computing power and the degree of complexity of the service, one TeamProcessor can handle X number of incoming requests and produce the outgoing replies in one minute without degrading the service, which is considered the acceptable quality of service (QOS). Therefore, four TeamProcessors can accommodate 4X number of incoming requests in a steady-state operation. If the peak-time non-loyalty traffic jumps to 6X, the inventive workgroup server array can still accommodate the peak-time operation by assigning TP6 and TP7 as application-based TeamProcessors and joining them into the round-robin load-balancing algorithm operated by TeamManager.
Furthermore, as shown in FIG. 2C, a 12-TeamProcessor-based workgroup server array, in which eight (8) out of twelve (12) are application-based TeamProcessors, can accommodate 8X number of non-loyalty traffic in a steady-state operation and 10X number of traffic in a peak-time operation. If the incoming traffic is more than 10X, then a second workgroup server array is needed.
In a loyalty-based situation, the optimal performance of the inventive workgroup server array is dependent on the number of application-based TeamProcessors and the size of the loyalty-based database. If the size of the database is too large and the number of incoming requests generated is more than all the TeamProcessors can handle, then the database needs to be downsized to satisfy the steady-state operation and the excess should move to a second workgroup server array. For example, a 12-TeamProcessor-based workgroup server array can accommodate 8X number of loyalty-based traffic, which can be converted into Y number of loyalty-based users that can be installed on the application-centric database. In the peak-time situation, Y number of users will generate 10X number of loyalty-based traffic, which still meets the acceptable QOS.
The inventive workgroup server array can always re-adjust the X and Y numbers to ensure the acceptable Quality of Service, based on the information gathered by TeamManager. Therefore, the performance measurements for the inventive workgroup server array are the parameters X and Y, from which the optimal operating point, as well as the prediction of problems requiring increased resources, can be derived.
For higher-bandwidth applications, the degree of service is higher, which may lower the X and Y numbers. However, the QOS of the inventive workgroup server array will still be intact.
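A small worked example of this X-based capacity gauging follows. The numeric value of X is purely illustrative; only the relationships (4X steady state for four application TeamProcessors, 6X at peak when TP6 and TP7 are reassigned, 8X steady state for the 12-TeamProcessor array) come from the description above.

# Sketch: steady-state and peak-time capacity in multiples of X.
X = 500          # hypothetical requests/minute per application TeamProcessor at acceptable QOS

def steady_state_capacity(app_processors: int) -> int:
    return app_processors * X

def peak_capacity(app_processors: int, reassigned: int) -> int:
    # Peak-time operation borrows database-side TeamProcessors (e.g. TP6, TP7).
    return (app_processors + reassigned) * X

print(steady_state_capacity(4))      # 4X for the 8-TeamProcessor array
print(peak_capacity(4, 2))           # 6X when TP6 and TP7 join the round robin
print(steady_state_capacity(8))      # 8X for the 12-TeamProcessor array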
To accommodate more incoming requests of the same application in a loyalty-based scenario, FIG. 3B illustrates a workgroup server cluster comprising a plurality of single-application-based workgroup server arrays, each having a mutually exclusive database segment. Since each workgroup server array is QOS-capable, the overall workgroup server cluster is also QOS-capable.
By doing so, a highly available and scalable mission-critical web-based application can be accommodated by a workgroup server cluster, which contains the first workgroup server array up to the nth workgroup server array. Since it is loyalty-based, the router can immediately distribute the right incoming traffic to the right TeamManager based on the right IP address, because this information is installed either in the "cookies" of the users' browsers or in the chip-based smart cards that can be used for network access and user authentication.
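The loyalty-based dispatch can be pictured with a short sketch: because each single-application workgroup server array owns a mutually exclusive database segment, a user identifier carried in a browser cookie or smart card can be mapped directly to the TeamManager of the array holding that user's data. The segment ranges and addresses below are hypothetical.

    # Sketch of loyalty-based dispatch across a workgroup server cluster.

    WSA_SEGMENTS = [
        (0, 99_999, "10.0.1.1"),          # 1st workgroup server array's TeamManager
        (100_000, 199_999, "10.0.2.1"),   # 2nd workgroup server array's TeamManager
    ]

    def teammanager_for(user_id):
        """Return the TeamManager address of the array owning this user's segment."""
        for low, high, teammanager_ip in WSA_SEGMENTS:
            if low <= user_id <= high:
                return teammanager_ip
        raise LookupError("user not assigned to any workgroup server array")

    print(teammanager_for(123_456))       # -> 10.0.2.1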

For the non-loyalty-based situation, the router, together with the Domain Name Server (DNS), which converts the URL into IP addresses, can distribute the incoming load to a non-loyalty-based workgroup server cluster's multiple TeamManagers by using the built-in round-robin capability. In so doing, load balancing for non-loyalty-based traffic is implemented and the QOS is also kept intact. This unique method, based on workgroup server cluster-based load balancing together with round-robin-based DNS, creates an obvious benefit by eliminating the global load balancer, which would have to be powerful enough to load balance and manage all the web application servers, creating unnecessary network traffic that overloads inter-tiered network switches.
Furthermore, if any TeamManager should fail, the DNS will send the traffic to that TeamManager's fail-over counterpart, which will automatically be assigned to take over and handle the incoming traffic from the DNS, because the DNS holds the IP addresses of both the TeamManager and its fail-over counterpart.
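The round-robin distribution with fail-over counterparts can be sketched as follows: for each TeamManager the resolver also knows the address of its fail-over partner, hands out primaries in round-robin order, and substitutes the partner when a primary is marked failed. The addresses and the health map are assumptions for illustration only.

    # Sketch of round-robin DNS distribution across TeamManagers with fail-over.
    from itertools import cycle

    TEAMMANAGERS = [
        ("10.0.1.1", "10.0.1.2"),   # (primary TeamManager, fail-over counterpart)
        ("10.0.2.1", "10.0.2.2"),
        ("10.0.3.1", "10.0.3.2"),
    ]
    healthy = {ip: True for pair in TEAMMANAGERS for ip in pair}
    rr = cycle(TEAMMANAGERS)

    def resolve():
        """Return the next TeamManager address, falling back to its counterpart."""
        primary, counterpart = next(rr)
        return primary if healthy[primary] else counterpart

    healthy["10.0.2.1"] = False            # simulate a failed TeamManager
    print([resolve() for _ in range(6)])   # traffic for the failed node goes to 10.0.2.2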
For either the loyalty-based or the non-loyalty-based scenario, the database server program should be fast and simple to run, without the need for complicated built-in intelligence, because the web-based application is well defined and the database associated with it should also be well defined. The time spent on data retrieval should be as short as possible, so that X and Y can be larger numbers yielding better performance. Since the incoming requests from a user/surfer may involve many different web-based applications, a plurality of different application-based workgroup server clusters should be installed. Shown in FIG. 4 is a preferred embodiment of an overall web-server system for highly available and scalable mission-critical Intranet, Extranet and Internet applications, integrating multiple serial-chained and parallel-chained workgroup server clusters and creating an ideal and secure distributed computing environment.

In addition to zone-based security using the firewall-based workgroup server array, the inter-communication among different workgroup server clusters can be implemented securely by using proprietary ports with SSL-based, OS-based or application-based content and access security measures, so that no foreign communication will be allowed to access any workgroup server cluster.
Furthermore, by using FC-AL or the like to link all the TeamManagers, each workgroup server array's TeamServers, whether hard disk-based, tape-based or optical disk-based, can all be converted into FC devices, which can then be accessed and maintained by any of the SAN-based (Storage Area Network) backend database processors. In so doing, every workgroup server array's application-centric file and database servers can serve as data caching servers for the backend data center's SAN-based sophisticated file and database servers, which are equipped with more intelligent database engines.
In conclusion, the present invention incorporates a number of unique components: 1) TeamProcessors, 2) TeamServers and TeamServer cards, 3) TeamPanels, 4) TeamLink cards, 5) TeamChassis, and 6) TeamRack. Based on these unique components, the present invention also employs a number of unique methods to build the preferred workgroup server arrays. They are 1) WSA server-pair method, 2) WSA multi-workgroup link method, 3) WSA server coordination and supervisory method, 4) WSA internal, onsite and remote "front-panel" switching method, 5) WSA device sharing method, 6) WSA fail-safe and recovery method, 7) WSA load balancing method, 8) WSA file/database sharing method, 9) WSA security-based method, 10) WSA TeamSoft-based management method, and 11) WSA optimal performance-gauging method. Moreover, based on those inventive workgroup server arrays, the present invention employs a number of unique methods to build the preferred workgroup server clusters (WSC). They are 1) WSC structure method, 2) WSC load balancing method, 3) WSC cache-centric database method, and 4) WSC user-authentication-loyalty-centric workgroup database method. Lastly, based on those inventive workgroup server clusters, the present invention employs a number of unique methods to build the preferred "Front-Office" web-based server farms. They are 1) multiple WSCs serial-chained method, 2) multiple WSCs parallel-chained method, and 3) multiple serial-chained and parallel-chained WSCs linked with storage area network (SAN) method.
As will now be understood, the present invention provides a workgroup server array and its related architecture for building various highly available, scalable and mission-critical server clusters in a secure distributed computing environment.
Additional advantages and modifications will readily occur to those skilled in the art.
The invention in its broader aspects is therefore not limited to the specific details, representative apparatus, and illustrative examples shown and described.
Accordingly, departures may be made from such details without departing from the spirit or the scope of Applicant's general inventive concept. The invention is defined in the following claims.
What is claimed is:

Claims (5)

1. In a multiple processor computer system having a plurality of interconnected TeamProcessors each having a multiple CPU computing platform and a workgroup server link connecting the TeamProcessor to a shared plurality of direct access team servers, a method of team server coordination and supervision, the method comprising the steps of:

arbitrarily selecting a first one of said TeamProcessors as TeamManager;
each remaining TeamProcessor reporting its management-based activities status including its inventory, disk space and CPU usage to the selected TeamManager through the TeamProcessor operating system;
employing said selected TeamManager to monitor the status of all of the remaining TeamProcessors;
said TeamManager compiling a management-based status table corresponding to status information received from said TeamProcessors.
2. The method recited in claim 1 wherein each of said TeamProcessors is connected through a VGA link to a common monitor and further comprising the steps of:
using said TeamManager to monitor diagnostics of each of said TeamProcessors;
capturing the VGA link of any failed TeamProcessor by the TeamManager;
and having said TeamManager reset a failed TeamProcessor.
3. The method recited in claim 1 further comprising the steps of:
allocating at least one TeamProcessor for load balancing;
allocating at least one TeamProcessor for database service; and allocating at least one other TeamProcessor for application-specific services.
4. The method recited in claim 1 further comprising the step of pairing the TeamProcessors to provide fault tolerant takeover by one TeamProcessor for another of a pair.
5. The method recited in claim 1 wherein said computer system has an additional apparatus for monitoring TeamProcessor status; the method further comprising the steps of:
employing said additional apparatus for monitoring status of said TeamProcessors;
said additional apparatus being the final arbitrator of load balancing among said TeamProcessors;
said additional apparatus instructing said TeamManager to alter load distribution among said TeamProcessors to achieve said load balancing.
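The coordination and supervision steps recited in claims 1 and 2 can be pictured with a minimal sketch, appended here for illustration only: each TeamProcessor reports its management-based status to the selected TeamManager, which compiles a status table, monitors it, and resets any TeamProcessor that stops reporting. The field names, timeout threshold and reset mechanism shown are illustrative assumptions, not part of the claims.

    # Sketch of TeamManager-side status collection and supervision.
    import time

    class TeamManager:
        def __init__(self):
            self.status_table = {}   # management-based status table (claim 1)

        def receive_report(self, tp_name, inventory, disk_free_mb, cpu_usage_pct):
            """Record a TeamProcessor's reported inventory, disk space and CPU usage."""
            self.status_table[tp_name] = {
                "inventory": inventory,
                "disk_free_mb": disk_free_mb,
                "cpu_usage_pct": cpu_usage_pct,
                "last_seen": time.time(),
            }

        def supervise(self, timeout_s=30):
            """Monitor all TeamProcessors and reset any that appear failed (claim 2)."""
            now = time.time()
            for tp_name, status in self.status_table.items():
                if now - status["last_seen"] > timeout_s:
                    self.reset(tp_name)

        def reset(self, tp_name):
            # In the patent, the TeamManager captures the failed TeamProcessor's
            # VGA link and resets it; here the action is only recorded.
            print("resetting failed TeamProcessor " + tp_name)

    manager = TeamManager()
    manager.receive_report("TP1", inventory=["NIC", "SCSI"], disk_free_mb=2048, cpu_usage_pct=35)
    manager.supervise()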
CA002338025A 1999-05-20 2000-05-17 A method and apparatus for implementing a workgroup server array Expired - Lifetime CA2338025C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002433564A CA2433564C (en) 1999-05-20 2000-05-17 A method and apparatus for implementing a workgroup server array

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US13531899P 1999-05-20 1999-05-20
US60/135,318 1999-05-20
PCT/US2000/013595 WO2000072167A1 (en) 1999-05-20 2000-05-17 A method and apparatus for implementing a workgroup server array

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CA002433564A Division CA2433564C (en) 1999-05-20 2000-05-17 A method and apparatus for implementing a workgroup server array

Publications (2)

Publication Number Publication Date
CA2338025A1 CA2338025A1 (en) 2000-11-30
CA2338025C true CA2338025C (en) 2004-06-22

Family

ID=22467552

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002338025A Expired - Lifetime CA2338025C (en) 1999-05-20 2000-05-17 A method and apparatus for implementing a workgroup server array

Country Status (7)

Country Link
EP (1) EP1114372A4 (en)
JP (1) JP4864210B2 (en)
KR (1) KR20010074733A (en)
CN (1) CN1173281C (en)
AU (1) AU5273800A (en)
CA (1) CA2338025C (en)
WO (1) WO2000072167A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7325030B2 (en) * 2001-01-25 2008-01-29 Yahoo, Inc. High performance client-server communication system
EP1466510B1 (en) * 2001-08-10 2017-09-27 Oracle America, Inc. Server blade
CN1302419C (en) * 2001-09-21 2007-02-28 泛伺服公司 System and method for a multi-node environment with shared storage
US6567272B1 (en) 2001-11-09 2003-05-20 Dell Products L.P. System and method for utilizing system configurations in a modular computer system
CN100334546C (en) * 2003-07-08 2007-08-29 联想(北京)有限公司 Method and device for realizing machine group monitoring system using multiple kind data base
US7865326B2 (en) 2004-04-20 2011-01-04 National Instruments Corporation Compact input measurement module
KR100609082B1 (en) * 2004-07-16 2006-08-08 주식회사 세미라인 Management equipment for the Mission Critical System
US7373433B2 (en) * 2004-10-22 2008-05-13 International Business Machines Corporation Apparatus and method to provide failover protection in an information storage and retrieval system
US8332925B2 (en) * 2006-08-08 2012-12-11 A10 Networks, Inc. System and method for distributed multi-processing security gateway
US20080319925A1 (en) * 2007-06-21 2008-12-25 Microsoft Corporation Computer Hardware Metering
US20080319910A1 (en) * 2007-06-21 2008-12-25 Microsoft Corporation Metered Pay-As-You-Go Computing Experience
JP5777649B2 (en) 2013-01-28 2015-09-09 京セラドキュメントソリューションズ株式会社 Information processing device

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5283897A (en) * 1990-04-30 1994-02-01 International Business Machines Corporation Semi-dynamic load balancer for periodically reassigning new transactions of a transaction type from an overload processor to an under-utilized processor based on the predicted load thereof
JPH04148363A (en) * 1990-10-11 1992-05-21 Toshiba Corp Multi-computer system
TW372294B (en) * 1993-03-16 1999-10-21 Ht Res Inc Multiple computer system
US5802391A (en) * 1993-03-16 1998-09-01 Ht Research, Inc. Direct-access team/workgroup server shared by team/workgrouped computers without using a network operating system
JPH0756838A (en) * 1993-08-11 1995-03-03 Toshiba Corp Distributed server controller
US5612865A (en) 1995-06-01 1997-03-18 Ncr Corporation Dynamic hashing method for optimal distribution of locks within a clustered system
US5768623A (en) * 1995-09-19 1998-06-16 International Business Machines Corporation System and method for sharing multiple storage arrays by dedicating adapters as primary controller and secondary controller for arrays reside in different host computers
US6049823A (en) * 1995-10-04 2000-04-11 Hwang; Ivan Chung-Shung Multi server, interactive, video-on-demand television system utilizing a direct-access-on-demand workgroup
JPH09160885A (en) * 1995-12-05 1997-06-20 Hitachi Ltd Load distribution method for cluster type computer device
US5704032A (en) * 1996-04-30 1997-12-30 International Business Machines Corporation Method for group leader recovery in a distributed computing environment
US5748897A (en) * 1996-07-02 1998-05-05 Sun Microsystems, Inc. Apparatus and method for operating an aggregation of server computers using a dual-role proxy server computer
US5822531A (en) 1996-07-22 1998-10-13 International Business Machines Corporation Method and system for dynamically reconfiguring a cluster of computer systems
US5933596A (en) * 1997-02-19 1999-08-03 International Business Machines Corporation Multiple server dynamic page link retargeting
US5875290A (en) * 1997-03-27 1999-02-23 International Business Machines Corporation Method and program product for synchronizing operator initiated commands with a failover process in a distributed processing system
JPH1165862A (en) * 1997-08-14 1999-03-09 Nec Corp Multiprocessor resource decentralization management system
US6070191A (en) * 1997-10-17 2000-05-30 Lucent Technologies Inc. Data distribution techniques for load-balanced fault-tolerant web access

Also Published As

Publication number Publication date
KR20010074733A (en) 2001-08-09
CN1310821A (en) 2001-08-29
AU5273800A (en) 2000-12-12
WO2000072167A1 (en) 2000-11-30
CA2338025A1 (en) 2000-11-30
CN1173281C (en) 2004-10-27
JP4864210B2 (en) 2012-02-01
EP1114372A4 (en) 2009-09-16
JP2003500742A (en) 2003-01-07
EP1114372A1 (en) 2001-07-11

Similar Documents

Publication Publication Date Title
US6715100B1 (en) Method and apparatus for implementing a workgroup server array
US7930397B2 (en) Remote dynamic configuration of a web server to facilitate capacity on demand
US7711845B2 (en) Apparatus, method and system for improving application performance across a communications network
US7296268B2 (en) Dynamic monitor and controller of availability of a load-balancing cluster
US6816905B1 (en) Method and system for providing dynamic hosted service management across disparate accounts/sites
US8499086B2 (en) Client load distribution
US8645542B2 (en) Distributed intelligent virtual server
US6078960A (en) Client-side load-balancing in client server network
EP0817020B1 (en) A name service for a redundant array of internet servers
US7475108B2 (en) Slow-dynamic load balancing method
US20030237016A1 (en) System and apparatus for accelerating content delivery throughout networks
CA2338025C (en) A method and apparatus for implementing a workgroup server array
US20070180116A1 (en) Multi-layer system for scalable hosting platform
US20070162558A1 (en) Method, apparatus and program product for remotely restoring a non-responsive computing system
US9848060B2 (en) Combining disparate applications into a single workload group
Yang et al. Building an adaptable, fault tolerant, and highly manageable web server on clusters of non-dedicated workstations
Choi Performance test and analysis for an adaptive load balancing mechanism on distributed server cluster systems
CA2433564C (en) A method and apparatus for implementing a workgroup server array
Zhu et al. A scheduling framework for web server clusters with intensive dynamic content processing
KR100382217B1 (en) method of transmitting data in a pyramid propagation way by establishing a plurality of clients into a hierarchical connection and apparatus for the same
WO2006121448A1 (en) A variable architecture distributed data processing and management system
KR200368680Y1 (en) a remote sharing distributed processing system
Pierre et al. From Web Servers to Ubiquitous Content Delivery
Moon et al. A High-Performance LVS System For Webserver Cluster.

Legal Events

Date Code Title Description
EEER Examination request
MKEX Expiry

Effective date: 20200517