US6460144B1 - Resilience in a multi-computer system - Google Patents
Resilience in a multi-computer system
- Publication number
- US6460144B1 (application US09/385,937; US38593799A)
- Authority
- US
- United States
- Prior art keywords
- node
- standby
- nodes
- computer
- failed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1658—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
- G06F11/1662—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit the resynchronized component or unit being a persistent storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2023—Failover techniques
- G06F11/203—Failover techniques using migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2038—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with a single idle spare processing component
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2094—Redundant storage or storage space
Definitions
- This invention relates to techniques for achieving resilience in a multi-computer system.
- Such systems are often used to support a large number of users, and to store very large databases.
- A typical system may consist of eight server computers, supporting up to 50,000 users, and may store one or more 300-gigabyte databases.
- A method of operating a computer system comprising a plurality of computers, a plurality of system disk units, one for each of said computers, and a plurality of further disk units, one for each of said computers, the method comprising:
- FIG. 1 is a block diagram of a multi-node computer system embodying the invention.
- FIG. 2 is a flow chart showing a recovery process for handling failure of one of the nodes of the system.
- FIG. 3 is a block diagram showing an example of the system after reconfiguration by the recovery process.
- Node: this means an individual computer hardware configuration.
- Each node comprises an ICL Xtraserver computer.
- Each node has a unique identity number.
- Each server comprises a specific Microsoft NT installation.
- Each server has a unique server name, and is capable of being hosted (i.e. run) on any of the nodes. A server can, if necessary, be shut down and relocated to another node.
- This shows a system comprising N+1 nodes 10 .
- N of the nodes are active, while the remaining one is a standby.
- In this example, N equals four (i.e. there are five nodes altogether).
- Each of the nodes 10 hosts a server 11 .
- The system also includes a system administration workstation 12 , which allows a (human) operator or system administrator to monitor and control the system.
- Each server displays its name and current operational state on the workstation 12 .
- One or more other systems may also be controlled and monitored from the same workstation.
- The disk array 13 is an EMC Symmetrix disk array. This consists of a large number of magnetic disk units, all of which are mirrored (duplexed) for resilience.
- The disk array includes a number of further disks, providing a Business Continuance Volume (BCV).
- A BCV is effectively a third plex, which can be connected to or disconnected from the primary plexes under control of EMC Timefinder software, running on the workstation 12 .
- The BCV data can be synchronised with the primary plexes so as to provide a backup, or can be disconnected from the primary plexes, so as to provide a snapshot of the main data at a given point in time. When the BCV has been split in this way, it can be reconnected at any time and the data then copied from the primary plexes to the BCV, or vice versa, to resynchronise them.
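The synchronise/split/resynchronise cycle described above can be sketched as a toy state model. This is an illustrative Python sketch only, not the EMC Timefinder interface; the class name, method names, and dictionary representation of disk contents are all assumptions for the example:

```python
class BCV:
    """Toy model of a Business Continuance Volume: synchronise with the
    primary plexes, split to freeze a snapshot, restore the snapshot back."""

    def __init__(self):
        self.state = "disconnected"
        self.snapshot = None

    def establish(self, primary):
        self.snapshot = dict(primary)   # copy primary -> BCV
        self.state = "synchronised"

    def split(self):
        self.state = "disconnected"     # BCV now holds a frozen point-in-time copy

    def restore(self, primary):
        primary.clear()
        primary.update(self.snapshot)   # copy BCV -> primary (the "vice versa" case)
        self.state = "synchronised"


primary = {"db": "v2"}
bcv = BCV()
bcv.establish(primary)                  # synchronise, then...
bcv.split()                             # ...break away a snapshot
primary["db"] = "corrupted"             # primary subsequently changes
bcv.restore(primary)                    # reconnect and copy BCV back to primary
assert primary["db"] == "v2"
```

The point of the sketch is only the direction of the copies: establish flows primary-to-BCV, restore flows BCV-to-primary, and the split state is what makes the BCV usable as a stable snapshot in between.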
- The system also includes an archive server 14 connected to the disk array 13 and to a number of robotic magnetic tape drives 15 .
- The archive server periodically performs an offline archive of the data in each database, by archiving the copy of the database held in the BCV to tape.
- The BCV is then brought back into synchronism with the main database, before again being broken away to form the recovery BCV, using the EMC TimeFinder software.
- The disk array 13 includes a number of system disks 16 , one for each of the servers 11 .
- Each system disk holds the NT operating system files and configuration files for its associated server: in other words, the system disk holds all the information that defines the “personality” of the server installation.
- Each of the system disks has a BCV disk 17 associated with it, holding a backup copy of the associated system disk. Normally, each BCV disk 17 is disconnected from its corresponding system disk; it is connected only if the system disk changes, so as to synchronise the two copies.
- A recovery process is initiated on the system administration workstation 12 .
- The recovery process comprises a script, written in the scripting language associated with the Timefinder software. The process guides the system administrator through a recovery procedure, which reconfigures the system to cause the standby node to pick up the system disk BCV of the failed node, thereby relocating the server from the failed node onto the standby node, and vice versa.
- The recovery process makes use of a predetermined set of device files, one for every possible combination of node and server. Since in this example there are five servers and five nodes (including the standby), there are 25 possible combinations, and hence 25 such device files are provided. Each of these files is identified by a name of the form n(N)_is_(S), where N is a node identity number and S is the last three digits of the server name. (Other conventions could of course be used for naming the files.) Each device file contains all the information required to install the specified server on the specified node.
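As a hedged illustration, the naming convention might be generated as follows. This is a Python sketch, not part of the patent's Timefinder scripting; rendering n(N)_is_(S) as e.g. "n4_is_001", and the SERVERnnn-style server names, are assumptions made for the example:

```python
def device_file_name(node_id: int, server_name: str) -> str:
    """Name of the device file that installs `server_name` on node `node_id`,
    following the n(N)_is_(S) convention described in the text."""
    suffix = server_name[-3:]  # S: the last three digits of the server name
    return f"n{node_id}_is_{suffix}"


def all_device_files(node_ids, server_names):
    """One device file per (node, server) combination, as the text describes."""
    return [device_file_name(n, s) for n in node_ids for s in server_names]


# Five nodes (including the standby) and five servers give 25 combinations.
files = all_device_files(range(1, 6), [f"SERVER{i:03d}" for i in range(1, 6)])
assert len(files) == 25
assert device_file_name(4, "SERVER001") == "n4_is_001"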
- the recovery process comprises the following steps:
- Step 201 The recovery process first confirms the identity of the failed system with the administrator. This step is required only if more than one system is managed from the same system administration workstation.
- Step 202 The recovery process then queries the administrator to obtain the identity numbers of the failed node and the standby node. The administrator can determine these node numbers using information displayed on the system administration workstation 12 .
- Step 203 The recovery process next queries the system administrator to obtain the name of the failed server (i.e. the server currently running on the failed node). The recovery process also automatically determines the name of the standby server; this is a predetermined value for each system.
- Step 204 The recovery process also automatically determines the device identifiers for the BCVs associated with the failed server and the standby server, using a lookup table which associates each server name with a particular device identifier.
- Step 205 The recovery process then calls the BCV QUERY command in the Timefinder software, so as to determine the current states of these two BCVs. These should both be in the disconnected state.
- If either BCV is not in the disconnected state, the recovery process aborts, prompting the system administrator to call the appropriate technical support service.
- Step 206 If both of the BCVs are in the disconnected state, the recovery process continues by prompting the administrator to ensure that both the failed server and the standby server are shut down. The recovery process waits for confirmation that this has been done.
- Step 207 When both the failed server and the standby server have been shut down, the recovery process constructs two device file names as follows:
- The first file name is n(W)_is_(X), where W is the node number of the standby node and X is the last three digits of the failed server's name.
- The second file name is n(Y)_is_(Z), where Y is the node number of the failed node and Z is the last three digits of the standby server's name.
- Step 208 The recovery process then calls the Timefinder BCV RESTORE command passing it the first device file name as a parameter. This causes the BCV of the failed node to be linked to the system disk of the standby server, and initiates copying of the data from this BCV to the system disk. It can be seen that the effect of this is to relocate the server that was running on the failed node on to the standby node.
- The recovery process also calls the BCV RESTORE command, passing it the second device file name as a parameter.
- This causes the BCV of the standby node to be linked to the system disk of the failed server, and initiates copying of the data from this BCV to the system disk. The effect of this is therefore to relocate the server that was running on the standby node on to the failed node.
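The two restores of steps 207 and 208 therefore swap the server/node pairings. A minimal sketch of the name construction, using the FIG. 3 case (node 1 failed, node 4 standby) and hypothetical server names:

```python
def swap_file_names(failed_node: int, standby_node: int,
                    failed_server: str, standby_server: str):
    """Step 207: build the two device-file names that relocate the failed
    server onto the standby node, and the standby server onto the failed node.
    Name rendering and server names are assumptions for illustration."""
    first = f"n{standby_node}_is_{failed_server[-3:]}"   # failed server -> standby node
    second = f"n{failed_node}_is_{standby_server[-3:]}"  # standby server -> failed node
    return first, second


# FIG. 3 case: node 1 has failed and node 4 is the standby.
first, second = swap_file_names(failed_node=1, standby_node=4,
                                failed_server="SERVER001",
                                standby_server="SERVER004")
assert first == "n4_is_001"   # passed to the first BCV RESTORE call
assert second == "n1_is_004"  # passed to the second BCV RESTORE call
```

Each name is then passed to BCV RESTORE, which links the named BCV to the target node's system disk and starts the copy.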
- FIG. 3 shows the case where node 1 has failed, and where node 4 is the standby.
- The BCV disk of the standby node is linked to the system disk of the failed node.
- The BCV of the failed node is linked to the system disk of the standby node.
- The recovery process checks for error responses, and reports any such responses to the administrator. It also writes each action to a log file immediately before performing it.
- Step 209 After issuing the restore commands, the recovery process prompts the administrator to restart the recovered server (i.e. the server which has migrated from the failed node to the standby node), stating the new node name it will run on. The standby node therefore now becomes an active node.
- The restore commands run in the background and typically take about an hour to complete.
- The recovered server can be restarted immediately, and its data accessed, without waiting for the restore commands to complete.
- Step 210 The recovery procedure monitors for completion of the BCV restore operations, using the Timefinder BCV Query command.
- Step 211 When the restore operations are complete, the recovery procedure issues a Timefinder BCV Split command, which disconnects the BCVs from the system disks. Recovery is now complete, and the recovery process terminates.
- Once the failed node has been fixed, it can be rebooted as required, and will become the standby. The recovery procedure can then be repeated if any of the active nodes fails.
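Steps 205 to 211 can be condensed into a single driver sketch. This is illustrative Python only: the real process is a Timefinder script, FakeTimefinder merely stands in for the BCV QUERY, BCV RESTORE, and BCV Split commands, and all names and signatures here are assumptions:

```python
class FakeTimefinder:
    """Hypothetical stand-in for the Timefinder commands used by the script."""

    def __init__(self):
        self.log = []

    def query(self, server):                 # BCV QUERY: report the BCV state
        return "disconnected"

    def restore(self, device_file):          # BCV RESTORE: link BCV, copy to system disk
        self.log.append(("restore", device_file))

    def split(self, device_file):            # BCV Split: disconnect BCV again
        self.log.append(("split", device_file))


def run_recovery(failed_node, standby_node, failed_server, standby_server, tf):
    """Condensed sketch of steps 205-211 of the recovery process."""
    # Step 205: both BCVs must be in the disconnected state, else abort.
    for server in (failed_server, standby_server):
        if tf.query(server) != "disconnected":
            raise RuntimeError("BCV not disconnected: call technical support")
    # Step 207: construct the two device-file names.
    first = f"n{standby_node}_is_{failed_server[-3:]}"
    second = f"n{failed_node}_is_{standby_server[-3:]}"
    # Step 208: restore each BCV onto the other node's system disk.
    tf.restore(first)
    tf.restore(second)
    # Steps 210-211: once the restores complete, split the BCVs again.
    tf.split(first)
    tf.split(second)
    return first, second


tf = FakeTimefinder()
assert run_recovery(1, 4, "SERVER001", "SERVER004", tf) == ("n4_is_001", "n1_is_004")
```

The administrator prompts (steps 201-203, 206, 209) are omitted; the sketch captures only the automated ordering: check, restore, wait, split.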
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Hardware Redundancy (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims (8)
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB9819523 | 1998-09-08 | ||
GBGB9819523.3A GB9819523D0 (en) | 1998-09-08 | 1998-09-08 | Archiving and resilience in a multi-computer system |
GB9819524 | 1998-09-09 | ||
GBGB9819524.1A GB9819524D0 (en) | 1998-09-09 | 1998-09-09 | Archiving and resilience in a multi-computer system |
GB9900473 | 1999-01-12 | ||
GB9900473A GB2345769A (en) | 1999-01-12 | 1999-01-12 | Failure recovery in a multi-computer system |
Publications (1)
Publication Number | Publication Date |
---|---|
US6460144B1 true US6460144B1 (en) | 2002-10-01 |
Family
ID=27269473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/385,937 Expired - Lifetime US6460144B1 (en) | 1998-09-08 | 1999-08-30 | Resilience in a multi-computer system |
Country Status (5)
Country | Link |
---|---|
US (1) | US6460144B1 (en) |
EP (1) | EP0987630B1 (en) |
JP (1) | JP3967499B2 (en) |
AU (1) | AU753898B2 (en) |
DE (1) | DE69927223T2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2004004180A1 (en) * | 2002-06-28 | 2004-01-08 | Harris Corporation | Software fault tolerance between nodes |
US6829687B2 (en) * | 2000-12-28 | 2004-12-07 | International Business Machines Corporation | Volume data net backup |
US20050081083A1 (en) * | 2003-10-10 | 2005-04-14 | International Business Machines Corporation | System and method for grid computing |
US20070156781A1 (en) * | 2006-01-05 | 2007-07-05 | Aditya Kapoor | Detecting failover in a database mirroring environment |
US20070174690A1 (en) * | 2006-01-04 | 2007-07-26 | Hitachi, Ltd. | Restarting method using a snapshot |
US20080244580A1 (en) * | 2007-03-29 | 2008-10-02 | Hitachi, Ltd. | Redundant configuration method of a storage system maintenance/management apparatus |
US8806272B2 (en) | 2010-11-30 | 2014-08-12 | Japan Science And Technology Agency | Dependability maintenance system, change accommodation cycle execution device, failure response cycle execution device, method for controlling dependability maintenance system, control program, and computer-readable storage medium storing the control program |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE602004027424D1 (en) | 2004-10-18 | 2010-07-08 | Fujitsu Ltd | OPERATIONAL MANAGEMENT PROGRAM, OPERATIONAL MANAGEMENT |
JP4734258B2 (en) | 2004-10-18 | 2011-07-27 | 富士通株式会社 | Operation management program, operation management method, and operation management apparatus |
EP1811376A4 (en) | 2004-10-18 | 2007-12-26 | Fujitsu Ltd | Operation management program, operation management method, and operation management apparatus |
GB2419699A (en) | 2004-10-29 | 2006-05-03 | Hewlett Packard Development Co | Configuring supercomputer for reliable operation |
GB2419696B (en) * | 2004-10-29 | 2008-07-16 | Hewlett Packard Development Co | Communication link fault tolerance in a supercomputer |
US8572431B2 (en) | 2005-02-23 | 2013-10-29 | Barclays Capital Inc. | Disaster recovery framework |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4371754A (en) * | 1980-11-19 | 1983-02-01 | Rockwell International Corporation | Automatic fault recovery system for a multiple processor telecommunications switching control |
US4466098A (en) * | 1982-06-11 | 1984-08-14 | Siemens Corporation | Cross channel circuit for an electronic system having two or more redundant computers |
US5155729A (en) * | 1990-05-02 | 1992-10-13 | Rolm Systems | Fault recovery in systems utilizing redundant processor arrangements |
US5278969A (en) * | 1991-08-02 | 1994-01-11 | At&T Bell Laboratories | Queue-length monitoring arrangement for detecting consistency between duplicate memories |
US5408649A (en) * | 1993-04-30 | 1995-04-18 | Quotron Systems, Inc. | Distributed data access system including a plurality of database access processors with one-for-N redundancy |
US5600808A (en) * | 1989-07-20 | 1997-02-04 | Fujitsu Limited | Processing method by which continuous operation of communication control program is obtained |
US5870537A (en) * | 1996-03-13 | 1999-02-09 | International Business Machines Corporation | Concurrent switch to shadowed device for storage controller and device errors |
US5974114A (en) * | 1997-09-25 | 1999-10-26 | At&T Corp | Method and apparatus for fault tolerant call processing |
US6167531A (en) * | 1998-06-18 | 2000-12-26 | Unisys Corporation | Methods and apparatus for transferring mirrored disk sets during system fail-over |
US6205557B1 (en) * | 1998-06-09 | 2001-03-20 | At&T Corp. | Redundant call processing |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3136287A1 (en) * | 1981-09-12 | 1983-04-14 | Standard Elektrik Lorenz Ag, 7000 Stuttgart | Multicomputer system in particular for a videotex computer centre |
-
1999
- 1999-08-13 DE DE69927223T patent/DE69927223T2/en not_active Expired - Lifetime
- 1999-08-13 EP EP99306404A patent/EP0987630B1/en not_active Expired - Lifetime
- 1999-08-30 US US09/385,937 patent/US6460144B1/en not_active Expired - Lifetime
- 1999-09-06 AU AU47388/99A patent/AU753898B2/en not_active Ceased
- 1999-09-08 JP JP25385899A patent/JP3967499B2/en not_active Expired - Fee Related
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4371754A (en) * | 1980-11-19 | 1983-02-01 | Rockwell International Corporation | Automatic fault recovery system for a multiple processor telecommunications switching control |
US4466098A (en) * | 1982-06-11 | 1984-08-14 | Siemens Corporation | Cross channel circuit for an electronic system having two or more redundant computers |
US5600808A (en) * | 1989-07-20 | 1997-02-04 | Fujitsu Limited | Processing method by which continuous operation of communication control program is obtained |
US5155729A (en) * | 1990-05-02 | 1992-10-13 | Rolm Systems | Fault recovery in systems utilizing redundant processor arrangements |
US5278969A (en) * | 1991-08-02 | 1994-01-11 | At&T Bell Laboratories | Queue-length monitoring arrangement for detecting consistency between duplicate memories |
US5408649A (en) * | 1993-04-30 | 1995-04-18 | Quotron Systems, Inc. | Distributed data access system including a plurality of database access processors with one-for-N redundancy |
US5621884A (en) * | 1993-04-30 | 1997-04-15 | Quotron Systems, Inc. | Distributed data access system including a plurality of database access processors with one-for-N redundancy |
US5870537A (en) * | 1996-03-13 | 1999-02-09 | International Business Machines Corporation | Concurrent switch to shadowed device for storage controller and device errors |
US5974114A (en) * | 1997-09-25 | 1999-10-26 | At&T Corp | Method and apparatus for fault tolerant call processing |
US6205557B1 (en) * | 1998-06-09 | 2001-03-20 | At&T Corp. | Redundant call processing |
US6167531A (en) * | 1998-06-18 | 2000-12-26 | Unisys Corporation | Methods and apparatus for transferring mirrored disk sets during system fail-over |
Non-Patent Citations (1)
Title |
---|
Kramer, "Fault-Tolerant LANs Guard Against Malfunction, Data Loss", PC Week, vol. 4, No. 37, Sep. 15, 1987, pp. C26-30. |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6829687B2 (en) * | 2000-12-28 | 2004-12-07 | International Business Machines Corporation | Volume data net backup |
US7424640B2 (en) | 2002-06-28 | 2008-09-09 | Harris Corporation | Hybrid agent-oriented object model to provide software fault tolerance between distributed processor nodes |
US6868067B2 (en) | 2002-06-28 | 2005-03-15 | Harris Corporation | Hybrid agent-oriented object model to provide software fault tolerance between distributed processor nodes |
US20050081110A1 (en) * | 2002-06-28 | 2005-04-14 | Rostron Andy E. | Hybrid agent-oriented object model to provide software fault tolerance between distributed processor nodes |
WO2004004180A1 (en) * | 2002-06-28 | 2004-01-08 | Harris Corporation | Software fault tolerance between nodes |
US20050081083A1 (en) * | 2003-10-10 | 2005-04-14 | International Business Machines Corporation | System and method for grid computing |
US7693931B2 (en) | 2003-10-10 | 2010-04-06 | International Business Machines Corporation | System and method for grid computing |
US20070174690A1 (en) * | 2006-01-04 | 2007-07-26 | Hitachi, Ltd. | Restarting method using a snapshot |
US8024601B2 (en) | 2006-01-04 | 2011-09-20 | Hitachi, Ltd. | Restarting method using a snapshot |
US7644302B2 (en) * | 2006-01-04 | 2010-01-05 | Hitachi, Ltd. | Restarting method using a snapshot |
US20100088543A1 (en) * | 2006-01-04 | 2010-04-08 | Hitachi, Ltd. | Restarting Mehtod Using a Snapshot |
US20070156781A1 (en) * | 2006-01-05 | 2007-07-05 | Aditya Kapoor | Detecting failover in a database mirroring environment |
US9268659B2 (en) * | 2006-01-05 | 2016-02-23 | Emc Corporation | Detecting failover in a database mirroring environment |
US20080244580A1 (en) * | 2007-03-29 | 2008-10-02 | Hitachi, Ltd. | Redundant configuration method of a storage system maintenance/management apparatus |
US20110047410A1 (en) * | 2007-03-29 | 2011-02-24 | Hitachi, Ltd. | Redundant configuration method of a storage system maintenance/management apparatus |
US8078904B2 (en) * | 2007-03-29 | 2011-12-13 | Hitachi, Ltd. | Redundant configuration method of a storage system maintenance/management apparatus |
US7836333B2 (en) * | 2007-03-29 | 2010-11-16 | Hitachi, Ltd. | Redundant configuration method of a storage system maintenance/management apparatus |
US8806272B2 (en) | 2010-11-30 | 2014-08-12 | Japan Science And Technology Agency | Dependability maintenance system, change accommodation cycle execution device, failure response cycle execution device, method for controlling dependability maintenance system, control program, and computer-readable storage medium storing the control program |
Also Published As
Publication number | Publication date |
---|---|
EP0987630A3 (en) | 2004-09-29 |
AU753898B2 (en) | 2002-10-31 |
EP0987630A2 (en) | 2000-03-22 |
EP0987630B1 (en) | 2005-09-14 |
DE69927223D1 (en) | 2005-10-20 |
JP3967499B2 (en) | 2007-08-29 |
DE69927223T2 (en) | 2006-07-13 |
JP2000099359A (en) | 2000-04-07 |
AU4738899A (en) | 2000-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4744804B2 (en) | Information replication system with enhanced error detection and recovery | |
US6658589B1 (en) | System and method for backup a parallel server data storage system | |
US10146453B2 (en) | Data migration using multi-storage volume swap | |
US5805897A (en) | System and method for remote software configuration and distribution | |
JP4400913B2 (en) | Disk array device | |
US6611923B1 (en) | System and method for backing up data stored in multiple mirrors on a mass storage subsystem under control of a backup server | |
US7689862B1 (en) | Application failover in a cluster environment | |
US7546484B2 (en) | Managing backup solutions with light-weight storage nodes | |
US7290017B1 (en) | System and method for management of data replication | |
JP3957278B2 (en) | File transfer method and system | |
US6978282B1 (en) | Information replication system having automated replication storage | |
US6460144B1 (en) | Resilience in a multi-computer system | |
US20050188248A1 (en) | Scalable storage architecture | |
US20050149684A1 (en) | Distributed failover aware storage area network backup of application data in an active-N high availability cluster | |
US5615330A (en) | Recovery method for a high availability data processing system | |
JP2000099359A5 (en) | ||
US20070282926A1 (en) | System, Method and Computer Program Product for Storing Transient State Information | |
CN117632374A (en) | Container mirror image reading method, medium, device and computing equipment | |
JPH09293001A (en) | Non-stop maintenance system | |
GB2345769A (en) | Failure recovery in a multi-computer system | |
CN114257512A (en) | Method and system for realizing high availability of ambari big data platform | |
WO2003003209A1 (en) | Information replication system having enhanced error detection and recovery | |
CN118764498A (en) | Hardware support platform system for information processing system | |
Salinas et al. | Oracle Real Application Clusters Administrator's Guide 10g Release 1 (10.1) Part No. B10765-02 | |
Salinas et al. | Oracle Real Application Clusters Administrator's Guide, 10g Release 1 (10.1) Part No. B10765-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL COMPUTERS LIMITED, UNITED KINGDOM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ASHCROFT, DEREK WILLIAM;ATKINSON, GEOFFREY ROBERT;MCKIRGEN, PHILIP;AND OTHERS;REEL/FRAME:010347/0873;SIGNING DATES FROM 19990718 TO 19990917 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
FPAY | Fee payment |
Year of fee payment: 12 |