WO1995000906A1 - Method for improving disk mirroring error recovery in a computer system including an alternate communication path - Google Patents

Info

Publication number
WO1995000906A1
WO1995000906A1 (PCT/US1994/007009)
Authority
WO
WIPO (PCT)
Application number
PCT/US1994/007009
Other languages
French (fr)
Inventor
Richard Rollins
Michael Ohran
Randall C. Johnson
Scott Bonsteel
Richard S. Ohran
Original Assignee
Vinca Corporation
Application filed by Vinca Corporation
Priority to AU72111/94A
Publication of WO1995000906A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/1658Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
    • G06F11/1662Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit the resynchronized component or unit being a persistent storage device
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2002Error detection or correction of the data by redundancy in hardware using active fault-masking where interconnections or communication control functionality are redundant
    • G06F11/2007Error detection or correction of the data by redundancy in hardware using active fault-masking where interconnections or communication control functionality are redundant using redundant communication media
    • G06F11/201Error detection or correction of the data by redundancy in hardware using active fault-masking where interconnections or communication control functionality are redundant using redundant communication media between storage system components
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking where processing functionality is redundant
    • G06F11/2023Failover techniques
    • G06F11/2033Failover techniques switching over of hardware resources
    • G06F11/2038Error detection or correction of the data by redundancy in hardware using active fault-masking where processing functionality is redundant with a single idle spare processing component
    • G06F11/2046Error detection or correction of the data by redundancy in hardware using active fault-masking where processing functionality is redundant where the redundant components share persistent storage
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2082Data synchronisation



Abstract

A method for reducing the time necessary to recover from a processor (111, 121) failure in a fault-tolerant computer system with redundant server computer systems (110, 120) with their own disk storage systems is disclosed and claimed. In normal operation whenever data is to be written to disk storage, each of the servers writes an identical copy of the data to its own disk storage system. When a server processor fails and then is restored to operation, that server's disk storage system must be made identical to (consistent with) the disk storage system of the non-failing server before the system is again fault tolerant. This method improves performance by electronically transferring the disk storage system from the failing server to a non-failing server, having the non-failing server keep the transferred disk storage system identical to its normal disk storage system, and reconnecting the transferred disk storage system to the failed server when it again becomes available. This minimizes the processing time required to make the disk storage contents identical, both at the time of failure and at the time of restoration.

Description

METHOD FOR IMPROVING DISK MIRRORING ERROR RECOVERY IN A COMPUTER SYSTEM INCLUDING AN ALTERNATE COMMUNICATION PATH
SPECIFICATION

To all whom it may concern:

Be it known that Richard Rollins, Michael Ohran, Randall C. Johnson, Scott Bonsteel, and Richard S. Ohran, citizens of the United States of America, have invented a new and useful invention entitled METHOD FOR IMPROVING ERROR RECOVERY PERFORMANCE IN A FAULT-TOLERANT COMPUTER SYSTEM, of which the following comprises a complete specification.

METHOD FOR IMPROVING ERROR RECOVERY PERFORMANCE IN A FAULT-TOLERANT COMPUTER SYSTEM

Microfiche Appendix. This specification includes a Microfiche Appendix which includes 1 page of microfiche and a total of 13 frames. The Microfiche Appendix includes computer source code illustrative of one preferred embodiment of the present invention.
Background of the Invention

Field of the Invention. This invention relates to fault-tolerant computer systems, and in particular to the methods used to recover from a computer failure in a system with redundant computers, each with its own mass storage system(s).
Description of Related Art. It is often desirable to provide continuous operation of computer systems, particularly file servers which support a number of user workstations or personal computers on a network. To achieve this continuous operation, it is necessary for the computer system to be tolerant of software and hardware problems or faults. This is generally done by having redundant computers and redundant mass storage systems, such that a backup computer or disk drive is immediately available to take over in the event of a fault.

A number of techniques for implementing a fault-tolerant computer system are described in Major et al., United States Patent 5,157,663, which is hereby incorporated by reference in its entirety, and in Major's cited references. In particular, the invention of Major provides a replicated network file server capable of recovering from the failure of either the computer or the mass storage system of one of the two file servers. It has been used by Novell to implement its SFT-III fault-tolerant file server product.
Figure 1 illustrates the hardware configuration for a fault-tolerant computer system 100, such as described in Major. There are two server computer systems 110 and 120 connected to network 101, from which they receive requests from client computers. While we refer to computers 110 and 120 as "server computer systems" or simply "servers" and show them in that role in the examples herein, this should not be regarded as limiting the present invention to computers used only as servers for other computer systems.

Server computer system 110 has computer 111, which includes a central processing unit and appropriate memory systems and other peripherals. Server computer system 120 has computer 121, which includes a central processing unit and appropriate memory systems and other peripherals. Mass storage systems 112 and 113 are connected to computer 111, and mass storage systems 122 and 123 are connected to computer 121. Mass storage systems 112 and 123 are optional devices for storing operating system routines and other data not associated with read and write requests received from network 101. Finally, there is an optional communications link 131 between computers 111 and 121.

The mass storage systems can be implemented using magnetic disk drives, optical discs, magnetic tape drives, or any other medium capable of handling the read and write requests of the particular computer system.
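The Figure 1 topology can be captured as a small data model. This is a sketch only; the field names are ours, not the patent's, and the numbers are the reference numerals used in the text.

```python
# Illustrative model of fault-tolerant computer system 100 (Figure 1).
system_100 = {
    "network": 101,
    "servers": {
        110: {"computer": 111, "mass_storage": [112, 113]},
        120: {"computer": 121, "mass_storage": [122, 123]},
    },
    # Optional direct link between the two computers.
    "communications_link": 131,
    # 112 and 123 hold operating-system data; 113 and 122 hold the
    # mirrored data written in response to network requests.
    "mirrored_pair": (113, 122),
}
```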
An operating system or other control program runs on server computer systems 110 and 120, executed by computers 111 and 121, respectively. This operating system handles server requests received from network 101 and controls mass storage systems 112 and 113 on server 110, and mass storage systems 122 and 123 on server 120, as well as any other peripherals attached to computers 111 and 121.

While Figure 1 illustrates only two server computer systems 110 and 120, because that is the most common (and lowest cost) configuration for a fault-tolerant computer system 100, configurations with more than two server computer systems are possible and do not depart from the spirit and scope of the present invention.

In normal operation, both server computer system 110 and server computer system 120 handle each mass storage write request received from network 101. Server computer system 110 writes the data from the network request to mass storage system 113, and server computer system 120 writes the data from the network request to mass storage system 122. This results in the data on mass storage system 122 being the mirror image of the data on mass storage system 113, and the states of server computer systems 110 and 120 are generally consistent. In the following discussion, the process of maintaining two or more identical copies of information on separate mass storage systems is referred to as "mirroring the information."

(For read operations, either server computer system 110 or server computer system 120 can handle the request without involving the other server, since a read operation does not change the state of the information stored on the mass storage systems.)

Although computer system 100 provides a substantial degree of fault tolerance, when one of server computer systems 110 or 120 fails, the fault tolerance of the system is reduced. In the most common case of two server computer systems, as illustrated by Figure 1, the failure of one server computer system results in a system with no further tolerance to hardware faults or many software faults.
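The mirrored-write discipline described above can be sketched as follows. This is an illustrative simplification, not the patent's implementation; class names such as `MirroredPair` are ours.

```python
class MassStorage:
    """A toy block device: a dict mapping block number to data."""
    def __init__(self):
        self.blocks = {}

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks.get(block)


class MirroredPair:
    """Applies every write to both stores, as servers 110 and 120 each
    write each network request to mass storage systems 113 and 122."""
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, block, data):
        # Every write request from the network goes to both copies.
        self.primary.write(block, data)
        self.secondary.write(block, data)

    def read(self, block):
        # Reads do not change state, so either copy may serve them.
        return self.primary.read(block)
```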
In a fault-tolerant computer system such as described above, it is necessary after a failed server computer system has been restored to bring the previously-failed computer system into a state consistent with the server computer system that has continued operating. This requires writing all the changes made to the mass storage system of the non-failing server to the mass storage system of the previously-failed server so that the mass storage systems again mirror each other. Until that has been accomplished, the system is not fault tolerant even though the failed server has been restored.

If a server has been unavailable due to its failure for a period of time during which there have been only a limited number of changes made to the mass storage system of the non-failing server, it is possible for the non-failing server to remember all the changes made (for example, by keeping them in a list stored in its memory) and forward the changes to the previously-failed server when it has been restored to operation. The previously-failed server can then update its mass storage system with the changes and make it consistent with the non-failing server. This process typically does not cause excessive performance degradation to the non-failing server for any substantial period of time.

However, if there have been more changes than can be conveniently remembered by the non-failing server, then the non-failing server must transfer all the information from its mass storage system to the previously-failed server for writing on its mass storage system in order to ensure that the two servers are consistent. This is a very time-consuming and resource-intensive operation, especially if the non-failing server must also handle server requests from the network while this transfer is taking place. For very large mass storage systems, as would be found on servers commonly in use today, and with a reasonably high network request load, it might be many hours before the mass storage systems are again consistent and the system is again fault tolerant. Additionally, the resource-intensiveness of the recovery operation can cause very substantial performance degradation of the non-failed server in processing network requests.
Summary of the Invention

It is an object of the present invention to provide tolerance to disk faults even though the computer of a server computer system has failed. This is achieved by electronically switching the mass storage system used for network requests from the failed server computer system to the non-failing server computer system. After the mass storage system from the failed server computer system has been connected to the non-failing server's computer, it is made consistent with the mass storage system of the non-failing server. This is typically a quick and simple operation. From that point on, the mass storage system from the failed server is operated as a mirrored disk system, with each change being written by the non-failing server's computer both to the non-failing server's original mass storage system and to the mass storage system previously on the failed server and now connected to the non-failing server's computer.

While operating in this mode, the system will no longer be tolerant to processor failures if the non-failing server is the only remaining server (as would be the case in the common two-server configuration described above), but the system would be tolerant to failures of one of the mass storage systems.

It is a further object of the present invention to minimize the time the system is not fault tolerant by eliminating the need for time-consuming copying of the information stored on the mass storage system of the non-failing server to the mass storage of the previously-failed server to make the two mass storage systems again consistent and permit mirroring of information again. This is also achieved by electronically switching the mass storage system from the failed server computer system to the non-failing server computer system. If this switch is accomplished after there have been only a small number of changes to the mass storage system of the non-failing server, the mass storage system from the failed server computer system can be quickly updated and made consistent, allowing mirroring to resume.
Furthermore, since the mirroring of the invention keeps the information on the mass storage system from the failed server consistent while it is connected to the non-failing server computer system, when the mass storage system is reconnected to the previously-failed server, only those changes made between the time it was disconnected from the non-failed server and when it becomes available on the previously-failed server need to be made before it is again completely consistent and mirroring by the two servers (and full fault tolerance) resumes. This avoids the substantial performance degradation experienced by the non-failing server during recovery using the prior-art recovery method described above. As a result, the invention provides rapid recovery from a fault in the system.

These and other features of the invention will be more readily understood upon consideration of the attached drawings and of the following detailed description of those drawings and the presently preferred embodiments of the invention.

Brief Description of the Drawings

Figure 1 illustrates a prior art implementation of a fault-tolerant computer system with two server computer systems.

Figure 2 illustrates the fault-tolerant computer system of Figure 1, modified to permit the method of the invention by including means for connecting a mass storage system to either server's computer.

Figure 3 is a flow diagram illustrating the steps to be taken when a processor failure is detected.

Figure 4 is a flow diagram illustrating the steps to be taken when the previously-failed processor becomes available.
Detailed Description of the Invention

Referring to fault-tolerant computer system 200 of Figure 2, and comparing it to prior art fault-tolerant computer system 100 as illustrated in Figure 1, we see that mass storage systems 113 and 122, which were used for storing the information read or written in response to requests from other computer systems on network 101, are now part of reconfigurable mass storage system 240. In particular, mass storage system 113 can be selectively connected by connection means 241 to either computer 111 or computer 121 (or possibly both computers 111 and 121, although such dual connection is not necessary for the present invention), and mass storage system 122 can likewise be independently selectively connected to either computer 111 or computer 121 by connection means 241. The mass storage system 240 is reconfigurable because of the ability to select and change connections between mass storage devices and computers.
While Figure 2 illustrates the most common dual server configuration anticipated by the inventors, other configurations with more than two servers are within the scope of the present invention, and the extension of the techniques described below to other configurations will be obvious to one skilled in the art.

There are a number of ways such connection means 241 can be implemented, depending on the nature of the mass storage system interface to computers 111 or 121. Connection means 241 can be two independent two-channel switches, which electronically connect all the interface signals from a mass storage system to two computers. Such two-channel switches may be a part of the mass storage system (as is common for mass storage systems intended for use with mainframe computers) or can be a separate unit. A disadvantage of using two-channel switches is the large number of switching gates that are necessary if the number of data and control lines in the mass storage interface is large. That number increases rapidly when there are more than two server computer systems in fault-tolerant computer system 200. For example, a fault-tolerant computer system with three computers connected to three mass storage systems would require 2.25 times the number of switching gates as the system illustrated in Figure 2. (The number of switching gates is proportional to the number of computers times the number of mass storage systems.) The number of switching gates can be reduced by not connecting every mass storage system to every computer, although such a configuration would be less flexible in its reconfiguration ability.
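The gate-count scaling claim checks out numerically: with gates proportional to computers times mass storage systems, a three-by-three system needs (3 × 3) / (2 × 2) = 2.25 times the gates of the two-by-two system of Figure 2. A minimal check:

```python
def switching_gates(computers, storages, gates_per_link=1):
    # Gate count is proportional to computers x mass storage systems.
    return computers * storages * gates_per_link

ratio = switching_gates(3, 3) / switching_gates(2, 2)
# ratio is 2.25, matching the figure given in the text.
```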
Another implementation of connection means 241 is for both computer 111 and computer 121 to have interfaces to a common bus to which mass storage systems 113 and 122 are also connected. An example of such a bus is the small computer system interface (SCSI) as used on many workstations and personal computers. When a computer wishes to access a mass storage system, the computer requests ownership of the bus through an appropriate bus arbitration procedure, and when ownership is granted, the computer performs the desired mass storage operation. A disadvantage of this implementation is that only one computer (the one with current bus ownership) can access a mass storage system at a time.
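The single-owner constraint of shared-bus arbitration reduces to mutual exclusion. In this sketch a lock stands in for the bus arbitration procedure; it is an analogy, not a model of the actual SCSI protocol.

```python
import threading

class SharedBus:
    """Only the computer holding bus ownership may perform a mass
    storage operation, analogous to SCSI bus arbitration."""
    def __init__(self):
        self._ownership = threading.Lock()
        self.operations = []

    def perform(self, computer, operation):
        with self._ownership:  # request ownership of the bus
            # Ownership granted: carry out the mass storage operation.
            self.operations.append((computer, operation))
```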
If it is desirable to use a standard SCSI bus as means 241 for connecting mass storage systems 113 and 122 to computers 111 and 121, and to allow simultaneous access of the mass storage systems 113 and 122 by their respective servers' computers, computers 111 and 121 can each have two SCSI interfaces, one connected to mass storage system 113 and one connected to mass storage system 122. Mass storage system 113 will be on a SCSI bus connected to both computers 111 and 121, and mass storage system 122 will be on a second SCSI bus, also connected to both computers 111 and 121. If computer 111 or computer 121 is not using a particular mass storage system, it will configure its SCSI interface to be inactive on that mass storage system's particular bus.
In the preferred embodiment, a high-speed serial network between computers 111 and 121 and mass storage systems 113 and 122 forms connection means 241. Each computer contains an interface to the network, and requests to a mass storage system 113 or 122 are routed to the appropriate network interface serving the particular mass storage system. Although a bus-type network, such as an Ethernet, could be used, the network of the preferred embodiment has network nodes at each computer and at each mass storage system. Each node can be connected to up to four other network nodes. A message is routed by each network node to a next network node closer to the message's final destination.
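Hop-by-hop routing toward a destination, as described for the preferred embodiment's node network, can be sketched with a breadth-first next-hop computation. The topology below mirrors Figure 2, but the function and node names are our own illustration, not the patent's routing algorithm.

```python
from collections import deque

def next_hop(links, source, destination):
    """Return the neighbor of `source` that is one step closer to
    `destination`, by breadth-first search over the node graph."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:
            # Walk back until we reach the node adjacent to source.
            while parent[node] != source:
                node = parent[node]
            return node
        for neighbor in links[node]:
            if neighbor not in parent:
                parent[neighbor] = node
                queue.append(neighbor)
    return None

# Figure 2 topology: each computer node linked to each storage node.
links = {
    "c111": ["m113", "m122"],
    "c121": ["m113", "m122"],
    "m113": ["c111", "c121"],
    "m122": ["c111", "c121"],
}
```

With a single link between each computer and each mass storage system, the next hop from a computer to its storage is always the direct link, which is why the text calls the routing trivial.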
For the fault-tolerant computer system configuration of Figure 2, one network connection from the node at computer 111 is connected to the node for mass storage system 113, and another network connection from the node at computer 111 is connected to the node for mass storage system 122. Similar connections are used for computer 121. Mass storage system 113's node is connected directly to the nodes for computers 111 and 121, and mass storage system 122's node is similarly connected (but with different links) to computers 111 and 121. Routing of messages is trivial, since there is only one link between each computer and each mass storage system.
The particular connecting means 241 used to connect computers 111 and 121 to mass storage systems 113 and 122 is not critical to the method of the present invention, so long as it provides for the rapid switching of a mass storage system from one computer to another without affecting the operation of the computers. Any such means for connecting a mass storage system to two or more computers is usable by the method of the present invention.
16 The method of the present invention is
17 divided into two portions, a first portion for
18 reacting to a processor failure and a second
19 portion for recovering from a processor failure.
20 The first portion of the method of the present
2i invention is illustrated by Figure 3, which is a 1 flow diagram illustrating the steps to be taken
2 when a processor failure is detected. The
3 description of the method provided below should be
4 read in light of Figure 2. For purposes of
5 illustration, it will be assumed that connection
6 means 241 initially connects mass storage system
7 113 to computer 111 and mass storage system 122 to
8 computer 121, providing an equivalent to the
9 configuration illustrated in Figure 1 although the 0 connection means 241 of Figure 2 facilitates this l equivalent configuration. Information mirroring as 2 described above is being performed by computers 111 3 and 122. It is also assumed that computer 121 has experienced a fault, causing server computer system 5 120 to fail. 6 The method starts in step 301, with each 7 computer 111 and 122 waiting to detect a failure of 8 another server's computer 111 and 122. Such 9 failure can be detected by probing the status of 0 the other server's computer by a means appropriate 1 to the particular operating system being used and 1 the communications methods between the servers. In
the case of Novell's SFT-III, the method will be running as a NetWare Loadable Module, or NLM, and be capable of communicating directly with the operating system by means of requests. The NLM will make a null request to the SFT-III process. This null request will be such that it will never normally run to completion, but will remain in the SFT-III process queue. (It will require minimal resources while it remains in the process queue.) In the event of a failure of server computer system 120, SFT-III running on server computer system 110 will indicate the failure of the null request to the NLM of the method, indicating the failure of server 120. Because a processor failure has been detected, the method depicted in Figure 3 proceeds to step 302.
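The null-request technique amounts to a blocking wait that completes only when the monitored server fails. The following sketch models that behavior in Python; it is an illustration only, not SFT-III code, and the class and method names are invented for the example.

```python
import threading

class NullRequestWatcher:
    """Models the NLM's never-completing null request: the wait blocks
    (consuming minimal resources) until the peer server's failure causes
    the queued request to complete with an error."""

    def __init__(self):
        self._peer_failed = threading.Event()

    def report_peer_failure(self):
        # Stand-in for the operating system failing the queued null request.
        self._peer_failed.set()

    def wait_for_failure(self, timeout=None):
        # Returns True once the peer has failed, False on timeout.
        return self._peer_failed.wait(timeout)
```

The appeal of the technique is that the watcher consumes no polling cycles: the detecting computer is notified by the operating system rather than repeatedly probing its peer.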
In step 302, detection of the failure of server 120 causes the discontinuation of mirroring information on the failed server 120. This discontinuation can either be done automatically by the operating system upon its detection of the failure of server 120, or by the particular implementation of the preferred embodiment of the method of the present invention. In the case of SFT-III, the discontinuation of mirroring on server 120 is performed by the SFT-III operating system. Step 303 of the method is performed next.

In step 303, SFT-III remembers all data not mirrored on server 120 following its failure, as long as the amount of data to be remembered does not exceed the capacity of the system resource remembering the data. If the particular operating system does not remember non-mirrored data, step 303 would have to be performed by the particular implementation of the method of the present invention. The step of remembering all non-mirrored data could be performed by any technique known to persons skilled in the art.

Next, step 304 of the method sets connection means 241 to disconnect mass storage system 122 from computer 121 of failed server 120, and to connect it to computer 111 of non-failing
server 110. At this point, the method can quickly test mass storage system 122 to determine if it is the cause of the failure of server 120. If it is, there is no fault-tolerance recovery possible using the method, and mass storage system 122 can be disconnected from computer 111 at connection means 241. If mass storage system 122 is not the cause of server 120's failure, then the cause must be computer 121, and the method can continue to achieve limited fault tolerance in the presence of computer 121's failure.
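One technique "known to persons skilled in the art" for the remembering of step 303 is a bounded log of blocks written but not mirrored; if the log's capacity is exceeded, the log is abandoned and a full resynchronization is required instead. The sketch below uses hypothetical names and is not the SFT-III mechanism.

```python
class UnmirroredWriteLog:
    """Remembers blocks written to the surviving mass storage system but
    not yet mirrored, up to a fixed capacity (cf. step 303's limit on the
    system resource remembering the data)."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}          # block number -> most recent data
        self.overflowed = False   # True => a full copy is needed instead

    def record(self, block_no, data):
        if self.overflowed:
            return
        if block_no not in self.blocks and len(self.blocks) >= self.capacity:
            # Capacity exceeded: give up on the log; full resync required.
            self.overflowed = True
            self.blocks.clear()
            return
        self.blocks[block_no] = data

    def replay(self, write_block):
        """Write the remembered blocks to the restored mirror (step 306)."""
        for block_no, data in sorted(self.blocks.items()):
            write_block(block_no, data)
```

Keeping only the most recent data per block keeps the log small when the same blocks are rewritten repeatedly, which is the common case for file-system metadata.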
Step 305 commands the operating system of server 110 to scan for new mass storage systems, causing the operating system to determine that mass storage system 122 is now connected to computer 111, along with mass storage system 113. SFT-III will detect through information on mass storage systems 113 and 122 that they contain similar information, but that mass storage system 122 is not consistent with mass storage system 113. In step 306, SFT-III will update mass storage system 122 using the information remembered at step 303 and, after the two mass storage systems are consistent (i.e., contain identical mirrored copies of the stored information), step 307 will begin mirroring all information on both mass storage systems 113 and 122 and resume normal operation of the system. If an operating system other than SFT-III does not provide this automatic update for consistency and mirroring, the implementation of the method will have to provide an equivalent service.

Note that when SFT-III is used, the only steps of the method that must be performed by the NetWare Loadable Module are: (1) detecting the failure of server 120 (step 301), (2) setting connection means 241 to disconnect mass storage system 122 from computer 121 and connecting it to computer 111 (step 304), (3) determining if mass storage system 122 was the cause of the failure of server 120 (also part of step 304), and (4) commanding SFT-III to scan for mass storage systems so that it finds the newly-connected mass storage system 122 (step 305). All the other steps are performed as part of the standard facilities of SFT-III. In other embodiments of the invention, responsibility for performing the steps of the method may be allocated differently.
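The control flow of steps 301 through 307 can be sketched as a single sequence. In the sketch below every callable is a hypothetical stand-in for an operating-system, logging, or switch operation, not an actual NetWare API; it shows only the ordering and the early exit when the storage system itself caused the failure.

```python
def fail_over(detect_failure, stop_mirroring, remember_writes,
              switch_storage_to_survivor, storage_is_healthy,
              rescan_storage, replay_remembered, resume_mirroring):
    """Sketch of steps 301-307: reacting to the failure of server 120."""
    detect_failure()               # step 301: null request completes with error
    stop_mirroring()               # step 302: discontinue mirroring on failed server
    remember_writes()              # step 303: start logging unmirrored writes
    switch_storage_to_survivor()   # step 304: move storage 122 to computer 111
    if not storage_is_healthy():   # step 304 (cont.): was storage 122 the cause?
        return False               # no fault-tolerant recovery possible
    rescan_storage()               # step 305: OS on server 110 discovers storage 122
    replay_remembered()            # step 306: make 122 consistent with 113
    resume_mirroring()             # step 307: mirror on both storage systems
    return True
```

Passing the operations in as parameters mirrors the document's point that responsibility for each step may be allocated differently in other embodiments.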
Figure 4 is a flow diagram illustrating the second portion of the invention - the steps to be taken when previously-failed server 120 becomes available again. Server 120 would typically become available after correction of the problem that caused its failure described above. Step 401 determines that server 120 is available and the method proceeds to step 402. In step 402, the method sets connection means 241 to disconnect mass storage system 122 from computer 111 after commanding SFT-III on server 110 to remove mass storage system 122 from its active mass storage systems. Due to the unavailability of mass storage system 122 on server 110, data mirroring on server 110 will be stopped by SFT-III, and it will begin remembering changes to mass storage system 113 not made to mass storage system 122, to be used in making the storage systems consistent later.

In step 403, mass storage system 122 is reconnected to computer 121, and in step 404, SFT-III on server 120 is commanded to scan for the newly-connected mass storage system 122. This returns mass storage system 122 to the computer 121 to which it was originally connected prior to the server failure. When SFT-III on server 120 detects mass storage system 122, it communicates with server 110 over link 131. At this point, the operating systems on servers 110 and 120 work together to make mass storage system 122 again consistent with mass storage system 113 (i.e., by remembering interim changes to mass storage system 113 and writing them to mass storage system 122), and when consistency is achieved, data mirroring on the two servers resumes. At this point, recovery from the server failure is complete.

In an SFT-III system, the only steps of
the method that the NetWare Loadable Module must perform are: (1) detecting the availability of server 120 (step 401), (2) removing mass storage system 122 from the operating system on server 110 (step 402), (3) disconnecting mass storage system 122 from computer 111 and connecting it to computer 121 by setting connection means 241 (step 403), and (4) commanding SFT-III on server 120 to scan for mass storage so that it locates mass storage system 122 (step 404). The steps involved with making mass storage systems 113 and 122 consistent and reestablishing data mirroring (step 405) are performed as part of the standard facilities of SFT-III. In other embodiments of the invention, responsibility for the steps of the method may be allocated differently.
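The restoration sequence of steps 401 through 405 can be sketched the same way as the failover sequence. As before, every callable is a hypothetical stand-in, not a NetWare API; the sketch shows only the ordering of the operations described above.

```python
def restore_failed_server(detect_available, remove_storage_from_survivor,
                          remember_interim_writes, switch_storage_back,
                          rescan_on_restored, resynchronize, resume_mirroring):
    """Sketch of steps 401-405: returning storage 122 to restored server 120."""
    detect_available()              # step 401: server 120 is available again
    remove_storage_from_survivor()  # step 402: server 110 releases storage 122
    remember_interim_writes()       # step 402 (cont.): log changes to storage 113
    switch_storage_back()           # step 403: reconnect 122 to computer 121
    rescan_on_restored()            # step 404: server 120 discovers storage 122
    resynchronize()                 # step 405: replay interim changes over link 131
    resume_mirroring()              # step 405 (cont.): mirroring resumes
```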
Figure 2 illustrates optional mass storage systems 112 and 123 attached to computers 111 and 121, respectively. While these two mass storage systems are not required by the method of the present invention, they are useful during the restoration of a failed server. They provide storage for the operating system and other information needed by failed server 120 to begin operation before mass storage system 122 is switched from computer 111 to computer 121. Were mass storage system 123 not available, some means of having mass storage system 122 connected both to computer 121 (for initializing its operation following correction of its failure) and computer 111 (for continued disk mirroring) would be necessary. Alternatively, if the initialization time of server 120 is short, mass storage system 122 could be switched from computer 111 to computer 121 at the start of server 120's initialization, though this would result in more changes that must be remembered and made before data mirroring can begin again.
It is to be understood that the above described embodiments are merely illustrative of numerous and varied other embodiments which may constitute applications of the principles of the invention. Such other embodiments may be readily devised by those skilled in the art without departing from the spirit or scope of this invention and it is our intent they be deemed within the scope of our invention.

Claims

We claim:

1. A method for rapid failure recovery and system restoration in a fault-tolerant computer system, said computer system comprising:
(A) a first server computer system, comprising a first computer executing an operating system;
(B) a second server computer system, comprising a second computer executing an operating system;
(C) a first mass storage system connected to said first computer;
(D) a second mass storage system; and
(E) means for connecting said second mass storage system to said first computer and to said second computer;
WHEREIN whenever said first computer writes data to said first mass storage system, said second computer writes a mirror copy of said data to said second mass storage system,
the method comprising the steps of:
(1) detecting a failure of said second computer;
(2) discontinuing causing said writing of said mirror copy on said second mass storage system;
(3) remembering data written to said first mass storage system but not written to said second mass storage system;
(4) configuring said second mass storage system to record information from said first computer;
(5) writing said remembered data to said second mass storage system;
(6) whenever new data is written to said first mass storage system, writing a mirror copy of said new data to said second mass storage system;
(7) detecting said second computer's availability;
(8) reconfiguring said second mass storage system to record information from said second computer;
(9) reestablishing data mirroring such that whenever said first computer writes data to said first mass storage system, said second computer writes a mirror copy of said data on said second mass storage system.

2. A method as in claim 1 wherein step (1) is performed by said first computer.

3. A method as in claim 2 wherein step (2) is performed by said first computer.

4. A method as in claim 1 wherein step (3) is performed by said first computer.

5. A method as in claim 4 wherein step (5) is performed by said first computer.

6. A method as in claim 5 wherein step (6) is performed by said first computer.

7. A method as in claim 1, wherein said first mass storage system and said second mass storage system each comprise at least one magnetic disk drive.

8. A method as in claim 1, wherein said means for connecting said second mass storage system comprises a serial network.

9. A method as in claim 1 wherein said operating systems are the SFT-III operating system.

10. A method as in claim 9 wherein steps (1), (4) and (5) are performed by a NetWare Loadable Module.
11. A method for rapid failure recovery and system restoration in a fault-tolerant computer system, said computer system comprising:
(A) a first server computer system, comprising a first computer executing an operating system;
(B) a second server computer system, comprising a second computer executing an operating system;
(C) a first mass storage system connected to said first computer;
(D) a second mass storage system; and
(E) means for selectively connecting said second mass storage system to said first computer and to said second computer;
WHEREIN in the absence of a fault said second mass storage system is connected to said second computer; and
WHEREIN whenever said first computer writes data to said first mass storage system said first computer can also cause said second computer to write a mirror copy of said data to said second mass storage system,
the method of the invention comprising:
(1) on said first computer, detecting a failure of said second computer;
(2) on said first computer, discontinuing causing said writing of said mirror copy on said second mass storage system by said second computer;
(3) on said first computer, remembering data written to said first mass storage system but not written to said second mass storage system;
(4) on said first computer, setting said means for connecting said second mass storage system to connect said second mass storage system to said first computer;
(5) on said first computer, commanding said operating system of said first computer to scan for mass storage systems such that said operating system of said first computer will determine that both said first mass storage system and said second mass storage system are now connected to said first computer;
(6) on said first computer, writing said remembered data to said second mass storage system;
(7) on said first computer, whenever new data is written to said first mass storage system, writing a mirror copy of said new data to said second mass storage system;
(8) on said first computer, detecting said second computer's availability;
(9) on said first computer, commanding said operating system of said first computer to remove said second mass storage system;
(10) setting said means for connecting said second mass storage system to connect said second mass storage system to said second computer;
(11) on said second computer, commanding said operating system of said second computer to scan for mass storage systems such that said operating system of said second computer will determine that said second mass storage system is now connected to said second computer;
(12) reestablishing data mirroring such that whenever said first computer writes data to said first mass storage system said first computer also causes said second computer to write a mirror copy of said data on said second mass storage system.
12. A method as in claim 11, wherein said first mass storage system and said second mass storage system each comprise at least one magnetic disk drive.

13. A method as in claim 12, wherein said means for connecting said second mass storage system comprises a serial network.

14. A method for rapid failure recovery in a fault-tolerant computer system, said computer system comprising:
(A) a first server computer system, comprising a first computer executing an operating system;
(B) a second server computer system, comprising a second computer;
(C) a first mass storage system connected to said first computer;
(D) a second mass storage system; and
(E) means for selectively connecting said second mass storage system to said first computer and to said second computer;
WHEREIN in the absence of a fault said second mass storage system is connected to said second computer; and
WHEREIN whenever said first computer writes data to said first mass storage system said first computer can also cause said second computer to write a mirror copy of said data on said second mass storage system,
the method of the invention comprising said first computer performing the steps of:
(1) detecting a failure of said second computer;
(2) discontinuing causing said writing of said mirror copy on said second mass storage system by said second computer;
(3) remembering data written to said first mass storage system but not written to said second mass storage system;
(4) setting said means for connecting said second mass storage system to connect said second mass storage system to said first computer;
(5) commanding said operating system of said first computer to scan for mass storage systems such that said operating system of said first computer will determine that both said first mass storage system and said second mass storage system are now connected to said first computer;
(6) writing said remembered data to said second mass storage system;
(7) whenever new data is written to said first mass storage system, writing a mirror copy of said new data to said second mass storage system.

15. A method as in claim 14, wherein said first mass storage system and said second mass storage system each comprise at least one magnetic disk drive.

16. A method as in claim 15, wherein said means for connecting said second mass storage system comprises a serial network.
17. A method for system restoration in a fault-tolerant computer system, said computer system comprising:
(A) a first server computer system, comprising a first computer executing an operating system;
(B) a second server computer system, comprising a second computer executing an operating system;
(C) a first mass storage system connected to said first computer;
(D) a second mass storage system; and
(E) means for connecting said second mass storage system to said first computer and to said second computer;
WHEREIN said second computer is initially unavailable for use, and
WHEREIN said second mass storage system is initially connected to said first computer,
the method comprising:
(1) on said first computer, detecting said second computer's availability;
(2) on said first computer, commanding said operating system of said first computer to remove said second mass storage system;
(3) setting said means for connecting said second mass storage system to connect said second mass storage system to said second computer;
(4) on said second computer, commanding said operating system of said second computer to scan for mass storage systems such that said operating system of said second computer will determine that said second mass storage system is now connected to said second computer;
(5) reestablishing data mirroring such that whenever said first computer writes data to said first mass storage system said first computer also causes said second computer to write a mirror copy of said data on said second mass storage system.
18. A method as in claim 17, wherein said first mass storage system and said second mass storage system each comprise at least one magnetic disk drive.

19. A method as in claim 18, wherein said means for connecting said second mass storage system comprises a serial network.

20. A method as in claim 17 wherein said operating system is the SFT-III operating system.

21. A method as in claim 20 wherein steps (1), (4) and (5) are performed by a NetWare Loadable Module.
22. A method for rapid failure recovery in a fault-tolerant computer system, said computer system comprising:
(A) a first server computer system, comprising a first computer executing an operating system;
(B) a second server computer system, comprising a second computer executing an operating system;
(C) a first mass storage system connected to said first computer;
(D) a second mass storage system; and
(E) means for connecting said second mass storage system to said first computer and to said second computer;
WHEREIN whenever said first computer writes data to said first mass storage system, said second computer writes a mirror copy of said data to said second mass storage system,
the method comprising the steps of:
(1) detecting a failure of said second computer;
(2) discontinuing causing said writing of said mirror copy on said second mass storage system;
(3) remembering data written to said first mass storage system but not written to said second mass storage system;
(4) configuring said second mass storage system to record information from said first computer;
(5) writing said remembered data to said second mass storage system; and
(6) whenever new data is written to said first mass storage system, writing a mirror copy of said new data to said second mass storage system.
23. A method for system restoration in a fault-tolerant computer system, said computer system comprising:
(A) a first server computer system, comprising a first computer executing an operating system;
(B) a second server computer system, comprising a second computer executing an operating system;
(C) a first mass storage system connected to said first computer;
(D) a second mass storage system;
(E) means for connecting said second mass storage system to said first computer and to said second computer;
WHEREIN said second computer is initially unavailable for use; and
WHEREIN said second mass storage system is initially configured to record information from said first computer,
the method comprising the steps of:
(1) detecting said second computer's availability;
(2) reconfiguring said second mass storage system to record information from said second computer;
(3) establishing data mirroring such that whenever said first computer writes data to said first mass storage system, said second computer writes a mirror copy of said data on said second mass storage system.
24. A method for rapid failure recovery and system restoration in a fault-tolerant computer system, the method comprising the steps of:
(1) obtaining a computer system, the computer system comprising:
(A) a first server computer system, comprising a first computer executing an operating system;
(B) a second server computer system, comprising a second computer executing an operating system;
(C) a first mass storage system connected to said first computer;
(D) a second mass storage system; and
(E) means for connecting said second mass storage system to said first computer and to said second computer;
(2) operating said computer system such that absent a fault, whenever said first computer writes data to said first mass storage system, said second computer writes a mirror copy of said data to said second mass storage system;
(3) detecting a failure of said second computer;
(4) discontinuing causing said writing of said mirror copy on said second mass storage system;
(5) remembering data written to said first mass storage system but not written to said second mass storage system;
(6) configuring said second mass storage system to record information from said first computer;
(7) writing said remembered data to said second mass storage system;
(8) whenever new data is written to said first mass storage system, writing a mirror copy of said new data to said second mass storage system;
(9) detecting said second computer's availability;
(10) reconfiguring said second mass storage system to record information from said second computer;
(11) reestablishing data mirroring such that whenever said first computer writes data to said first mass storage system, said second computer writes a mirror copy of said data on said second mass storage system.
PCT/US1994/007009 1993-06-23 1994-06-21 Method for improving disk mirroring error recovery in a computer system including an alternate communication path WO1995000906A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU72111/94A AU7211194A (en) 1993-06-23 1994-06-21 Method for improving disk mirroring error recovery in a computer system including an alternate communication path

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US8139193A 1993-06-23 1993-06-23
US08/081,391 1993-06-23

Publications (1)

Publication Number Publication Date
WO1995000906A1

Family

ID=22163849

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/007009 WO1995000906A1 (en) 1993-06-23 1994-06-21 Method for improving disk mirroring error recovery in a computer system including an alternate communication path

Country Status (2)

Country Link
AU (1) AU7211194A (en)
WO (1) WO1995000906A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5157663A (en) * 1990-09-24 1992-10-20 Novell, Inc. Fault tolerant computer system
US5295258A (en) * 1989-12-22 1994-03-15 Tandem Computers Incorporated Fault-tolerant computer system with online recovery and reintegration of redundant components

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7389312B2 (en) * 1997-04-28 2008-06-17 Emc Corporation Mirroring network data to establish virtual storage area network
US6167531A (en) * 1998-06-18 2000-12-26 Unisys Corporation Methods and apparatus for transferring mirrored disk sets during system fail-over
EP1376361A1 (en) * 2001-03-26 2004-01-02 Duaxes Corporation Server duplexing method and duplexed server system
EP1376361A4 (en) * 2001-03-26 2005-11-16 Duaxes Corp Server duplexing method and duplexed server system
US7340637B2 (en) * 2001-03-26 2008-03-04 Duaxes Corporation Server duplexing method and duplexed server system
US8086572B2 (en) 2004-03-30 2011-12-27 International Business Machines Corporation Method, system, and program for restoring data to a file
US7783931B2 (en) 2007-05-04 2010-08-24 International Business Machines Corporation Alternate communication path between ESSNI server and CEC
CN103907094A (en) * 2011-10-31 2014-07-02 国际商业机器公司 Serialization of access to data in multi-mainframe computing environments
CN104090729A (en) * 2014-07-04 2014-10-08 浙江宇视科技有限公司 Method and device for repairing mirror image synchronization through service write operation
CN104090729B (en) * 2014-07-04 2017-08-15 浙江宇视科技有限公司 The method and device of mirror image synchronization is repaired by business write operation

Also Published As

Publication number Publication date
AU7211194A (en) 1995-01-17


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR BY CA CH CN CZ DE DK ES FI GB HU JP KP KR KZ LK LU LV MG MN MW NL NO NZ PL PT RO RU SD SE SK UA US UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

WD Withdrawal of designations after international publication

Free format text: US

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA