WO2003025802A1 - A system and method for collaborative caching in a multinode system - Google Patents

A system and method for collaborative caching in a multinode system

Info

Publication number
WO2003025802A1
Authority
WO
WIPO (PCT)
Prior art keywords
node
lock
operating system
storage
block
Prior art date
Application number
PCT/US2002/030084
Other languages
French (fr)
Inventor
Brent A. Kingsbury
Sam Revitch
Terence M. Rokop
Ken Dove
Original Assignee
Polyserve, Inc.
Application filed by Polyserve, Inc. filed Critical Polyserve, Inc.
Priority claimed from US10/251,626 external-priority patent/US7111197B2/en
Publication of WO2003025802A1 publication Critical patent/WO2003025802A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/18 - File system types
    • G06F16/182 - Distributed file systems
    • G06F16/1824 - Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F16/1827 - Management specifically adapted to NAS

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A system and method are disclosed for accessing data in a multi-node system comprising providing a first node associated with a first operating system (300a); providing a second node associated with a second operating system (300b), wherein the second operating system is independent of the first operating system; providing a storage (306a-d), wherein the first node directly accesses the storage (306a) and the second node directly accesses the storage (306b); requesting a lock for a block by the first node to the second node; obtaining the lock from the second node; and obtaining the block from the second node.

Description

A SYSTEM AND METHOD FOR COLLABORATIVE CACHING
IN A MULTINODE SYSTEM
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application No. 60/324,196 (Attorney Docket No. POLYP001+) entitled SHARED STORAGE LOCK: A NEW SOFTWARE SYNCHRONIZATION MECHANISM FOR ENFORCING MUTUAL EXCLUSION AMONG MULTIPLE NEGOTIATORS filed September 21, 2001, which is incorporated herein by reference for all purposes.
This application claims priority to U.S. Provisional Patent Application No. 60/324,226 (Attorney Docket No. POLYP002+) entitled JOURNALING
MECHANISM WITH EFFICIENT, SELECTIVE RECOVERY FOR MULTI-NODE ENVIRONMENTS filed September 21, 2001, which is incorporated herein by reference for all purposes.
This application claims priority to U.S. Provisional Patent Application No. 60/324,224 (Attorney Docket No. POLYP003+) entitled COLLABORATIVE
CACHING IN A MULTI-NODE FILESYSTEM filed September 21, 2001, which is incorporated herein by reference for all purposes.
This application claims priority to U.S. Provisional Patent Application No. 60/324,242 (Attorney Docket No. POLYP005+) entitled DISTRIBUTED MANAGEMENT OF A STORAGE AREA NETWORK filed September 21, 2001, which is incorporated herein by reference for all purposes. This application claims priority to U.S. Provisional Patent Application No. 60/324,195 (Attorney Docket No. POLYP006+) entitled METHOD FOR IMPLEMENTING JOURNALING AND DISTRIBUTED LOCK MANAGEMENT filed September 21, 2001, which is incorporated herein by reference for all purposes.
This application claims priority to U.S. Provisional Patent Application No.
60/324,243 (Attorney Docket No. POLYP007+) entitled MATRIX SERVER: A HIGHLY AVAILABLE MATRIX PROCESSING SYSTEM WITH COHERENT SHARED FILE STORAGE filed September 21, 2001, which is incorporated herein by reference for all purposes.
This application claims priority to U.S. Provisional Patent Application No.
60/324,787 (Attorney Docket No. POLYP008+) entitled A METHOD FOR EFFICIENT ON-LINE LOCK RECOVERY IN A HIGHLY AVAILABLE MATRIX PROCESSING SYSTEM filed September 24, 2001, which is incorporated herein by reference for all purposes.
This application claims priority to U.S. Provisional Patent Application No.
60/327,191 (Attorney Docket No. POLYP009+) entitled FAST LOCK RECOVERY: A METHOD FOR EFFICIENT ON-LINE LOCK RECOVERY IN A HIGHLY AVAILABLE MATRIX PROCESSING SYSTEM filed October 1, 2001, which is incorporated herein by reference for all purposes.
This application is related to co-pending U.S. Patent Application No. (Attorney Docket No. POLYP001) entitled A SYSTEM AND
METHOD FOR SYNCHRONIZATION FOR ENFORCING MUTUAL EXCLUSION AMONG MULTIPLE NEGOTIATORS filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. Patent
Application No. (Attorney Docket No. POLYP002) entitled SYSTEM
AND METHOD FOR JOURNAL RECOVERY FOR MULTINODE ENVIRONMENTS filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. Patent Application No. (Attorney Docket No. POLYP005) entitled A SYSTEM AND
METHOD FOR MANAGEMENT OF A STORAGE AREA NETWORK filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. Patent Application No. (Attorney Docket No. POLYP006) entitled SYSTEM AND METHOD FOR IMPLEMENTING
JOURNALING IN A MULTI-NODE ENVIRONMENT filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. Patent
Application No. (Attorney Docket No. POLYP007) entitled A
SYSTEM AND METHOD FOR A MULTI-NODE ENVIRONMENT WITH SHARED STORAGE filed concurrently herewith, which is incorporated herein by reference for all purposes; and co-pending U.S. Patent Application No. (Attorney Docket No. POLYP009) entitled A SYSTEM AND
METHOD FOR EFFICIENT LOCK RECOVERY filed concurrently herewith, which is incorporated herein by reference for all purposes.
FIELD OF THE INVENTION The present invention relates generally to computer systems. More specifically, a system and method for collaborative caching in a multi-node file system is disclosed.
BACKGROUND OF THE INVENTION
In today's complex network systems, multiple nodes may be set up to share data storage. Preferably, in order to share storage, only one node or application is allowed to alter data at any given time. In order to accomplish this synchronization, a lock may be used.
Typically, it can be slow for a node to read or write to a particular block in a shared storage system because of the time it can take to coordinate the locking mechanism and to retrieve the document from shared storage.
It would be desirable to speed up the time required to obtain access to a shared document. The present invention addresses such a need.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
Figure 1 is a block diagram of a system for accessing data according to an embodiment of the present invention.
Figure 2 is another block diagram of a system according to an embodiment of the present invention.
Fig. 3 is a block diagram of software components inside a node according to an embodiment of the present invention.
Figs. 4A-4B show a flow diagram for a method according to an embodiment of the present invention for accessing data.
Figs. 5A-5E show another flow diagram of a method according to an embodiment of the present invention for accessing data.
Figure 6 is another block diagram of the software components of server 300 according to an embodiment of the present invention.
DETAILED DESCRIPTION
It should be appreciated that the present invention can be implemented in numerous ways, including as a process, an apparatus, a system, or a computer readable medium such as a computer readable storage medium or a computer network wherein program instructions are sent over optical or electronic communication links. It should be noted that the order of the steps of disclosed processes may be altered within the scope of the invention.
A detailed description of one or more preferred embodiments of the invention is provided below along with accompanying figures that illustrate by way of example the principles of the invention. While the invention is described in connection with such embodiments, it should be understood that the invention is not limited to any embodiment. On the contrary, the scope of the invention is limited only by the appended claims and the invention encompasses numerous alternatives, modifications and equivalents. For the purpose of example, numerous specific details are set forth in the following description in order to provide a thorough understanding of the present invention. The present invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the present invention is not unnecessarily obscured.
Figure 1 is a block diagram of a system for accessing data in a multi-node environment according to an embodiment of the present invention. In this example, servers 300A-300D are coupled via network interconnects 302. The network interconnects 302 can represent any network infrastructure such as an Ethernet, InfiniBand network or Fibre Channel network capable of host-to-host communication. The servers 300A-300D are also coupled to the data storage interconnect 304, which in turn is coupled to shared storage 306A-306D. The data storage interconnect 304 can be any interconnect that can allow access to the shared storage 306A-306D by servers 300A-300D. An example of the data storage interconnect 304 is a Fibre Channel switch, such as a Brocade 3200 Fibre Channel switch. Alternatively, the data storage network might be an iSCSI or other IP storage network, InfiniBand network, or another kind of host-to-storage network. In addition, the network interconnects 302 and the data storage interconnect 304 may be embodied in a single interconnect.
Servers 300A-300D can be any computer, preferably an off-the-shelf computer or server, or any equivalent thereof. Servers 300A-300D can each run operating systems that are independent of each other. Accordingly, each server 300A-300D can, but does not need to, run a different operating system. For example, server 300A may run Microsoft Windows, while server 300B runs Linux, and server 300C can simultaneously run a Unix operating system. An advantage of running independent operating systems for the servers 300A-300D is that the entire multi-node system can be dynamic. For example, one of the servers 300A-300D can fail while the other servers 300A-300D continue to operate.
The shared storage 306A-306D can be any storage device, such as hard drive disks, compact disks, tape, and random access memory. A filesystem is a logical entity built on the shared storage. Although the shared storage 306A-306D is typically considered a physical device while the filesystem is typically considered a logical structure overlaid on part of the storage, the filesystem is sometimes referred to herein as shared storage for simplicity. For example, when it is stated that shared storage fails, it can be a failure of a part of a filesystem, one or more filesystems, or the physical storage device on which the filesystem is overlaid. Accordingly, shared storage, as used herein, can mean the physical storage device, a portion of a filesystem, a filesystem, filesystems, or any combination thereof.
Figure 2 is another block diagram of a system according to an embodiment of the present invention. In this example, the system preferably has no single point of failure. Accordingly, servers 300A'-300D' are coupled with multiple network interconnects 302A-302D. The servers 300A'-300D' are also shown to be coupled with multiple storage interconnects 304A-304B. The storage interconnects 304A-304B are each coupled to a plurality of data storage 306A'-306D'.
In this manner, there are redundancies in the system such that if any of the components or connections fail, the entire system can continue to operate.
In the example shown in Figure 2, as well as the example shown in Figure 1, the number of servers 300A'-300D', the number of storage interconnects 304A-304B, and the number of data storage devices 306A'-306D' can be as many as the customer requires and are not physically limited by the system. Likewise, the operating systems used by servers 300A'-300D' can also be as many independent operating systems as the customer requires.
Fig. 3 is a block diagram of software components inside a node 300. In this example, node 300 is shown to include a buffer cache 350, processes 352, a distributed lock manager (DLM) 354, and a lock caching layer (LCL) 356. According to an embodiment of the present invention, a block is kept in the node's cache (in local storage) after node 300 changes the block rather than writing it immediately into the shared storage. In this manner, it is faster if that node 300 can find the latest document in its own buffer cache 350 rather than taking the time to access the shared storage.
There are various ways to keep a node from using a stale copy of a block. One way is to invalidate the cached copy of the block associated with a lock when the lock is released. Another way is to invalidate or refresh the cached copy of the block associated with a new lock when a new lock is obtained.
The distributed lock manager communicates with other DLMs in other nodes and also communicates with the lock caching layer 356. The lock caching layer 356 calls requested tasks before a lock is downgraded or released.
A process 352, such as an application or a file system, can obtain a lock on a block via the lock caching layer 356, use it, then eventually relinquish the lock on the block. The block is then stored in buffer cache 350. The next time a process 352 requests that block, a search can be performed in the buffer cache 350 to find that block. If the block is not found in the buffer cache, then it can be retrieved from the shared storage.
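The read path just described, checking the local buffer cache 350 first and falling back to shared storage on a miss, can be illustrated with a minimal sketch. The patent does not describe implementation-level interfaces, so the class names, the read_block/write_block helpers, and the invalidate-on-release policy below are assumptions made purely for illustration.

```python
# Minimal sketch (not PolyServe's implementation) of a per-node buffer cache
# that serves repeat reads locally and falls back to shared storage on a miss.

class DictStorage:
    """Stand-in for shared storage; exists only to make the sketch runnable."""
    def __init__(self):
        self._disk = {}
    def read_block(self, block_id):
        return self._disk.get(block_id, b"")
    def write_block(self, block_id, data):
        self._disk[block_id] = data

class BufferCache:
    """Keeps blocks in local memory so a node can avoid shared-storage reads."""
    def __init__(self, shared_storage):
        self._storage = shared_storage
        self._blocks = {}                    # block_id -> latest locally known data

    def get(self, block_id):
        """Return the cached copy, or read the block from shared storage on a miss."""
        if block_id in self._blocks:
            return self._blocks[block_id]    # cache hit: no storage access needed
        data = self._storage.read_block(block_id)
        self._blocks[block_id] = data
        return data

    def put(self, block_id, data):
        """Keep a modified block locally rather than writing it through immediately."""
        self._blocks[block_id] = data

    def invalidate(self, block_id):
        """Drop the cached copy, e.g. when the lock protecting it is released."""
        self._blocks.pop(block_id, None)

if __name__ == "__main__":
    storage = DictStorage()
    storage.write_block("blk-7", b"on-disk version")
    cache = BufferCache(storage)
    print(cache.get("blk-7"))    # miss: read from shared storage
    cache.put("blk-7", b"locally modified version")
    print(cache.get("blk-7"))    # hit: served from the buffer cache
```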
Figs. 4A-4B show a flow diagram for a method according to an embodiment of the present invention for accessing data. In this example, a process within a particular node requests a write lock for a document from the lock caching layer (LCL) (400). The LCL obtains a distributed lock manager (DLM) lock for that document (402). The LCL grants the LCL lock to the process for that document (404). When the process is finished and relinquishes the LCL lock, the LCL caches the DLM lock (406). In this example, 400-406 occur within a single node. Another node then requests a read lock on the document and the request is received by this node's DLM (408). The DLM asks the LCL to downgrade the DLM lock (450 of Fig. 4B). The LCL then determines that there are no local processes using the lock and writes the document to shared storage (452). The LCL informs the DLM that it is downgrading the lock from write to read (454). The DLM then passes the lock as well as the latest version of the document to the requesting node (456).
Accordingly, by sending the requested document directly from one node to the other, access to this data is more efficient than having to retrieve it from the shared storage.
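The sequence 400 through 456 can also be written out as a short, runnable walk-through. The Node class, the fig4_flow function, and the dictionary standing in for shared storage are inventions of this sketch; only the ordering of the steps follows Figs. 4A-4B, and a real system would of course exchange messages between machines rather than share Python objects.

```python
# Toy walk-through of the Fig. 4A-4B sequence (steps 400-456).  Node and
# fig4_flow are invented names; only the step ordering follows the figures.

class Node:
    def __init__(self, name, shared_storage):
        self.name = name
        self.storage = shared_storage   # dict standing in for shared storage
        self.cache = {}                 # document -> latest locally known content
        self.lock_mode = {}             # document -> "write" | "read" | None

def fig4_flow(writer, reader, doc):
    # (400)-(404): a process on `writer` asks its LCL for a write lock; the LCL
    # obtains the backing DLM lock and grants the LCL lock to the process.
    writer.lock_mode[doc] = "write"
    writer.cache[doc] = "edited locally"        # modified block stays in the cache

    # (406): the process relinquishes the LCL lock; the LCL caches the DLM lock.

    # (408)/(450): `reader` requests a read lock; the writer's DLM asks its LCL
    # to downgrade.  (452): no local process holds the lock, so the cached
    # document is written to shared storage.
    writer.storage[doc] = writer.cache[doc]

    # (454): the LCL tells the DLM it is downgrading from write to read.
    writer.lock_mode[doc] = "read"

    # (456): the DLM passes the lock and the latest version to the requester,
    # so the reader does not have to read shared storage at all.
    reader.lock_mode[doc] = "read"
    reader.cache[doc] = writer.cache[doc]

if __name__ == "__main__":
    shared = {}
    a, b = Node("A", shared), Node("B", shared)
    fig4_flow(a, b, "doc-1")
    print(b.cache["doc-1"], b.lock_mode["doc-1"])   # edited locally read
```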
Figs. 5A-5E show another flow diagram of a method according to an embodiment of the present invention for accessing data. In the example shown in Figs. 5A-5C, a requesting node requests a shared lock. Variations of this example can be used to accommodate other types of locks, such as an exclusive lock or a lock with a different level of exclusion.
In this example, the requesting node asks its DLM for a shared lock (500). It is determined whether the requesting node is the home node (502). A lock home node, as used herein, is the server that is responsible for granting or denying lock requests for a given DLM lock when there is no cached lock reference available on the requesting node. In this embodiment, there is one lock home node per lock. The home node does not necessarily hold the lock locked itself, but if other nodes hold the lock locked or cached, the home node has a description of the lock, since the other nodes that hold the lock locked or cached have communicated with the home node in order to get it locked or cached.
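The text specifies that there is exactly one lock home node per lock, but it does not say how that node is chosen. A common approach, assumed here purely for illustration, is to hash the lock name over the current membership so that every node computes the same home node without extra communication; the function below is a hypothetical sketch of that idea, not the DLM's actual algorithm.

```python
# Hypothetical home-node selection by hashing the lock name over the node
# membership list; the patent does not describe the actual selection scheme.

import hashlib

def home_node(lock_name, members):
    """Pick a deterministic home node for a lock from the membership list."""
    digest = hashlib.sha1(lock_name.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(members)
    return members[index]

if __name__ == "__main__":
    matrix = ["node-a", "node-b", "node-c", "node-d"]
    for lock in ("inode-42", "inode-43", "superblock"):
        print(lock, "->", home_node(lock, matrix))
```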
If the requesting node is not the home node, then the DLM of the requesting node requests a shared lock from the home node (504). It is also determined whether a lock is held by a node other than the requesting node (506). If no lock is held by a node other than the requesting node, the home node then gives the requesting node the lock in shared mode (508). The requesting node then reads the content from shared storage (510).
If the requesting node is the home node (502), then it is determined whether a lock is held by another node (550). If a lock is not held by another node, then the requesting node obtains the lock and reads from shared storage (562). If, however, there is a lock held by another node, then it is also determined whether the other node holds a shared lock (552). If the other node holds a shared lock, then the requesting node grants itself a shared lock (563) and sends a request for content to the owner of the shared lock (564).
It is then determined whether the owner has the content in its local cache (580). If yes, the owner of the shared lock sends the content to the requesting node (586), otherwise the owner tells the requesting node that it does not have the content (582) and the requesting node reads the content from shared storage (584). If the other node does not hold a shared lock (552), and instead holds an exclusive lock, then the requesting node sends a request for the downgrade of the lock and content to the owner of the exclusive lock (554). Then, it is determined whether the owner has the content in the local cache (590 of Fig. 5C). If the owner has the content in the local cache, the owner writes the content to shared storage (558). The owner then sends the message to the home node (the requestor) with the content and the downgrade request (560). The requesting node then grants itself a shared lock (596).
If the owner does not have the content in the local cache, it sends the downgrade message to the requesting node (592). The requesting node then grants itself a shared lock and reads the content from shared storage (594).
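The branch in which the requesting node is itself the home node (steps 550 through 596) condenses into the sketch below. The Owner dataclass and request_shared_lock_at_home are hypothetical names, and real nodes would exchange messages rather than share Python objects; only the branch structure and the parenthesized step numbers follow Figs. 5B-5C.

```python
# Condensed sketch of the requester-is-home-node branch (Figs. 5B-5C).
# The parenthesized numbers refer to the steps in those figures.

from dataclasses import dataclass, field

@dataclass
class Owner:
    mode: str                                    # "shared" or "exclusive"
    cached: dict = field(default_factory=dict)   # block -> content in local cache

def request_shared_lock_at_home(block, owner, shared_storage):
    """Return (content, source) as seen by the requesting/home node."""
    if owner is None:                                        # (550): no other holder
        return shared_storage.get(block), "shared storage"   # (562)

    if owner.mode == "shared":                               # (552)/(563)/(564)
        if block in owner.cached:                            # (580)
            return owner.cached[block], "owner cache"        # (586)
        return shared_storage.get(block), "shared storage"   # (582)/(584)

    # Owner holds an exclusive lock: request a downgrade plus the content (554).
    if block in owner.cached:                                # (590)
        shared_storage[block] = owner.cached[block]          # (558): write-back
        return owner.cached[block], "owner cache"            # (560)/(596)
    return shared_storage.get(block), "shared storage"       # (592)/(594)

if __name__ == "__main__":
    storage = {"blk": "on-disk copy"}
    print(request_shared_lock_at_home("blk", None, storage))
    print(request_shared_lock_at_home("blk", Owner("exclusive", {"blk": "dirty copy"}), storage))
    print(storage["blk"])   # the dirty copy has been written back to shared storage
```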
If it is determined that a lock is held by a node other than the requesting node (506 of Fig. 5A), then it is also determined whether the held lock is a shared lock (600 of Fig. 5D). If it is a shared lock, then it is also determined whether the home node holds the lock (602). If the home node holds the lock, then it sends the lock as well as the content to the requester (608).
If the home node does not hold the lock (602), it then sends the content request to the lock holder (612). The content is sent from the lock holder to the home node (614). The home node sends the lock as well as the content to the requester (616).
If the lock held by another node is not a shared lock (600), for example, it is an exclusive lock, then it is determined whether the home node holds the lock (650 of Fig. 5E). If the home node holds the lock, it then writes the content to the shared storage (654). The home node downgrades the exclusive lock to shared and sends the shared lock to the requester along with the content if known (656). If the home node does not hold the lock (650), it then sends the request for downgrade and content to the owner of the lock (660). The owner of the lock writes the content to shared storage (662). The owner of the lock then sends the content and a message that it is downgrading from exclusive lock to shared lock to the home node (664). The home node sends the lock and the content to the requester (666).
It should be noted that in steps 616, 608, 656 and 666, the home node sends the content to the requester if the home node has the content in its cache. If, however, the home node does not have the content in its cache, it then notifies the requester that it does not have the content in the cache and the requester retrieves the content from the shared storage. In another embodiment of the present invention, the nodes can access information directly amongst each other, without regularly writing to the shared storage. Accordingly, Figs. 5A-5E still apply to this embodiment except that they would be modified to delete 558 of Fig. 5C, 654 of Fig. 5E, and 662 of Fig. 5E.
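For the Fig. 5E path, where another node holds an exclusive lock, the difference between the two embodiments just described reduces to whether the write-back steps (654 and 662) are performed before the content is forwarded. The sketch below uses an invented write_through flag and toy Node objects to make that difference explicit; it is not the patent's code, and the names are assumptions.

```python
# Hypothetical sketch of the Fig. 5E path: the home node obtains content on
# behalf of a remote requester while another node holds an exclusive lock.
# With write_through=False the write-back steps (654/662) are skipped, as in
# the embodiment that passes content directly between nodes.

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # block -> content held in the local buffer cache
        self.lock_mode = None    # "exclusive", "shared", or None

def serve_remote_shared_request(home, holder, block, shared_storage, write_through=True):
    """Return the content the home node forwards to the requester (666)."""
    if holder is home:                                   # (650): home holds the lock
        if write_through:
            shared_storage[block] = home.cache[block]    # (654): write back first
        home.lock_mode = "shared"                        # (656): downgrade
        return home.cache[block]

    # (660): ask the exclusive holder to downgrade and supply the content.
    if write_through:
        shared_storage[block] = holder.cache[block]      # (662): write back first
    holder.lock_mode = "shared"                          # (664): downgrade message
    return holder.cache[block]                           # forwarded by the home node

if __name__ == "__main__":
    home = Node("home")

    owner = Node("owner")
    owner.lock_mode = "exclusive"
    owner.cache["blk"] = "dirty copy"
    storage = {}
    print(serve_remote_shared_request(home, owner, "blk", storage))   # dirty copy
    print("blk" in storage)                                           # True: 662 ran

    owner2 = Node("owner2")
    owner2.lock_mode = "exclusive"
    owner2.cache["blk"] = "dirty copy"
    storage2 = {}
    print(serve_remote_shared_request(home, owner2, "blk", storage2, write_through=False))
    print("blk" in storage2)                                          # False: 662 skipped
```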
If the requesting node requests an exclusive lock in 500 of Fig. 5A, rather than a shared lock, then 566 of Fig. 5B would change to "owner of shared lock sends content to requesting node and also gives up the lock to the requesting node".
Likewise, 560 would also change from "downgrading its lock" to "giving up its lock".
614 of Fig. 5D would add that "the owner of the lock gives up the lock to the requestor". And 664 of Fig. 5E would also change from "downgrade" to "give up its lock".
Figure 6 is another block diagram of the software components of server 300 according to an embodiment of the present invention. In an embodiment of the present invention, each server 300A-300D of Fig. 1 includes these software components. In this embodiment, the following components are shown:
The Distributed Lock Manager (DLM) 1500 manages matrix-wide locks for the filesystem image 306a-306d, including the management of lock state during crash recovery. The Matrix Filesystem 1504 uses DLM 1500-managed locks to implement matrix-wide mutual exclusion and matrix-wide filesystem 306a-306d metadata and data cache consistency. The DLM 1500 is a distributed symmetric lock manager. Preferably, there is an instance of the DLM 1500 resident on every server in the matrix. Every instance is a peer to every other instance; there is no master/slave relationship among the instances.
The lock-caching layer ("LCL") 1502 is a component internal to the operating system kernel that interfaces between the Matrix Filesystem 1504 and the application-level DLM 1500. The purposes of the LCL 1502 include the following:
1. It hides the details of the DLM 1500 from kernel-resident clients that need to obtain distributed locks.
2. It caches DLM 1500 locks (that is, it may hold on to DLM 1500 locks after clients have released all references to them), sometimes obviating the need for kernel components to communicate with an application-level process (the DLM 1500) to obtain matrix-wide locks.
3. It provides the ability to obtain locks in both process and server scopes (where a process lock ensures that the corresponding DLM (1500) lock is held, and also excludes local processes attempting to obtain the lock in conflicting modes, whereas a server lock only ensures that the DLM (1500) lock is held, without excluding other local processes).
4. It allows clients to define callouts for different types of locks when certain events related to locks occur, particularly the acquisition and surrender of DLM 1500-level locks. This ability is a requirement for cache-coherency, which depends on callouts to flush modified cached data to permanent storage when corresponding DLM 1500 write locks are downgraded or released, and to purge cached data when DLM 1500 read locks are released (a sketch of such callouts follows this list).
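The callouts described in item 4 can be pictured with the following toy registration interface. The kernel-level interfaces are not described at this level of detail in the text, so the class name and callback signatures here are assumptions; the sketch only demonstrates the flush-on-write-downgrade and purge-on-read-release behavior.

```python
# Toy, user-level picture of LCL-style callouts; not the kernel implementation.

class LockCachingLayer:
    """Caches locks and invokes client callouts when locks are downgraded or released."""

    def __init__(self):
        self._callouts = {}      # lock name -> {"flush": fn, "purge": fn}

    def register_callouts(self, lock, flush, purge):
        self._callouts[lock] = {"flush": flush, "purge": purge}

    def downgrade_write_lock(self, lock):
        # Cache coherency: flush modified cached data before the matrix-wide
        # write lock is surrendered.
        self._callouts[lock]["flush"](lock)

    def release_read_lock(self, lock):
        # Purge cached data once the read lock is released.
        self._callouts[lock]["purge"](lock)

if __name__ == "__main__":
    lcl = LockCachingLayer()
    lcl.register_callouts(
        "inode-42",
        flush=lambda lock: print(f"flush dirty blocks protected by {lock} to shared storage"),
        purge=lambda lock: print(f"purge cached blocks protected by {lock}"),
    )
    lcl.downgrade_write_lock("inode-42")
    lcl.release_read_lock("inode-42")
```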
The LCL 1502 is the only kernel component that makes lock requests from the user-level DLM 1500. It partitions DLM 1500 locks among kernel clients, so that a single DLM 1500 lock has at most one kernel client on each node, namely, the LCL 1502 itself. Each DLM 1500 lock is the product of an LCL 1502 request, which was induced by a client's request of an LCL 1502 lock, and each LCL 1502 lock is backed by a DLM 1500 lock.
The Matrix Filesystem 1504 is the shared filesystem component of the Matrix Server. The Matrix Filesystem 1504 allows multiple servers to simultaneously mount, in read/write mode, filesystems living on physically shared storage devices 306a-306d. The Matrix Filesystem 1504 is a distributed symmetric matrixed filesystem; there is no single server that filesystem activity must pass through to perform filesystem activities. The Matrix Filesystem 1504 provides normal local filesystem semantics and interfaces for clients of the filesystem.
SAN (Storage Area Network) Membership Service 1506 provides the group membership services infrastructure for the Matrix Filesystem 1504, including managing filesystem membership, health monitoring, coordinating mounts and unmounts of shared filesystems 306a-306d, and coordinating crash recovery.
Matrix Membership Service 1508 provides the Local, matrix-style matrix membership support, including virtual host management, service monitoring, notification services, data replication, etc. The Matrix Filesystem 1504 does not interface directly with the MMS 1508, but the Matrix Filesystem 1504 does interface with the SAN Membership Service 1506, which interfaces with the MMS 1508 in order to provide the filesystem 1504 with the matrix group services infrastructure.
The Shared Disk Monitor Probe 1510 maintains and monitors the membership of the various shared storage devices in the matrix. It acquires and maintains leases on the various shared storage devices in the matrix as a protection against rogue server "split-brain" conditions. It communicates with the SMS 1506 to coordinate recovery activities on occurrence of a device membership transition.
Filesystem monitors 1512 are used by the SAN Membership Service 1506 to initiate Matrix Filesystem 1504 mounts and unmounts, according to the matrix configuration put in place by the Matrix Server user interface.
The Service Monitor 1514 tracks the state (health & availability) of various services on each server in the matrix so that the matrix server may take automatic remedial action when the state of any monitored service transitions. Services monitored include HTTP, FTP, Telnet, SMTP, etc. The remedial actions include service restart on the same server or service fail-over and restart on another server. The Device Monitor 1516 tracks the state (health & availability) of various storage-related devices in the matrix so that the matrix server may take automatic remedial action when the state of any monitored device transitions. Devices monitored may include data storage devices 306a-306d (such as storage device drives, solid state storage devices, RAM storage devices, JBODs, RAID arrays, etc.) and storage network devices 304' (such as Fibre Channel switches, InfiniBand switches, iSCSI switches, etc.). The remedial actions include initiation of Matrix Filesystem 1504 recovery, storage network path failover, and device reset.
The Application Monitor 1518 tracks the state (health & availability) of various applications on each server in the matrix so that the matrix server may take automatic remedial action when the state of any monitored application transitions. Applications monitored may include databases, mail routers, CRM apps, etc. The remedial actions include application restart on the same server or application fail-over and restart on another server.
The Notifier Agent 1520 tracks events associated with specified objects in the matrix and executes supplied scripts of commands on occurrence of any tracked event.
The Replicator Agent 1522 monitors the content of any filesystem subtree and periodically replicates any data which has not yet been replicated from a source tree to a destination tree.
The Matrix Communication Service 1524 provides the network communication infrastructure for the DLM 1500, Matrix Membership Service 1508, and SAN Membership Service 1506. The Matrix Filesystem 1504 does not use the MCS 1524 directly, but it does use it indirectly through these other components.
The Storage Control Layer (SCL) 1526 provides matrix-wide device identification, used to identify the Matrix Filesystems 1504 at mount time. The SCL 1526 also manages storage fabric configuration and low-level I/O device fencing of rogue servers from the shared storage devices 306a-306d containing the Matrix Filesystems 1504. It also provides the ability for a server in the matrix to voluntarily intercede during normal device operations to fence itself when communication with the rest of the matrix has been lost.
The Storage Control Layer 1526 is the Matrix Server module responsible for managing shared storage devices 306a-306d. Management in this context consists of two primary functions. The first is to enforce I/O fencing at the hardware SAN level by enabling/disabling host access to the set of shared storage devices 306a-306d. The second is to generate global (matrix-wide) unique device names (or "labels") for all matrix storage devices 306a-306d and ensure that all hosts in the matrix have access to those global device names. The SCL module also includes utilities and library routines needed to provide device information to the UI.
The Pseudo Storage Driver 1528 is a layered driver that "hides" a target storage device 306a-306d so that all references to the underlying target device must pass through the PSD layered driver. Thus, the PSD provides the ability to "fence" a device, blocking all I/O from the host server to the underlying target device until it is unfenced again. The PSD also provides an application-level interface to lock a storage partition across the matrix. It also has the ability to provide common matrix-wide 'handles', or paths, to devices such that all servers accessing shared storage in the Matrix Server can use the same path to access a given shared device.
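As an illustration of the fencing behavior described for the PSD, the following sketch wraps a toy in-memory device and rejects I/O while the device is fenced. A real PSD is a kernel layered driver; the FencedDevice class, its method names, and the in-memory device below are invented for this sketch and are not the patent's interfaces.

```python
# Hypothetical user-level sketch of PSD-style fencing: while a device is
# fenced, every read or write through the wrapper is rejected.

class MemoryDevice:
    """Toy in-memory 'device'; stands in for a real shared storage target."""
    def __init__(self, size=1024):
        self._data = bytearray(size)
    def read(self, offset, length):
        return bytes(self._data[offset:offset + length])
    def write(self, offset, data):
        self._data[offset:offset + len(data)] = data

class FencedDevice:
    """Blocks all I/O to the underlying device while the device is fenced."""
    def __init__(self, underlying):
        self._dev = underlying
        self._fenced = False

    def fence(self):
        # e.g. this host has lost contact with the rest of the matrix
        self._fenced = True

    def unfence(self):
        self._fenced = False

    def read(self, offset, length):
        if self._fenced:
            raise PermissionError("device is fenced: I/O rejected")
        return self._dev.read(offset, length)

    def write(self, offset, data):
        if self._fenced:
            raise PermissionError("device is fenced: I/O rejected")
        self._dev.write(offset, data)

if __name__ == "__main__":
    dev = FencedDevice(MemoryDevice())
    dev.write(0, b"hello")
    print(dev.read(0, 5))        # b'hello'
    dev.fence()
    try:
        dev.read(0, 5)
    except PermissionError as exc:
        print(exc)               # device is fenced: I/O rejected
    dev.unfence()
    print(dev.read(0, 5))        # b'hello' again once unfenced
```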
Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practiced within the scope of the appended claims. It should be noted that there are many alternative ways of implementing both the process and apparatus of the present invention. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims .
WHAT IS CLAIMED IS:

Claims

1. A method of accessing data in a multi-node system comprising: providing a first node associated with a first operating system; providing a second node associated with a second operating system, wherein the second operating system is independent of the first operating system; providing a storage, wherein the first node directly accesses the storage and the second node directly accesses the storage; requesting a lock for a block by the first node to the second node; obtaining the lock from the second node; and obtaining the block from the second node.
2. The method of claim 1, further comprising caching the block.
3. The method of claim 1, wherein the second node is a home node.
4. The method of claim 1, further comprising writing the block to the storage.
5. The method of claim 1, wherein the first node includes a first lock manager and the second node includes a second lock manager.
6. The method of claim 1, wherein the second node is a home node.
7. A method of accessing data in a node configured for a multi-node environment comprising: providing a first operating system wherein the first operating system is independent of a second operating system, wherein the second operating system is associated with a second node; providing a lock manager; requesting a lock for a block from the second node; obtaining the lock from the second node; and obtaining the block from the second node.
10. A method of accessing data by a first node configured for a multi-node environment comprising: obtaining a lock for a block from a second node, wherein the first node includes a first operating system and the second node includes a second operating system independent of the first operating system; altering the block; writing the block to shared storage; relinquishing the lock; caching the block in a local storage.
11. A system of accessing data comprising: a first node configured to request a lock for a block, wherein the first node includes a first operating system; a second node configured to receive the request, send the lock and the block to the first node, wherein the second node includes a second operating system independent of the first operating system; and a storage configured to be accessible by the first and second nodes.
12. A computer program product for accessing data, the computer program product being embodied in a computer readable medium and comprising computer instructions for: providing a lock manager, wherein the lock manager is configured to work in an environment associated with a first operating system, wherein the first operating system is independent of a second operating system, and wherein the second operating system is associated with a second node; requesting a lock for a block from the second node; obtaining the lock from the second node; and obtaining the block from the second node.
PCT/US2002/030084 2001-09-21 2002-09-20 A system and method for collaborative caching in a multinode system WO2003025802A1 (en)

Applications Claiming Priority (30)

Application Number Priority Date Filing Date Title
US32424301P 2001-09-21 2001-09-21
US32422601P 2001-09-21 2001-09-21
US32419501P 2001-09-21 2001-09-21
US32422401P 2001-09-21 2001-09-21
US32424201P 2001-09-21 2001-09-21
US32419601P 2001-09-21 2001-09-21
US60/324,196 2001-09-21
US60/324,243 2001-09-21
US60/324,195 2001-09-21
US60/324,242 2001-09-21
US60/324,224 2001-09-21
US60/324,226 2001-09-21
US32478701P 2001-09-24 2001-09-24
US60/324,787 2001-09-24
US32719101P 2001-10-01 2001-10-01
US60/327,191 2001-10-01
US10/251,626 US7111197B2 (en) 2001-09-21 2002-09-20 System and method for journal recovery for multinode environments
US10/251,689 2002-09-20
US10/251,895 US7437386B2 (en) 2001-09-21 2002-09-20 System and method for a multi-node environment with shared storage
US10/251,893 US7266722B2 (en) 2001-09-21 2002-09-20 System and method for efficient lock recovery
US10/251,645 2002-09-20
US10/251,894 2002-09-20
US10/251,894 US7240057B2 (en) 2001-09-21 2002-09-20 System and method for implementing journaling in a multi-node environment
US10/251,895 2002-09-20
US10/251,893 2002-09-20
US10/251,690 2002-09-20
US10/251,689 US7149853B2 (en) 2001-09-21 2002-09-20 System and method for synchronization for enforcing mutual exclusion among multiple negotiators
US10/251,626 2002-09-20
US10/251,690 US7496646B2 (en) 2001-09-21 2002-09-20 System and method for management of a storage area network
US10/251,645 US20040202013A1 (en) 2001-09-21 2002-09-20 System and method for collaborative caching in a multinode system

Publications (1)

Publication Number Publication Date
WO2003025802A1 (en) 2003-03-27

Family

ID=27585545

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2002/029721 WO2003054711A1 (en) 2001-09-21 2002-09-20 A system and method for management of a storage area network
PCT/US2002/030084 WO2003025802A1 (en) 2001-09-21 2002-09-20 A system and method for collaborative caching in a multinode system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2002/029721 WO2003054711A1 (en) 2001-09-21 2002-09-20 A system and method for management of a storage area network

Country Status (2)

Country Link
AU (1) AU2002336620A1 (en)
WO (2) WO2003054711A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131939B2 (en) 2004-11-15 2012-03-06 International Business Machines Corporation Distributed shared I/O cache subsystem
CN104035522A (en) * 2014-06-16 2014-09-10 南京云创存储科技有限公司 Large database appliance

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7546333B2 (en) 2002-10-23 2009-06-09 Netapp, Inc. Methods and systems for predictive change management for access paths in networks
DE10393571T5 (en) 2002-10-23 2005-12-22 Onaro, Boston Method and system for validating logical end-to-end access paths in storage area networks
US7961594B2 (en) 2002-10-23 2011-06-14 Onaro, Inc. Methods and systems for history analysis for access paths in networks
GB2409306A (en) * 2003-12-20 2005-06-22 Autodesk Canada Inc Data processing network with switchable storage
JP5060485B2 (en) 2005-09-27 2012-10-31 オナロ インコーポレイテッド A method and system for verifying the availability and freshness of replicated data.
US8826032B1 (en) 2006-12-27 2014-09-02 Netapp, Inc. Systems and methods for network change discovery and host name resolution in storage network environments
US8332860B1 (en) 2006-12-30 2012-12-11 Netapp, Inc. Systems and methods for path-based tier-aware dynamic capacity management in storage network environments
US9042263B1 (en) 2007-04-06 2015-05-26 Netapp, Inc. Systems and methods for comparative load analysis in storage networks
US9246752B2 (en) 2013-06-18 2016-01-26 International Business Machines Corporation Ensuring health and compliance of devices

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5813016A (en) * 1995-03-01 1998-09-22 Fujitsu Limited Device/system for processing shared data accessed by a plurality of data processing devices/systems
US6044367A (en) * 1996-08-02 2000-03-28 Hewlett-Packard Company Distributed I/O store

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6009466A (en) * 1997-10-31 1999-12-28 International Business Machines Corporation Network management system for enabling a user to configure a network of storage devices via a graphical user interface
US6269410B1 (en) * 1999-02-12 2001-07-31 Hewlett-Packard Co Method and apparatus for using system traces to characterize workloads in a data storage system
US6421723B1 (en) * 1999-06-11 2002-07-16 Dell Products L.P. Method and system for establishing a storage area network configuration
US7062648B2 (en) * 2000-02-18 2006-06-13 Avamar Technologies, Inc. System and method for redundant array network storage
US7844513B2 (en) * 2000-07-17 2010-11-30 Galactic Computing Corporation Bvi/Bc Method and system for operating a commissioned e-commerce service prover
US8219662B2 (en) * 2000-12-06 2012-07-10 International Business Machines Corporation Redirecting data generated by network devices

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5813016A (en) * 1995-03-01 1998-09-22 Fujitsu Limited Device/system for processing shared data accessed by a plurality of data processing devices/systems
US6044367A (en) * 1996-08-02 2000-03-28 Hewlett-Packard Company Distributed I/O store

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8131939B2 (en) 2004-11-15 2012-03-06 International Business Machines Corporation Distributed shared I/O cache subsystem
CN104035522A (en) * 2014-06-16 2014-09-10 南京云创存储科技有限公司 Large database appliance

Also Published As

Publication number Publication date
AU2002336620A1 (en) 2003-07-09
WO2003054711A9 (en) 2004-05-13
WO2003054711A1 (en) 2003-07-03
AU2002336620A8 (en) 2003-07-09

Similar Documents

Publication Publication Date Title
US20040202013A1 (en) System and method for collaborative caching in a multinode system
US7076510B2 (en) Software raid methods and apparatuses including server usage based write delegation
US10534681B2 (en) Clustered filesystems for mix of trusted and untrusted nodes
US8495131B2 (en) Method, system, and program for managing locks enabling access to a shared resource
US9442952B2 (en) Metadata structures and related locking techniques to improve performance and scalability in a cluster file system
US6986015B2 (en) Fast path caching
US7013379B1 (en) I/O primitives
US8028191B2 (en) Methods and systems for implementing shared disk array management functions
US7280536B2 (en) Fast path for performing data operations
US6959373B2 (en) Dynamic and variable length extents
US6973549B1 (en) Locking technique for control and synchronization
EP1315074A2 (en) Storage system and control method
JP2002229837A (en) Method for controlling access to data in shared disc parallel data file
WO2003025802A1 (en) A system and method for collaborative caching in a multinode system
Dyke et al. RAC Concepts

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG UZ VN YU ZA ZM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP