US20050262090A1 - Method, system, and article of manufacture for storing device information - Google Patents


Info

Publication number
US20050262090A1
US20050262090A1 (application US10/851,036)
Authority
US
United States
Prior art keywords
data structure
distributed application
devices
network
computational
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/851,036
Other languages
English (en)
Inventor
Stephen Correl
James Seeger
Martine Wedlake
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US10/851,036 (US20050262090A1)
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. Assignment of assignors interest (see document for details). Assignors: SEEGER, JAMES JOHN; WEDLAKE, MARTINE BRUCE; CORREL, STEPHEN F.
Priority to JP2007517264A (JP2007538327A)
Priority to KR1020067022721A (KR101027248B1)
Priority to EP05747896A (EP1769330A2)
Priority to PCT/EP2005/052331 (WO2005114372A2)
Priority to CNB2005800122949A (CN100468405C)
Publication of US20050262090A1
Priority to US11/929,044 (US7831623B2)
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 Digital computers in general; Data processing equipment in general
    • G06F 15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163 Interprocessor communication
    • G06F 15/173 Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0643 Management of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0674 Disk device
    • G06F 3/0676 Magnetic disk device

Definitions

  • the disclosure relates to a method, system, and article of manufacture for storing device information.
  • a storage area network is a special purpose network that interconnects a plurality of storage devices with associated data servers.
  • a SAN may be a high-speed subnetwork of shared storage devices.
  • a storage device is a machine that may comprise a plurality of disks, tapes or other storage media for storing data.
  • a SAN may couple a plurality of hosts, where the hosts may be file servers, to a plurality of storage devices.
  • the SAN may be a storage network that is different from an Internet Protocol (IP) based network.
  • While a SAN may be clustered in proximity to other computing resources, such as an IBM® z990 mainframe, certain SANs may also extend to remote locations for backup and archival storage by using WAN carrier technologies. SANs can use communication technologies, such as IBM's optical fiber based Enterprise System Connection (ESCON®), the Fibre Channel technology, etc. SANs may support disk mirroring, backup and restore, archival and retrieval of data, data migration from one storage device to another, and the sharing of data among different servers in a network. Certain SANs may also incorporate subnetworks with network-attached storage (NAS) systems.
  • a plurality of references to a plurality of files corresponding to a plurality of devices are stored in a data structure implemented in a computational device, wherein the computational device is coupled to the plurality of devices via a network.
  • Access is enabled to the data structure to a distributed application, wherein the distributed application uses a stored reference in the data structure to determine a file corresponding to a device, and wherein the distributed application performs data transfer operations with the device via the determined file.
  • the data structure is a directory, wherein the files are device files, and wherein the references are soft links to the device files.
  • the data structure is a registry, wherein entries in the registry include the references.
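The registry variant described above amounts to a mapping from device identifiers to references to device files. A minimal sketch in Python; the class and method names (`DeviceRegistry`, `register`, `lookup`) are illustrative, not from the patent:

```python
class DeviceRegistry:
    """Hypothetical registry whose entries map a device identifier to a
    reference (here, a device-file path) used to reach the device."""

    def __init__(self):
        self._entries = {}

    def register(self, device_id, device_file):
        # Store a reference to the file corresponding to the device.
        self._entries[device_id] = device_file

    def lookup(self, device_id):
        # A distributed application uses the stored reference to determine
        # the file corresponding to a device; None if no entry exists.
        return self._entries.get(device_id)


registry = DeviceRegistry()
registry.register("disk0", "/dev/sda")
```

A directory of soft links, described in the preceding paragraph, plays the same role: the link name is the key and the link target is the stored reference.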
  • information is received, from another computational device, wherein the information is capable of being used to determine an additional reference that corresponds to an additional file corresponding to an additional device added to the network.
  • the data structure is updated to include the additional reference.
  • an additional device that has been added to the network is discovered.
  • the data structure is updated to include the additional reference.
  • the network is a storage area network, wherein the distributed application is capable of accessing the plurality of devices via a plurality of computational devices.
  • the computational device is a first computational device, wherein the data structure is a first data structure.
  • a second data structure implemented in a second computational device, stores at least one of the plurality of references to the plurality of files corresponding to the plurality of devices, wherein the second computational device is coupled to the plurality of devices via the network, and wherein the distributed application is capable of accessing the plurality of devices via the first and the second data structures.
  • the data structure is capable of being implemented in a plurality of heterogeneous operating systems, and wherein the plurality of devices are heterogeneous.
  • the data structure is implemented locally in the computational device, and wherein the distributed application is capable of initiating the data transfer operations with the device faster by accessing the data structure implemented locally in the computational device in comparison to accessing a data structure implemented remotely from the computational device.
  • an operating system and drivers in the computational device are incapable of directly providing the distributed application with access to information to perform the data transfer operations with the device.
  • the file is a device file, wherein the device is a virtual storage device, wherein the network is a storage area network, and wherein the device file represents a path to the virtual storage device through the storage area network.
  • FIG. 1 illustrates a block diagram of a computing environment, in accordance with certain embodiments
  • FIG. 2 illustrates a block diagram of a host that includes a device directory, in accordance with certain embodiments
  • FIG. 3 illustrates a block diagram that illustrates how a distributed application uses the device directory to access a plurality of devices in a SAN, in accordance with certain embodiments
  • FIG. 4 illustrates operations for generating the device directory and performing Input/Output (I/O) operations with respect to devices in a SAN by using the device directory, in accordance with certain embodiments;
  • FIG. 5 illustrates operations implemented in a host for allowing a distributed application to use the device directory for performing I/O operations with respect to devices in a SAN, in accordance with certain embodiments.
  • FIG. 6 illustrates a computing architecture in which certain embodiments are implemented.
  • FIG. 1 illustrates a computing environment in which certain embodiments are implemented.
  • a plurality of hosts 100 a . . . 100 n are coupled to a plurality of devices 102 a . . . 102 m over a network, such as, a SAN 104 .
  • an administrative server 106 that is capable of performing operations with respect to the hosts 100 a . . . 100 n and the devices 102 a . . . 102 m is also coupled to the SAN 104 .
  • the plurality of hosts 100 a . . . 100 n and the administrative server 106 may comprise any type of computational device, such as, a workstation, a desktop computer, a laptop, a mainframe, a telephony device, a hand held computer, a server, a blade computer, etc.
  • the plurality of hosts 100 a . . . 100 n may include a plurality of device directories 108 a . . . 108 n , where in certain embodiments at least one host includes a device directory.
  • the host 100 a may include the device directory 108 a
  • the host 100 b may include the device directory 108 b
  • the host 100 n may include the device directory 108 n .
  • the device directories 108 a . . . 108 n are file directories and include references to device files corresponding to one or more of the plurality of devices 102 a . . . 102 m .
  • the hosts 100 a . . . 100 n may be heterogeneous and run a plurality of operating systems.
  • the devices 102 a . . . 102 m may comprise any type of storage device known in the art, such as, a disk drive, a tape drive, a CDROM drive, etc.
  • the devices 102 a . . . 102 m may comprise a heterogeneous group of storage devices that are capable of being accessed from the hosts 100 a . . . 100 n and the administrative server 106 via the SAN 104 .
  • the plurality of devices 102 a . . . 102 m are shared among the plurality of hosts 100 a . . . 100 n.
  • the SAN 104 may comprise any storage area network known in the art.
  • the SAN 104 may be coupled to any other network (not shown) known in the art, such as, the Internet, an intranet, a LAN, a WAN, etc.
  • a distributed application 110 is capable of running and interacting with software elements in one or more of the plurality of hosts 100 a . . . 100 n .
  • the distributed application 110 may interact with or execute in one or more of the plurality of hosts 100 a . . . 100 n .
  • the distributed application 110 may include any SAN application that uses a plurality of hosts and devices in the SAN 104 .
  • the distributed application 110 may include disaster recovery applications, data interchange applications, data vaulting applications, data protection applications, etc.
  • because the distributed application 110 may have to interact with a plurality of heterogeneous devices 102 a . . . 102 m and heterogeneous host operating systems in the hosts 100 a . . . 100 n , the distributed application 110 may not be able to rely directly on a host operating system, a cluster manager, a logical volume manager, etc., to manage or allow the use of the devices 102 a . . . 102 m in the SAN 104 . Additionally, when the devices 102 a . . . 102 m are shared among the hosts 100 a . . . 100 n , the host operating systems, cluster managers, etc., may not have the information needed to manage the devices 102 a . . . 102 m .
  • FIG. 1 illustrates an embodiment, in which information related to the devices 102 a . . . 102 m is stored in the device directories 108 a . . . 108 n , where the device directories 108 a . . . 108 n are accessible to the distributed application 110 . Additionally, the device directories 108 a . . . 108 n are implemented in a manner, such that, the device directories are operating system neutral and store information related to devices in a form that is suitable for interfacing with the distributed application 110 . Certain embodiments may be implemented in computing environments in which the hosts 100 a . . . 100 n and the devices 102 a . . . 102 m are divided into clusters. The distributed application 110 may run over a cluster based operating system and use the device directories 108 a . . . 108 n for accessing the devices 102 a . . . 102 m.
  • FIG. 2 illustrates a block diagram of a host 200 , where the host 200 represents any of the hosts 100 a . . . 100 n .
  • the host 200 includes system software 202 , a device directory 204 , and is capable of interacting with the distributed application 110 , where in certain embodiments the distributed application 110 may be implemented in one or more hosts 100 a . . . 100 n .
  • the system software 202 included in the host 200 may include the operating system of the host 200 , various drivers that run in the host 200 , cluster managers that run in the host 200 , logical volume managers that run in the host 200 , etc.
  • the device directory 204 may represent any of the device directories 108 a . . . 108 n . For example, in certain embodiments if the host 200 represents the host 100 a then the device directory 204 represents the device directory 108 a.
  • the device directory 204 includes a plurality of device file links 206 a . . . 206 p , where the device file links 206 a . . . 206 p are references to device files corresponding to the devices 102 a . . . 102 m , where a device file may be used by the distributed application 110 to perform data transfer operations with respect to the device that corresponds to the device file.
  • the device file link 206 a may be a softlink to the device file “x”.
  • a softlink may indicate the location of the device file “x” in the SAN 104 .
  • a softlink may be represented as “/dev/home/x”, where the file named “x” is stored in the “home” directory of “dev”, where “dev” may include any of the computational devices 100 a . . . 100 n , the administrative server 106 , the devices 102 a . . . 102 m , or any other element capable of storing the file “x” that is coupled to the SAN 104 .
  • the device file resides on a host, such as, hosts 100 a . . . 100 n , and identifies a path or a set of possible paths through the SAN 104 to a storage device, such as, storage devices 102 a . . . 102 m .
  • the device file allows an application, such as the distributed application 110 , to use the corresponding device by opening, reading, or writing to the device file.
  • the application can also get certain information about the device, such as, the SAN address of the device, by executing operations against the corresponding device file.
  • the link to a device file is an operating system facility in which a file, instead of being the actual device file, acts as the proxy of the device file.
  • the application can open the link, and may perform operations on the link, similar to the operations on the device file the link points to.
  • the application can also request the operating system to determine which device file the link points to.
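The link mechanics described above (a link acting as a proxy for a device file, operable through and resolvable back to its target) can be illustrated with ordinary symbolic links. A sketch in Python using a plain file in a temporary directory as a stand-in for a device file; the file and directory names are illustrative:

```python
import os
import tempfile

base = tempfile.mkdtemp()

# Stand-in for a device file named "x".
device_file = os.path.join(base, "real_device_x")
open(device_file, "w").close()

# The device directory holds a soft link that acts as a proxy
# for the device file.
device_dir = os.path.join(base, "device_directory")
os.mkdir(device_dir)
link = os.path.join(device_dir, "x")
os.symlink(device_file, link)

# The application can ask the operating system which device file
# the link points to.
target = os.readlink(link)
```

Opening `link` behaves like opening `device_file` itself, which is what lets an application perform operations on the link similar to the operations on the device file it points to.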
  • the device directory 204 is a file directory that includes the device file links 206 a . . . 206 p .
  • the device directory 204 may be any data structure that is capable of storing references to the information related to the devices 102 a . . . 102 m .
  • additional fields, such as an identifier that associates a device file link with a particular device, are included in the device directory 204 .
  • the distributed application 110 performs data transfer operations, such as, I/O operations, with respect to the devices 102 a . . . 102 m , by accessing the device files corresponding to the devices 102 a . . . 102 m via the device file links 206 a . . . 206 p that are stored in the device directory 204 .
  • the device directory 204 is created and populated with the device file links 206 a . . . 206 p , prior to an attempted usage of a device file link by the distributed application 110 . Since the device directory 204 is stored locally in the host 200 , the distributed application 110 may initiate data transfer operations with the devices 102 a . . . 102 m faster than if the device directory were stored remotely.
  • the time taken to search for a device 102 a . . . 102 m may increase significantly, if references to the devices 102 a . . . 102 m are not stored locally in the device directory 204 .
  • the number of device files to search through may also increase, causing an increase in the time taken to search for a device 102 a . . . 102 m.
  • the device directory 204 is operating system neutral, i.e., the device directory can be stored in the file system of a plurality of operating systems.
  • the distributed application 110 can access the device directory 204 in embodiments in which the hosts 100 a . . . 100 n have heterogeneous operating systems.
  • FIG. 3 illustrates a block diagram that shows how the distributed application 110 uses the device directory 204 to access a plurality of devices in the SAN 104 , in accordance with certain embodiments.
  • the distributed application 110 may need to perform data transfer operations with respect to a device.
  • the distributed application 110 accesses the device file links 206 a . . . 206 p via the device directory 204 in the host 200 .
  • the device file links 206 a . . . 206 p may reference device files 300 a . . . 300 p that correspond to the devices 102 a . . . 102 p .
  • the devices 102 a . . . 102 p may be a subset of the devices 102 a . . . 102 m shown in FIG. 1 .
  • the device file link 206 a may reference the device file 300 a
  • the device file link 206 p may reference the device file 300 p
  • the device files 300 a . . . 300 p may represent either specific individual paths to a storage device 102 a . . . 102 p through the SAN 104 , or a choice of paths to a storage device 102 a . . . 102 p
  • the storage devices 102 a . . . 102 p may include a virtual storage device served by a storage server, such as, the IBM Enterprise Storage Server®.
  • the distributed application 110 determines a device file link, such as device file link 206 a or 206 p .
  • the distributed application 110 may perform various operations, such as, open 302 a , 304 a , close 302 b , 304 b , update 302 c , 304 c , read (not shown), write (not shown), append (not shown), etc., with respect to the device files 300 a , 300 p .
  • the distributed application 110 may use the device file link 206 a to open 302 a the device file 300 a for initiating data transfer operations with the device 102 a.
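The open/transfer/close sequence above (open 302 a through link 206 a, move data, close 302 b) reduces to opening the link and doing ordinary file I/O. A sketch, again with a plain file standing in for device file 300 a and illustrative names throughout:

```python
import os
import tempfile

base = tempfile.mkdtemp()

# Stand-in for device file 300a corresponding to device 102a.
device_file = os.path.join(base, "device_a")
open(device_file, "w").close()

# Stand-in for device file link 206a in the device directory.
link = os.path.join(base, "link_206a")
os.symlink(device_file, link)

# Open (302a) the device file via its link, transfer data,
# and close (302b) when the "with" block exits.
with open(link, "w") as f:
    f.write("payload")

# The data went through the link to the underlying device file.
with open(device_file) as f:
    data = f.read()
```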
  • FIG. 3 illustrates an embodiment in which the distributed application 110 accesses the devices 102 a . . . 102 p in the SAN 104 by using the device directory 204 .
  • FIG. 4 illustrates operations for generating the device directory 204 and performing I/O operations with devices in a SAN 104 by using the device directory 204 , in accordance with certain embodiments of the present invention.
  • the operations described in FIG. 4 may be implemented in the computing environment illustrated in FIG. 1 .
  • Control starts at block 400 where the device directory 204 is created in a host, such as, the host 200 .
  • the device directory 204 may represent any of the device directories 108 a . . . 108 n , and the host 200 may represent the corresponding host 100 a . . . 100 n .
  • the creation of the device directory 204 in a host may be performed by the host or by the administrative server 106 .
  • the distributed application 110 may create the device directory 204 .
  • the device directory 204 may need to be populated or updated if the device directory 204 is empty or a process in the host 200 requests access to a device that is absent in the device directory 204 .
  • the device directory 204 may need to be populated or updated at periodic intervals when device discovery needs to be performed or when other hosts or the administrative server 106 start sending messages that may include updates to the device directory 204 .
  • the device directory 204 may be populated or updated by the execution of the operations described in one or more of the blocks 404 a , 404 b , 404 c and a subsequent execution of the operation described in block 406 .
  • the process may wait in block 402 until a determination is made that the device directory 204 may need to be updated or populated.
  • the distributed application 110 that executes in a host 200 may receive (at block 404 a ) a message from the administrative server 106 to populate or update the device directory 204 .
  • the distributed application 110 that executes in a host 200 may also discover (at block 404 b ) one or more devices 102 a . . . 102 m of interest in the SAN 104 .
  • the distributed application 110 that executes in a host 200 may also receive (at block 404 c ) one or more messages to populate or update the device directory 204 from the other hosts in the SAN 104 .
  • the host 100 a that executes the distributed application 110 may receive a message from the host 100 b to populate or update the device directory 108 a .
  • the message received by the host 100 a may include information that enables the receiving host 100 a to find the corresponding device of interest.
  • the information may include the World Wide Port Name of a device, where the receiving host 100 a can use the World Wide Port Name of the device in association with a storage driver to find the device file of interest. Subsequently, a link can be created in the device directory 108 a of the receiving host 100 a to the corresponding device file.
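The update-message handling just described (use the World Wide Port Name in the message to find the device file, then create a link to it in the receiving host's device directory) might look like the following sketch. The function name, message fields, and lookup callback are all hypothetical stand-ins; in particular, the callback stands in for the storage-driver query the patent mentions:

```python
import os
import tempfile


def apply_update(message, device_dir, find_device_file):
    """Hypothetical handler for a message received at block 404c.

    message          -- dict with the device's WWPN and a link name
    device_dir       -- path of the receiving host's device directory
    find_device_file -- callback mapping a WWPN to a device-file path
                        (stands in for querying the storage driver)
    """
    device_file = find_device_file(message["wwpn"])
    link = os.path.join(device_dir, message["link_name"])
    if not os.path.islink(link):
        # Create the link in the device directory to the device file.
        os.symlink(device_file, link)
    return link


# Demo with a plain file standing in for the device file.
base = tempfile.mkdtemp()
dev = os.path.join(base, "lun0")
open(dev, "w").close()
ddir = os.path.join(base, "device_directory")
os.mkdir(ddir)
wwpn_table = {"50:05:07:63:00:c0:12:34": dev}
created = apply_update(
    {"wwpn": "50:05:07:63:00:c0:12:34", "link_name": "lun0"},
    ddir,
    wwpn_table.get,
)
```

Making the handler skip links that already exist keeps repeated messages for the same device harmless.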
  • the distributed application 110 may populate or update (at block 406 ) the device directory 204 with the device file links to the corresponding devices based on the messages received or device discovery performed in blocks 404 a , 404 b , 404 c .
  • the distributed application 110 may populate or update the device directory 204 with the device file links 206 a . . . 206 p that reference the device files 300 a . . . 300 p corresponding to the devices 102 a . . . 102 p . Therefore, in certain embodiments the populating and updating of the device directory 204 may be performed by the distributed application 110 .
  • applications that are different from the distributed application 110 may populate or update the device directory.
  • the distributed application 110 determines (at block 408 ) whether an operation is to be performed with respect to a selected device. If so, the distributed application 110 performs (at block 410 ) the operation with respect to the selected device by accessing the device file corresponding to the selected device from the device directory 204 , and control returns to block 402 for populating or updating the device directory 204 . For example, in certain embodiments, the distributed application 110 may perform an open 304 a on the device file 300 p corresponding to the device 102 p by using the device file link 206 p in the device directory 204 .
  • control returns to block 402 for populating or updating the device directory 204 .
  • the process described in blocks 402 , 404 a , 404 b , 404 c , 406 , 408 , and 410 may be executed repeatedly in the host 200 .
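The repeated cycle of blocks 402 through 410 can be condensed into a sketch that consumes update events (messages from the administrative server, messages from other hosts, and local discoveries) and keeps a mapping current; a plain dictionary stands in for the device directory and all names are illustrative:

```python
def update_device_directory(events, directory):
    """Sketch of the FIG. 4 loop: fold populate/update events
    (blocks 404a, 404b, 404c) into the device-directory mapping
    (block 406). `directory` maps a device name to the reference
    (device-file path) stored for it."""
    for event in events:
        if event["type"] in ("admin_message", "host_message", "discovery"):
            # Block 406: add or refresh the reference for this device.
            directory[event["device"]] = event["device_file"]
    return directory
```

In the patent's flow this runs repeatedly in the host until an exception, error condition, shutdown, or reboot ends it; the sketch processes a single batch of events instead.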
  • an exception, an error condition, a shutdown, or a rebooting of the host 200 may terminate the process described in FIG. 4 .
  • FIG. 4 describes an embodiment in which a device directory 204 that includes references to device files 300 a . . . 300 p corresponding to devices 102 a . . . 102 p is created, populated, and updated.
  • the host in which the device directory 204 is located allows the distributed application 110 to perform operations with respect to the devices 102 a . . . 102 p by using the device directory 204 . Since the device directory 204 is stored locally in the host 200 , the distributed application 110 can access a device faster in comparison to implementations in which the reference to the device is located remotely from the host 200 . Therefore, in certain embodiments, while the system software 202 , such as a host operating system, may manage the device files, the distributed application manages the device directory 204 .
  • FIG. 5 illustrates operations implemented in a host, such as host 200 , for allowing the distributed application 110 to use the device directory 204 for performing I/O operations with respect to the devices 102 a . . . 102 m in the SAN 104 , in accordance with certain embodiments.
  • Control starts at block 500 , where the computational device 200 stores in a data structure 204 implemented in the computational device 200 a plurality of references 206 a . . . 206 p to a plurality of files 300 a . . . 300 p corresponding to a plurality of devices 102 a . . . 102 p , wherein the computational device 200 is coupled to the plurality of devices 102 a . . . 102 p via a network 104 .
  • the computational device 200 enables (at block 502 ) access to the data structure 204 to a distributed application 110 , wherein the distributed application 110 uses a stored reference in the data structure 204 to determine a file corresponding to a device, and wherein the distributed application 110 performs data transfer operations with the device via the determined file.
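Blocks 500 and 502 together amount to: build a local data structure of references, then let the application resolve a stored reference to its file and transfer data through it. A minimal sketch, once more using symbolic links to plain files as stand-ins for device files, with illustrative names:

```python
import os
import tempfile

base = tempfile.mkdtemp()

# Block 500: store references (soft links) to device files in a
# data structure (a directory) local to the computational device.
data_structure = os.path.join(base, "device_directory")
os.mkdir(data_structure)

device_files = {}
for name in ("dev_a", "dev_b"):
    path = os.path.join(base, name)      # stand-in for a device file
    open(path, "w").close()
    device_files[name] = path
    os.symlink(path, os.path.join(data_structure, name))

# Block 502: the distributed application uses a stored reference to
# determine the file corresponding to a device, then performs a data
# transfer operation via that file.
ref = os.path.join(data_structure, "dev_a")
determined_file = os.path.realpath(ref)
with open(ref, "w") as f:
    f.write("data")
```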
  • FIG. 5 illustrates how a computational device, such as, the host 200 , allows the distributed application 110 to use the device directory 204 for performing data transfer operations.
  • knowledge about devices in a SAN is cached locally in a host, such that candidate or in-use devices are quickly accessible and visible to an administrator or a distributed application 110 .
  • a designated directory is used as a platform-independent, vendor-independent technique for managing devices shared by a distributed application across a plurality of hosts in a SAN environment, where the time and complexity to scan for suitable devices is reduced by storing the designated directory locally in a host.
  • the designated directory may be updated with references to devices when the host is not performing critical operations, where the critical operations are operations that should be completed as soon as possible.
  • the distributed application 110 caches information in the device directories 108 a . . . 108 n .
  • the cached information may be related to devices that may be accessed or devices that are already in use by the distributed application 110 or other applications.
  • the location of the device directories 108 a . . . 108 n may be designated by the distributed application 110 .
  • the device directories are not used by any storage device vendor or by any host system software for storing general purpose device files.
  • an administrator may use the administrative server 106 to manually configure the device directories 108 a . . . 108 n .
  • Automated scripts may also be run on the administrative server 106 to configure the device directories 108 a . . . 108 n .
  • administration can occur on the hosts 100 a . . . 100 n , in addition to the administrative server 106 .
  • an administrator could log on to a selected host and add new links in the device directory 204 used by the distributed application 110 and indicate to the distributed application 110 the devices that are available for use by the distributed application 110 on the selected host.
  • the distributed application 110 is enabled to reduce the time needed to scan the devices 102 a . . . 102 m during critical operations.
  • the hosts 100 a . . . 100 n , the distributed application 110 , or the administrative server 106 may create, populate, or update the device directories 108 a . . . 108 n by scanning the devices in the SAN 104 when critical operations are not being processed.
  • the distributed application 110 interacts with the devices 102 a . . . 102 m in a generic manner. In certain other embodiments, the distributed application 110 is able to use devices 102 a . . . 102 m that were not available at the time the distributed application was designed, tested, or documented. In alternative embodiments, additional operations beyond those described in FIGS. 1-5 may be used by the distributed application 110 to locate devices. For example, the distributed application 110 may search other device locations or may be customized to use or prefer certain vendor devices or drivers, or may interact with an operating system on a host to determine devices for data transfer.
  • in certain embodiments, the devices 102 a . . . 102 m are labeled for use in the device directories without using the system software 202 . Since the system software on each host of the plurality of hosts 100 a . . . 100 n may be different, labeling the devices 102 a . . . 102 m by the system software on each host may lead to conflicts and may interfere with the use of the devices 102 a . . . 102 m by the distributed application 110 .
  • the described techniques may be implemented as a method, apparatus or article of manufacture involving software, firmware, micro-code, hardware and/or any combination thereof.
  • article of manufacture refers to program instructions, code and/or logic implemented in circuitry (e.g., an integrated circuit chip, Programmable Gate Array (PGA), ASIC, etc.) and/or a computer readable medium, such as a magnetic storage medium (e.g., hard disk drive, floppy disk, tape), optical storage (e.g., CD-ROM, DVD-ROM, optical disk, etc.), or a volatile or non-volatile memory device (e.g., Electrically Erasable Programmable Read Only Memory (EEPROM), Read Only Memory (ROM), Programmable Read Only Memory (PROM), Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), flash, firmware, programmable logic, etc.).
  • EEPROM Electrically Erasable Programmable Read Only Memory
  • ROM Read Only Memory
  • PROM Programmable Read Only Memory
  • RAM Random Access Memory
  • Code in the computer readable medium may be accessed and executed by a machine, such as a processor.
  • the code in which embodiments are implemented may further be accessible through a transmission medium or from a file server over a network.
  • the article of manufacture in which the code is implemented may comprise a transmission medium, such as a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc.
  • the article of manufacture may comprise any information bearing medium known in the art.
  • the article of manufacture comprises a storage medium having stored therein instructions that, when executed by a machine, result in operations being performed.
  • FIG. 6 illustrates a block diagram of a computer architecture 600 in which certain embodiments may be implemented.
  • the hosts 100 a . . . 100 n and the administrative server 106 may be implemented according to the computer architecture 600 .
  • the computer architecture 600 may include a processor or circuitry 602, a memory 604 (e.g., a volatile memory device), and storage 606. Certain elements of the computer architecture 600 may or may not be found in the hosts 100 a . . . 100 n and the administrative server 106.
  • the storage 606 may include a non-volatile memory device (e.g., EEPROM, ROM, PROM, RAM, DRAM, SRAM, flash, firmware, programmable logic, etc.), magnetic disk drive, optical disk drive, tape drive, etc.
  • the storage 606 may comprise an internal storage device, an attached storage device and/or a network accessible storage device. Programs in the storage 606 may be loaded into the memory 604 and executed by the processor 602 .
  • the circuitry 602 may be in communication with the memory 604 , and the circuitry 602 may be capable of performing operations.
  • the architecture may include a network card 608 to enable communication with a network, such as the storage area network 104 .
  • the architecture may also include at least one input device 610, such as a keyboard, a touchscreen, a pen, or voice-activated input, and at least one output device 612, such as a display device, a speaker, or a printer.
  • the operations illustrated in FIGS. 4 and 5 may be performed in parallel as well as sequentially. In alternative embodiments, certain of the operations may be performed in a different order, modified, or removed.
  • the data structures and components shown or referred to in FIGS. 1-6 are described as having specific types of information. In alternative embodiments, the data structures and components may be structured differently and have fewer, more, or different fields or different functions than those shown or referred to in the figures. Therefore, the foregoing description of the embodiments has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the embodiments to the precise form disclosed. Many modifications and variations are possible in light of the above teaching.
  • IBM, ESCON, and Enterprise Storage Server are registered trademarks or trademarks of IBM Corporation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Multi Processors (AREA)
US10/851,036 2004-05-21 2004-05-21 Method, system, and article of manufacture for storing device information Abandoned US20050262090A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US10/851,036 US20050262090A1 (en) 2004-05-21 2004-05-21 Method, system, and article of manufacture for storing device information
JP2007517264A JP2007538327A (ja) 2004-05-21 2005-05-20 デバイス情報を格納する方法、システム、およびコンピュータプログラム
KR1020067022721A KR101027248B1 (ko) 2004-05-21 2005-05-20 디바이스 정보 저장 방법, 데이터 프로세싱 시스템 및 컴퓨터 판독가능한 저장 매체
EP05747896A EP1769330A2 (de) 2004-05-21 2005-05-20 Verfahren, systeme und computerprogramme zum speichern von einrichtungsinformationen
PCT/EP2005/052331 WO2005114372A2 (en) 2004-05-21 2005-05-20 Methods, systems, and computer programs for storing device information
CNB2005800122949A CN100468405C (zh) 2004-05-21 2005-05-20 用于存储设备信息的方法、系统和计算机程序
US11/929,044 US7831623B2 (en) 2004-05-21 2007-10-30 Method, system, and article of manufacture for storing device information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/851,036 US20050262090A1 (en) 2004-05-21 2004-05-21 Method, system, and article of manufacture for storing device information

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/929,044 Continuation US7831623B2 (en) 2004-05-21 2007-10-30 Method, system, and article of manufacture for storing device information

Publications (1)

Publication Number Publication Date
US20050262090A1 true US20050262090A1 (en) 2005-11-24

Family

ID=34968820

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/851,036 Abandoned US20050262090A1 (en) 2004-05-21 2004-05-21 Method, system, and article of manufacture for storing device information
US11/929,044 Expired - Fee Related US7831623B2 (en) 2004-05-21 2007-10-30 Method, system, and article of manufacture for storing device information

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/929,044 Expired - Fee Related US7831623B2 (en) 2004-05-21 2007-10-30 Method, system, and article of manufacture for storing device information

Country Status (6)

Country Link
US (2) US20050262090A1 (de)
EP (1) EP1769330A2 (de)
JP (1) JP2007538327A (de)
KR (1) KR101027248B1 (de)
CN (1) CN100468405C (de)
WO (1) WO2005114372A2 (de)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070067353A1 (en) * 2005-09-22 2007-03-22 Cheng Lee-Chu Smart path finding for file operations
US20080091810A1 (en) * 2006-10-17 2008-04-17 Katherine Tyldesley Blinick Method and Apparatus to Provide Independent Drive Enclosure Blades in a Blade Server System with Low Cost High Speed Switch Modules

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2004222340B2 (en) * 2003-03-14 2009-11-12 Intersect Ent, Inc. Sinus delivery of sustained release therapeutics
US9843475B2 (en) * 2012-12-09 2017-12-12 Connectwise, Inc. Systems and methods for configuring a managed device using an image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020161596A1 (en) * 2001-04-30 2002-10-31 Johnson Robert E. System and method for validation of storage device addresses
US6671727B1 (en) * 1999-12-20 2003-12-30 Lsi Logic Corporation Methodology for providing persistent target identification in a fibre channel environment

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6173374B1 (en) * 1998-02-11 2001-01-09 Lsi Logic Corporation System and method for peer-to-peer accelerated I/O shipping between host bus adapters in clustered computer network
US6119131A (en) * 1998-06-12 2000-09-12 Microsoft Corporation Persistent volume mount points
US6496839B2 (en) * 1998-06-12 2002-12-17 Microsoft Corporation Persistent names for logical volumes
US6260120B1 (en) * 1998-06-29 2001-07-10 Emc Corporation Storage mapping and partitioning among multiple host processors in the presence of login state changes and host controller replacement
US6374266B1 (en) * 1998-07-28 2002-04-16 Ralph Shnelvar Method and apparatus for storing information in a data processing system
US6457098B1 (en) * 1998-12-23 2002-09-24 Lsi Logic Corporation Methods and apparatus for coordinating shared multiple raid controller access to common storage devices
US6549916B1 (en) * 1999-08-05 2003-04-15 Oracle Corporation Event notification system tied to a file system
US6601101B1 (en) * 2000-03-15 2003-07-29 3Com Corporation Transparent access to network attached devices
US7406473B1 (en) * 2002-01-30 2008-07-29 Red Hat, Inc. Distributed file system using disk servers, lock servers and file servers
US20030188022A1 (en) * 2002-03-26 2003-10-02 Falkner Sam L. System and method for recursive recognition of archived configuration data
US20040088294A1 (en) * 2002-11-01 2004-05-06 Lerhaupt Gary S. Method and system for deploying networked storage devices
US6944620B2 (en) * 2002-11-04 2005-09-13 Wind River Systems, Inc. File system creator
JP2004310560A (ja) * 2003-04-09 2004-11-04 Hewlett Packard Japan Ltd アクセス制御システムおよびその方法
US6806756B1 (en) * 2003-06-16 2004-10-19 Delphi Technologies, Inc. Analog signal conditioning circuit having feedback offset cancellation
US7243089B2 (en) * 2003-11-25 2007-07-10 International Business Machines Corporation System, method, and service for federating and optionally migrating a local file system into a distributed file system while preserving local access to existing data
US7698289B2 (en) * 2003-12-02 2010-04-13 Netapp, Inc. Storage system architecture for striping data container content across volumes of a cluster

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6671727B1 (en) * 1999-12-20 2003-12-30 Lsi Logic Corporation Methodology for providing persistent target identification in a fibre channel environment
US20020161596A1 (en) * 2001-04-30 2002-10-31 Johnson Robert E. System and method for validation of storage device addresses

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070067353A1 (en) * 2005-09-22 2007-03-22 Cheng Lee-Chu Smart path finding for file operations
US8595224B2 (en) * 2005-09-22 2013-11-26 International Business Machines Corporation Smart path finding for file operations
US20080091810A1 (en) * 2006-10-17 2008-04-17 Katherine Tyldesley Blinick Method and Apparatus to Provide Independent Drive Enclosure Blades in a Blade Server System with Low Cost High Speed Switch Modules
US7787482B2 (en) 2006-10-17 2010-08-31 International Business Machines Corporation Independent drive enclosure blades in a blade server system with low cost high speed switch modules

Also Published As

Publication number Publication date
JP2007538327A (ja) 2007-12-27
US7831623B2 (en) 2010-11-09
CN1947117A (zh) 2007-04-11
KR20070028362A (ko) 2007-03-12
WO2005114372A2 (en) 2005-12-01
WO2005114372A3 (en) 2006-04-06
KR101027248B1 (ko) 2011-04-06
US20080052296A1 (en) 2008-02-28
EP1769330A2 (de) 2007-04-04
CN100468405C (zh) 2009-03-11

Similar Documents

Publication Publication Date Title
US10838620B2 (en) Efficient scaling of distributed storage systems
US8495131B2 (en) Method, system, and program for managing locks enabling access to a shared resource
US7203774B1 (en) Bus specific device enumeration system and method
US8364645B2 (en) Data management system and data management method
US8166264B2 (en) Method and apparatus for logical volume management
US11403269B2 (en) Versioning validation for data transfer between heterogeneous data stores
US8150936B2 (en) Methods and apparatus to manage shadow copy providers
US6711559B1 (en) Distributed processing system, apparatus for operating shared file system and computer readable medium
US7996643B2 (en) Synchronizing logical systems
US7325078B2 (en) Secure data scrubbing
EP1636690B1 (de) Verwalten einer beziehung zwischen einem zielvolumen und einem quellenvolumen
US7831623B2 (en) Method, system, and article of manufacture for storing device information
US20060059188A1 (en) Operation environment associating data migration method
US7516133B2 (en) Method and apparatus for file replication with a common format
US10209923B2 (en) Coalescing configuration engine, coalescing configuration tool and file system for storage system
US8850132B1 (en) Method and system for providing a shared data resource coordinator within a storage virtualizing data processing system
US8732688B1 (en) Updating system status
US11169728B2 (en) Replication configuration for multiple heterogeneous data stores

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CORREL, STEPHEN F.;SEEGER, JAMES JOHN;WEDLAKE, MARTIME BRUCE;REEL/FRAME:015155/0316;SIGNING DATES FROM 20040827 TO 20040914

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION