EP1828905A4 - System and method for flexible physical-to-logical mapping in raid arrays - Google Patents

System and method for flexible physical-to-logical mapping in raid arrays

Info

Publication number
EP1828905A4
Authority
EP
European Patent Office
Prior art keywords
raid
redundancy
equal
group
memory devices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05800827A
Other languages
German (de)
French (fr)
Other versions
EP1828905A2 (en)
Inventor
Paul Nehse
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seagate Systems UK Ltd
Original Assignee
Xyratex Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xyratex Technology Ltd filed Critical Xyratex Technology Ltd
Publication of EP1828905A2
Publication of EP1828905A4

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0689Disk arrays, e.g. RAID, JBOD

Definitions

  • the present invention relates to allocation of physical resources for logical volumes in redundant arrays of inexpensive disk (RAID) arrays. Specifically, a system and method for assigning physical address space to logical data blocks is presented, wherein data space availability and system management flexibility are increased.
  • RAID architecture was first documented in 1987 when Patterson, Gibson, and Katz published a paper entitled, "A Case for Redundant Arrays of Inexpensive Disks (RAID)" (University of California, Berkeley).
  • RAID architecture combines multiple small, inexpensive disk drives into an array of disk drives that yields performance exceeding that of a Single Large Expensive Drive (SLED).
  • the array of drives appears as a single logical storage unit (LSU) or drive.
  • RAID controllers provide data integrity through redundant data mechanisms, high speed through streamlined algorithms, and accessibility to stored data for users and administrators.
  • a networking technique that is fundamental to the various RAID levels is "striping," a method of concatenating multiple drives into one logical storage unit. Striping involves partitioning each drive's storage space into stripes, which may be as small as one sector (512 bytes) or as large as several megabytes. These stripes are then interleaved in round-robin style, so that the combined space is composed alternately of stripes from each drive. In effect, the storage space of the drives is shuffled like a deck of cards.
  • the choice of stripe size is application dependent and affects the real-time performance of data acquisition and storage in mass storage networks.
  • each record will span across all the drives in the array, each drive storing part of the data from the record. This causes long record accesses to be performed faster, because the data transfer occurs in parallel on multiple drives.
  • Applications such as on-demand video/audio, medical imaging, and data acquisition, which utilize long record accesses, will achieve optimum performance with small stripe arrays.
  • FIG. 1 is a block diagram of a conventional networked storage system 100.
  • Conventional networked storage system 100 includes a plurality of hosts 110A through 110N, where 'N' is any integer value and is not representative of any other value 'N' described herein.
  • Hosts 110 are connected to a communications means 120 that is further coupled via host ports to a plurality of RAID controllers 130A, and 130B through 130N, where 'N' is any integer value and is not representative of any other value 'N' described herein.
  • RAID controllers 130 are connected through device ports to a second communication means 140, which is further coupled to a plurality of memory devices 150, including memory device 150A through 150N, where 'N' is any integer value and is not representative of any other value 'N' described herein.
  • Hosts 110 are representative of any computer systems or terminals that are capable of communicating over a network.
  • Communication means 120 is representative of any type of electronic network that uses a protocol, such as Ethernet.
  • RAID controllers 130 are representative of any storage controller devices that process commands from hosts 110 and, based on those commands, control memory devices 150. RAID controllers 130 also provide data redundancy, based on system administrator programmed RAID levels. Redundancy methods include data mirroring, parity generation, and/or data regeneration from parity after a device failure.
  • Communication means 140 is any type of storage controller network, such as iSCSI or fibre channel.
  • Memory devices 150 may be any type of storage device, such as, for example, tape drives, disk drives, non-volatile memory, or solid state devices. Although most RAID architectures use disk drives as the main storage devices, it should be clear to one skilled in the art that the invention embodiments described herein apply to any type of memory devices.
  • In operation, host 110A, for example, generates a read or a write request for a specific volume (e.g., volume 1), to which it has been assigned access rights.
  • the request is sent through communication means 120 to the host ports of RAID controllers 130.
  • the command is stored in local cache in RAID controller 130B, for example, because RAID controller 130B is programmed to respond to any commands that request volume 1 access.
  • RAID controller 130B processes the request from host 110A and determines, from mapping tables, the first physical memory device 150 address from which to read data or to write new data.
  • If volume 1 is a RAID 5 volume and the command is a write request, RAID controller 130B generates new parity, stores the new parity to a parity memory device 150 via communication means 140, sends a "done" signal to host 110A via communication means 120, and writes the new host 110A data through communication means 140 to corresponding memory devices 150.
  • data is less susceptible to loss from memory device 150 failures and, generally, can be restored from parity and/or functional memory devices 150, in the event of a failure.
  • RAID controllers 130 also have the ability to take over control for a failed RAID controller 130, such that system performance is unaffected or the effects are limited.
  • Typically, Original Equipment Manufacturers (OEMs) bundle RAID networks and sell these memory systems to end users for network storage. OEMs bear the burden of customization of a RAID network and tune the network performance through an API. However, the degree to which a RAID system can be optimized through the API is limited; the API does not adequately handle the unique performance requirements of various dissimilar data storage applications. Additionally, the API does not provide an easily modifiable and secure format for proprietary OEM RAID configurations.
  • There is, therefore, a need for a RAID controller that has the capability to be adequately programmed for unique performance and data storage requirements. Furthermore, the RAID controller configuration should be easily modifiable by a user or system administrator. The general functions of the RAID controller, such as volume allocation, should be optimized to use fewer processing resources, in order to increase overall system performance. Finally, the RAID controller needs to allocate physical storage space to logical volumes in such a way that the majority of the storage capacity is utilized.
  • RAID controller with a mapping function for allocating physical disk space to logical volumes is described in U.S. Patent Application Publication No. 2003/0028727.
  • the '727 application entitled, "RAID Apparatus for Storing a Plurality of Same Logical Volumes on Different Disk Units," describes a RAID apparatus that has a plurality of same logical volumes allocated on a real volume.
  • the real volume is designed so that a plurality of same logical volumes are respectively allocated on different physical disk units and a combination of a plurality of logical volumes allocated on each physical disk unit differs from one physical disk unit to another. This structure prevents uneven loading on the real volume from occurring because of uneven loads on the logical volumes.
  • the '727 application identifies the problem of physical disk device load balancing in a RAID architecture and offers a solution: allocating physical disk space such that equivalent logical volumes reside on separate physical disks for load balancing optimization.
  • the '727 application fails to provide an effective means to allocate volumes to physical storage devices, such that there is greater flexibility in system design.
  • the '727 application does not provide a means for mapping logical volumes to physical storage space with fewer processing cycle requirements.
  • the '727 application does not provide a means for utilizing a greater amount of available space of each storage device, as compared to conventional methods.
  • the present invention provides a method and a computer program for allocating physical memory from a group of N memory devices to logical volumes.
  • the method and program include the step of partitioning the group of N memory devices into a plurality of bands, each of the group of N memory devices sharing a portion of each of the plurality of bands.
  • a cluster map for each of the plurality of bands is generated.
  • the cluster maps indicate the physical address for each of a plurality of clusters.
  • Each of the plurality of clusters is distributed equally over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands.
  • Each of the N memory devices shares an approximately equal number of clusters. Available bands are determined and allocated to a logical volume.
  • the present invention also provides a system for allocating physical memory to logical volumes.
  • the system includes a group of N memory devices partitioned into a plurality of bands. Each of the group of N memory devices shares a portion of each of the plurality of bands.
  • Each of the plurality of bands has a cluster map. Each cluster map indicates the physical address for each of a plurality of clusters. Each of the plurality of clusters is equally distributed over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands.
  • Each of the N memory devices shares an approximately equal number of clusters.
  • An array controller is also configured to determine if a band from the plurality of bands is available and to allocate an available band to a logical volume.
  • Figure 1 is a block diagram of a conventional networked storage system
  • FIG. 2 is a block diagram of a RAID controller system, according to an exemplary embodiment of the invention.
  • Figure 3 shows a group of physical devices that have been grouped into a device group and further grouped into sub-device groups, according to an exemplary embodiment of the invention
  • Figure 4 shows an example of a volume configuration, according to an exemplary embodiment of the invention.
  • Figure 5 is a flow diagram of a method of allocating bands to volumes, according to an exemplary embodiment of the invention.
  • the present invention is a method of allocating physical storage space to logical unit numbers (LUNs) or volumes that use a RAID controller.
  • the method provides greater flexibility to the system administrator through the RAID controller, by systematically assigning various portions of physical space to single or multiple logical device groups.
  • Each device group has specific rules for data usage and allocation.
  • Each device group is further categorized into single or multiple sub-device groups.
  • a special algorithm in the RAID controller arranges physical storage device space into logical units, or bands, that are readily allocated with little metadata overhead per system administrator commands.
  • the physical space is allocated to logical volumes, according to system administrator specifications.
  • FIG. 2 is a block diagram of a RAID controller system 200.
  • RAID controller system 200 includes RAID controllers 130 and a general purpose personal computer (PC) 210.
  • PC 210 further includes a graphical user interface (GUI) 212.
  • RAID controllers 130 further include software applications 220, an operating system 240, and RAID controller hardware 250.
  • Software applications 220 further include a common information module object manager (CIMOM) 222, a software application layer (SAL) 224, a logic library layer (LAL) 226, a system manager (SM) 228, a software watchdog (SWD) 230, a persistent data manager (PDM) 232, an event manager (EM) 234, and a battery backup (BBU) 236.
  • GUI 212 is a software application used to input personality attributes for RAID controllers 130.
  • GUI 212 runs on PC 210.
  • RAID controllers 130 are representative of RAID storage controller devices that process commands from hosts 110 and, based on those commands, control memory devices 150.
  • RAID controllers 130 are an exemplary embodiment of the invention; however, other implementations of controllers may be envisioned here by those skilled in the art.
  • RAID controllers 130 provide data redundancy, based on system-administrator-programmed RAID levels. This includes data mirroring, parity generation, and/or data regeneration from parity after a device failure.
  • RAID controller hardware 250 is the physical processor platform of RAID controllers 130 that executes all RAID controller software applications 220 and consists of a microprocessor, memory, and all other electronic devices necessary for RAID control.
  • Operating system 240 is an industry-standard software platform, such as Linux, for example, upon which software applications 220 can run. Operating system 240 delivers other benefits to RAID controllers 130. Operating system 240 contains utilities, such as a file system, that provide a way for RAID controllers 130 to store and transfer files.
  • Software applications 220 include algorithms and logic necessary for the RAID controllers 130 and are divided into those needed for initialization and those that operate at run-time.
  • Initialization software applications 220 consist of the following software functional blocks: CIMOM 222, which is a module that instantiates all objects in software applications 220 with the personality attributes entered; SAL 224, which is the application layer upon which the run-time modules execute; and LAL 226, a library of low-level hardware commands used by a RAID transaction processor.
  • Software applications 220 that operate at run-time include the following software functional blocks: system manager 228, a module that carries out the run-time executive; SWD 230, a module that provides software supervision function for fault management; PDM 232, a module that handles the personality data within software applications 220; EM 234, a task scheduler that launches software applications 220 under conditional execution; and BBU 236, a module that handles power bus management for battery backup.
  • SM 228 is responsible for allocating physical space to newly requested volumes and adding physical space to existing volumes when new devices are added to the system.
  • SM 228 takes commands from the system administrator (e.g., assigning new volumes or creating new sub-device groups) and executes those commands. Commands that cannot be processed (because of lack of space available, for example) are returned as error messages to the system administrator.
  • the volume allocation function of SM 228 is described in more detail in Figure 4.
  • Figure 3 shows an example of a group of physical devices 300 that have been grouped into a device group 310 and further grouped into sub-device groups 320a, 320b, and 320c by a system administrator through SM 228.
  • a device group 310 may be assigned to multiple logical volumes 330, which include a plurality of LUNs 330a - 330n that have varying sizes and RAID levels, where 'n' is any integer value and is not representative of any other value 'n' described herein.
  • the maximum number of logical volumes 330 assigned to device group 310 depends on the size of logical volumes 330 and the number of sub-device groups 320 within device group 310.
  • a sub-device group 320 may include from one to sixteen physical devices; however, all devices must be the same class of storage.
  • the class of storage is defined by the system administrator. It may be based on the types of devices in sub-device group 320, such as fibre channel or serial ATA, or based on physical characteristics, such as rotation speed or size, or based on logical considerations, such as function, department, or user.
  • SM 228 defaults all physical devices to the same class of storage. After installation, the system administrator may define new classes of storage.
  • SM 228 further divides each storage sub-device group 320 into bands, which are the smallest unit of logical storage assigned to a logical volume 330. By categorizing the storage area in such a manner, the granularity of each storage unit allows more physical space to be utilized.
  • Table 1 shows an example of bands that stripe across all the devices within a sub-device group 320. There are n number of bands in sub-device group 320, depending on the capacity of each device.
  • Each band may be assigned to RAID 0 or RAID 5.
  • a band may be assigned to contain master volume data, mirror volume data, or snap volume data, as defined below.
  • the master volume data band format is used when space is allocated to a master volume (e.g., volume 330a).
  • the master volume may include one or more bands; however, all bands in that volume must be in the same sub-device group 320 (e.g., 320a).
  • the amount of user space within a band varies, depending on the RAID level.
  • the data band may be configured for either RAID level 0 or 5.
  • a mirror volume may include one or more bands, but all mirror bands associated with a master volume must be in a different sub-device group (e.g., sub-device group 320b) than the bands used for the master volume.
  • the amount of user space within a band varies, depending on the RAID level.
  • the mirror band may be configured for either RAID level 0 or 5 and is not required to be the same RAID level as the master volume.
  • the snap band format is used when space is allocated for a point in time copy of a master volume.
  • the snap volume may include one or more bands, and all snap bands associated with a master volume may be in the same or different sub-device group.
  • the amount of user space within a band varies, depending on the RAID level.
  • the snap band may be configured for either RAID level 0 or 5 and is not required to be the same RAID level as the master volume.
  • Bands are expanded through the addition of devices to the sub-device group in which the bands reside. At any time after sub-device group 320a is created, it may be expanded through addition of one or more devices to sub-device group 320a. After the devices are added, SM 228 migrates the existing bands to use the added devices. When the migration is complete, sub-device group 320a will include additional bands that may then be allocated to new or existing logical volumes 330.
  • Table 2 shows an example of a redundancy group (RGrp) mapping for various numbers (integer power of two only) of devices in a sub-device group for RAID 0 (no parity device is required) for a single band.
  • Each band is further sub-divided into a plurality of RGrps, depending on the type of RAID level defined by the system administrator and the number of devices within a sub-device group 320.
  • RGrp describes the RAID level, stripe size, number of devices, device path used, and location of the data within sub-device group 320a.
  • the number of RGrps assigned to sub-device group 320a must be an integer power of two for RAID 0 and an integer power of two plus one additional device for RAID 5 (for parity data).
  • Table 3 shows an example of an RGrp mapping of RGrps for integer power of two plus one sub-device groups 320 for RAID 5 (for parity data) for a single band in sub-device group 320b.
  • the number of RGrps assigned to sub-device group 320b must be an integer power of two plus one additional device for RAID 5 (for parity data).
  • Table 4 shows an example of an RGrp mapping of RGrps for a RAID 0 band in sub-device group 320b that does not include an integer power of two number of devices.
  • rotating RGrps (RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, RGrp6, and RGrp7) are used to map the band.
  • the number of RGrps required to map the entire band is equal to the number of devices within any sub-device group 320.
  • in Table 4, there are seven RGrps required to map a RAID 0 band in sub-device group 320 that includes seven devices.
  • Each RGrp is striped across the devices, such that there is an integer power of two number of devices (e.g., 2, 4, 8, and so on, for RAID 0) with a specific RGrp and no device has two stripes of the same RGrp.
  • the seven disk sub-device group 320 in Table 4 cannot use eight devices for rotating a specific RGrp, because Device 1 would contain two stripes of RGrp1.
  • the next available choice is four (integer power of 2), which satisfies the RGrp assignment rules by rotating onto four devices (RGrp1) before beginning a new RGrp (RGrp2).
  • Table 5 shows an example of a rotating RGrp mapping for sub-device groups 320 in a RAID 5 band that do not equal integer powers of two plus one devices for parity (e.g., 3, 5, 9, and so on).
  • Table 5 outlines the process for band RGrp mapping in a RAID 5 level that does not include an integer power of two number of devices plus a parity device in sub-device groups 320.
  • the number of RGrps is equal to the number of devices in each of sub-device groups 320.
  • there are four RGrps in the four device sub-device group 320, namely RGrp1, RGrp2, RGrp3, and RGrp4; six RGrps in the six device sub-device group 320, namely RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, and RGrp6; and eight RGrps in the eight device sub-device group 320, namely RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, RGrp6, RGrp7, and RGrp8.
  • the number of devices an RGrp will stripe across is equal to an integer power of two plus one for the next lower integer power of two plus one multiple.
  • the next lower integer power of two plus one is four plus one, which is five. Therefore, each RGrp (RGrpl - 6) stripes across five devices in an eight disk sub-device group 320.
  • the next lower integer power of two plus one for the six disk sub-device group is also four plus one, which is five.
  • the next lower integer power of two plus one multiple is two plus one, which is three.
  • four RGrps (RGrps 1-4) stripe across three disks in a sub-device group 320.
  • Each RGrp category striped across multiple devices is known as a cluster.
  • the RGrp1 sections together combine into a single cluster.
  • RGrp2 sections are another cluster, and so on.
  • a cluster is a configurable value that is used to manage user data within a sub-device group. It is not used for managing parity data for RAID 5 volumes.
  • the minimum cluster size is 1MB and must be an integer power of two.
  • the cluster size is set before any device groups or volumes are created, and that size is used by all device groups within RAID controller 130.
  • Table 6 shows an example of a cluster map that includes clusters of a single band in an eight disk sub-device group 320 that is configured for a RAID 0 level.
  • the band is in an eight disk sub-device group 320 at RAID level 0 and includes n+10 clusters, which are all mapped to RGrp1. Since eight is an integer power of two, rotating RGrps are not required and, therefore, the band can use the same RGrp, in this case, RGrp1. For this configuration, only one RGrp is required to map all the clusters in the band.
  • Table 7 illustrates an example of a RAID 5 cluster map in which rotating redundancy is required, because the number of disks is not equal to an integer power of two plus one. Therefore, eight RGrps are required to map across all of the disks in sub-device group 320. This translates to eight clusters for that stripe. The RGrp rotation repeats for the next stripe, which translates into another group of eight clusters that has an offset of six (RGrp1 starts 6 stripes up from the first stripe). The third group of RGrps maps to a third set of eight clusters with an offset of eleven (RGrp1 starts again 11 stripes from the first stripe), and so on.
  • Groups of eight clusters are mapped by eight RGrps, and each set is identified by a specific offset in the map.
  • the top of the band has space available to map six clusters only, because a single RGrp must span five disks (in this example), and there is not enough space to map RGrp7 or RGrp8, so the map completes at the end of RGrp6, which spans the required five disks.
  • Figure 4 shows an example of a volume configuration 1100, which includes a volume band list 1110, which further includes volumes 330a and 330b, when a create volume command is received by SM 228 for a RAID level 0 logical volume 330a and a RAID level 5 logical volume 330b.
  • logical volumes 330 only require one sub-device group 320, as no mirroring data in a separate sub-device group 320 is required.
  • SM 228 allocates bands in a sub-device group 320a (for example) to logical volume 330a and assigns the bands a RAID level 0. When the bands are assigned to logical volume 330a, they move from a state of being free to that of being allocated.
  • SM 228 also allocates bands in a sub-device group 320a (for example) to logical volume 330b and assigns the bands a RAID level 5. When the bands are assigned to logical volume 330b, they move from a state of being free to that of being allocated.
  • Figure 4 shows an example of SM 228 allocating bands in a sub-device group 320 to two different logical volumes 330, a RAID 0 logical volume 330a that is 6GB, and a RAID 5 logical volume 330b that is 5.1GB.
  • Figure 5 is a method 500 of allocating bands to volumes.
  • SM 228 divides each sub-device group into bands that may later be assigned to a RAID 0 or a RAID 5 volume. This process includes the following method steps:
  • Step 510 Calculating redundancy group
  • Step 520 Calculating common band widths for RAID 0 and RAID 5
  • SM 228 compares the RAID 0 redundancy group map to the RAID 5 redundancy group map for a particular sub-device group and determines a common 1 MB boundary, where a full redundancy group rotation ends. This marks a band boundary, where either RAID 0 or RAID 5 may be assigned to the band.
  • Method 500 proceeds to step 530.
  • Step 530 Calculating cluster map for each band
  • SM 228 calculates the cluster maps for each of the bands, as the band boundaries have already been defined in the previous steps for each sub-device group, and redundancy groups have already been calculated for each band for both RAID 0 and RAID 5.
  • Cluster maps for rotating redundancy are in a slightly different format from cluster maps in which a single redundancy group maps all of the clusters in a band, as shown in Tables 7 and 6, respectively.
  • Method 500 proceeds to step 540.
  • Step 540 Are there any free bands?
  • SM 228 receives a request for a new volume creation, including information about the size of the requested volume, the desired sub-device group, and its RAID level. SM 228 analyzes the sub-device group for bands that are free and bypasses bands that are already allocated to other volumes. SM 228 checks whether there are any free bands left for allocation in the requested sub-device group. If yes, method 500 proceeds to step 550; if no, method 500 proceeds to step 570.
  • Step 550 Allocating a band to a volume
  • SM 228 allocates to the new volume the first available band that meets the requirements for the requested volume and assigns the requested RAID type to the band. SM 228 continues to scan for free bands, until the entire requested volume size has been satisfied with enough allocated bands from the sub-device groups. However, if there are not enough free bands to allocate to the new volume, SM 228 generates a message to the system administrator when the space allocated to the volume begins to reach capacity and informs the system administrator that data should be migrated to other volumes or that more memory devices 150 should be added to the sub-device group. A minimal sketch of this allocation loop appears after this list. Method 500 proceeds to step 560.
  • Step 560: Bringing volume online
  • SM 228 sets the state of the allocated bands from “free” to “allocated” and brings the new volume online by allowing host access.
  • Method 500 ends.
  • Step 570 Generating volume creation error
  • SM 228 generates an error message to the system administrator that indicates that there are no free bands in the desired sub-device group with which to allocate the newly requested volume.
  • Method 500 ends.
  • the RAID controller's processor has more throughput available for other system resources and thereby increases overall system performance over that of conventional networked storage systems.
  • This method of allocation also allows more user flexibility in designing the system for various data storage needs, because the pre-mapped bands are assigned to a new volume, as defined by the user, rather than by the RAID controller that allocates volumes according to internal algorithms with little or no user input.
  • this allocation method allows more memory device capacity to be utilized, because the bands align on the nearest megabyte boundaries and the way the clusters are laid out results in very little unused space on the devices.
  • the only space that is not available to the user is the Meta Data area and a portion at the end of the device.
  • the unmapped space at the end of the device is used for reassigning clusters during error recovery.
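
The band-allocation steps 540 through 570 above can be summarized in a short sketch. The data structures and names below are illustrative assumptions, not taken from the patent; the loop scans the requested sub-device group for free bands, marks them allocated with the requested RAID level, raises an error when no free band exists (step 570), and leaves the capacity shortfall to the administrator warning described in step 550.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Band:
    index: int
    user_space_mb: int             # usable capacity of the band at the requested RAID level
    state: str = "free"            # "free" or "allocated"
    raid_level: Optional[int] = None

@dataclass
class SubDeviceGroup:
    bands: List[Band] = field(default_factory=list)

def allocate_volume(group: SubDeviceGroup, requested_mb: int, raid_level: int) -> List[Band]:
    """Scan the group for free bands and allocate them until the requested size is covered."""
    chosen, remaining = [], requested_mb
    for band in group.bands:
        if remaining <= 0:
            break
        if band.state != "free":           # bypass bands already allocated to other volumes
            continue
        band.state = "allocated"
        band.raid_level = raid_level
        chosen.append(band)
        remaining -= band.user_space_mb
    if not chosen:
        # Step 570: nothing free in the requested sub-device group.
        raise RuntimeError("no free bands in the requested sub-device group")
    if remaining > 0:
        # Step 550's warning path: the administrator is told to migrate data or add devices.
        print("warning: allocated bands do not cover the full requested size")
    return chosen                           # the caller would now bring the volume online (step 560)

if __name__ == "__main__":
    group = SubDeviceGroup(bands=[Band(i, user_space_mb=1024) for i in range(4)])
    volume = allocate_volume(group, requested_mb=2048, raid_level=0)
    print([b.index for b in volume])        # the first two free bands receive the new RAID 0 volume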

Abstract

A system, method and computer program for allocating physical memory from a group of N memory devices to logical volumes. A group of N memory devices is partitioned into a plurality of bands, each of the group of N memory devices sharing a portion of each of the plurality of bands. A cluster map for each of the plurality of bands is generated. The cluster maps indicate the physical address for each of a plurality of clusters. Each of the plurality of clusters is distributed equally over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands. Each of the N memory devices shares an approximately equal number of clusters. Available bands are determined and allocated to a logical volume.

Description

SYSTEM AND METHOD FOR FLEXIBLE PHYSICAL-TO-LOGICAL
MAPPING IN RAID ARRAYS
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application Serial No. 60/611,802, filed September 22, 2004 in the U.S. Patent and Trademark Office, the entire content of which is incorporated by reference herein.
FIELD OF THE INVENTION
[0002] The present invention relates to allocation of physical resources for logical volumes in redundant arrays of inexpensive disk (RAID) arrays. Specifically, a system and method for assigning physical address space to logical data blocks is presented, wherein data space availability and system management flexibility are increased.
BACKGROUND OF THE INVENTION
[0003] Currently, redundant arrays of inexpensive disk (RAID) arrays are the principal storage architecture for large, networked computer storage systems. RAID architecture was first documented in 1987 when Patterson, Gibson, and Katz published a paper entitled, "A Case for Redundant Arrays of Inexpensive Disks (RAID)" (University of California, Berkeley). Fundamentally, RAID architecture combines multiple small, inexpensive disk drives into an array of disk drives that yields performance exceeding that of a Single Large Expensive Drive (SLED). Additionally, the array of drives appears as a single logical storage unit (LSU) or drive. Five types of array architectures, designated as RAID-1 through RAID-5, were defined by the Berkeley paper, each type providing disk fault-tolerance and offering different trade-offs in features and performance. In addition to the five redundant array architectures, a non-redundant array of disk drives is referred to as a RAID-0 array. RAID controllers provide data integrity through redundant data mechanisms, high speed through streamlined algorithms, and accessibility to stored data for users and administrators.
[0004] A networking technique that is fundamental to the various RAID levels is "striping," a method of concatenating multiple drives into one logical storage unit. Striping involves partitioning each drive's storage space into stripes, which may be as small as one sector (512 bytes) or as large as several megabytes. These stripes are then interleaved in round-robin style, so that the combined space is composed alternately of stripes from each drive. In effect, the storage space of the drives is shuffled like a deck of cards. The type of application environment, I/O or data intensive, determines whether large or small stripes should be used. The choice of stripe size is application dependent and affects the real-time performance of data acquisition and storage in mass storage networks. In data intensive environments and single-user systems which access large records, small stripes (typically one 512-byte sector in length) can be used, so that each record will span across all the drives in the array, each drive storing part of the data from the record. This causes long record accesses to be performed faster, because the data transfer occurs in parallel on multiple drives. Applications such as on-demand video/audio, medical imaging, and data acquisition, which utilize long record accesses, will achieve optimum performance with small stripe arrays.
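As an illustration of the round-robin striping arithmetic just described, the following Python sketch maps a logical sector number to a drive, a stripe on that drive, and an offset within the stripe. The function and parameter names are illustrative and not taken from the patent; only the 512-byte sector size comes from the text.

def map_lba(lba, num_drives, sectors_per_stripe):
    """Map a logical sector number to (drive, stripe_on_drive, sector_within_stripe)."""
    stripe_index = lba // sectors_per_stripe      # which stripe, counting across the whole array
    offset = lba % sectors_per_stripe             # sector within that stripe
    drive = stripe_index % num_drives             # round-robin interleave across the drives
    stripe_on_drive = stripe_index // num_drives  # how far down the chosen drive the stripe sits
    return drive, stripe_on_drive, offset

if __name__ == "__main__":
    # With four drives and one-sector stripes, consecutive sectors land on consecutive drives.
    for lba in range(8):
        print(lba, map_lba(lba, num_drives=4, sectors_per_stripe=1))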
[0005] Figure 1 is a block diagram of a conventional networked storage system 100. Conventional networked storage system 100 includes a plurality of hosts 110A through 110N, where 'N' is any integer value and is not representative of any other value 'N' described herein. Hosts 110 are connected to a communications means 120 that is further coupled via host ports to a plurality of RAID controllers 130A, and 130B through 130N, where 'N' is any integer value and is not representative of any other value 'N' described herein. RAID controllers 130 are connected through device ports to a second communication means 140, which is further coupled to a plurality of memory devices 150, including memory device 150A through 150N, where 'N' is any integer value and is not representative of any other value 'N' described herein.
[0006] Hosts 110 are representative of any computer systems or terminals that are capable of communicating over a network. Communication means 120 is representative of any type of electronic network that uses a protocol, such as Ethernet. RAID controllers 130 are representative of any storage controller devices that process commands from hosts 110 and, based on those commands, control memory devices 150. RAID controllers 130 also provide data redundancy, based on system administrator programmed RAID levels. Redundancy methods include data mirroring, parity generation, and/or data regeneration from parity after a device failure. Communication means 140 is any type of storage controller network, such as iSCSI or fibre channel. Memory devices 150 may be any type of storage device, such as, for example, tape drives, disk drives, non-volatile memory, or solid state devices. Although most RAID architectures use disk drives as the main storage devices, it should be clear to one skilled in the art that the invention embodiments described herein apply to any type of memory devices.
[0007] In operation, host 110A, for example, generates a read or a write request for a specific volume (e.g., volume 1), to which it has been assigned access rights. The request is sent through communication means 120 to the host ports of RAID controllers 130. The command is stored in local cache in RAID controller 130B, for example, because RAID controller 130B is programmed to respond to any commands that request volume 1 access. RAID controller 130B processes the request from host 110A and determines, from mapping tables, the first physical memory device 150 address from which to read data or to write new data. If volume 1 is a RAID 5 volume and the command is a write request, RAID controller 130B generates new parity, stores the new parity to a parity memory device 150 via communication means 140, sends a "done" signal to host 110A via communication means 120, and writes the new host 110A data through communication means 140 to corresponding memory devices 150. As a result, data is less susceptible to loss from memory device 150 failures and, generally, can be restored from parity and/or functional memory devices 150, in the event of a failure. RAID controllers 130 also have the ability to take over control for a failed RAID controller 130, such that system performance is unaffected or the effects are limited.
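The parity step in the write sequence above can be illustrated with the conventional XOR arithmetic used for RAID 5; the patent does not specify the controller's exact parity routine, so the following sketch is an assumed, textbook-style example with illustrative names.

def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    """Read-modify-write update: new parity = old parity XOR old data XOR new data."""
    assert len(old_parity) == len(old_data) == len(new_data)
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

def full_stripe_parity(data_strips: list) -> bytes:
    """Parity for a full-stripe write is the XOR of all data strips."""
    parity = bytearray(len(data_strips[0]))
    for strip in data_strips:
        for i, b in enumerate(strip):
            parity[i] ^= b
    return bytes(parity)

if __name__ == "__main__":
    strips = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]), bytes([9, 10, 11, 12])]
    parity = full_stripe_parity(strips)
    new_strip = bytes([0xAA, 0xBB, 0xCC, 0xDD])
    # Updating strip 1 incrementally must give the same parity as recomputing from scratch.
    assert update_parity(parity, strips[1], new_strip) == full_stripe_parity([strips[0], new_strip, strips[2]])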
[0008] The operation of most standard RAID controllers is set at the Application Programming Interface (API) level. Typically, Original Equipment Manufacturers (OEMs) bundle RAID networks and sell these memory systems to end users for network storage. OEMs bear the burden of customization of a RAID network and tune the network performance through an API. However, the degree to which a RAID system can be optimized through the API is limited; the API does not adequately handle the unique performance requirements of various dissimilar data storage applications. Additionally, the API does not provide an easily modifiable and secure format for proprietary OEM RAID configurations.
[0009] There is, therefore, a need for a RAID controller that has the capability to be adequately programmed for unique performance and data storage requirements. Furthermore, the RAID controller configuration should be easily modifiable by a user or system administrator. The general functions of the RAID controller, such as volume allocation, should be optimized to use fewer processing resources, in order to increase overall system performance. Finally, the RAID controller needs to allocate physical storage space to logical volumes in such a way that the majority of the storage capacity is utilized.
[0010] An example RAID controller with a mapping function for allocating physical disk space to logical volumes is described in U.S. Patent Application Publication No. 2003/0028727. The '727 application, entitled, "RAID Apparatus for Storing a Plurality of Same Logical Volumes on Different Disk Units," describes a RAID apparatus that has a plurality of same logical volumes allocated on a real volume. The real volume is designed so that a plurality of same logical volumes are respectively allocated on different physical disk units and a combination of a plurality of logical volumes allocated on each physical disk unit differs from one physical disk unit to another. This structure prevents uneven loading on the real volume from occurring because of uneven loads on the logical volumes.
[0011] The '727 application identifies the problem of physical disk device load balancing in a RAID architecture and offers a solution: allocating physical disk space such that equivalent logical volumes reside on separate physical disks for load balancing optimization. However, the '727 application fails to provide an effective means to allocate volumes to physical storage devices, such that there is greater flexibility in system design. Furthermore, the '727 application does not provide a means for mapping logical volumes to physical storage space with fewer processing cycle requirements. Finally, the '727 application does not provide a means for utilizing a greater amount of available space of each storage device, as compared to conventional methods.
[0012] It is therefore an object of this invention to provide a system and method for assigning physical storage space in a RAID array, such that maximum system flexibility is available to the administrator(s).
[0013] It is another object of the invention to provide a system and method for assigning physical storage space in a RAID array, such that fewer processing cycles are needed to maintain mapping information when a new volume is created.
[0014] It is yet another object of this invention to provide a system and method for assigning physical storage space in a RAID array, such that more data storage capacity is available.
BRIEF SUMMARY OF THE INVENTION
[0015] The present invention provides a method and a computer program for allocating physical memory from a group of N memory devices to logical volumes. The method and program include the step of partitioning the group of N memory devices into a plurality of bands, each of the group of N memory devices sharing a portion of each of the plurality of bands. A cluster map for each of the plurality of bands is generated. The cluster maps indicate the physical address for each of a plurality of clusters. Each of the plurality of clusters is distributed equally over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands. Each of the N memory devices shares an approximately equal number of clusters. Available bands are determined and allocated to a logical volume.
[0016] The present invention also provides a system for allocating physical memory to logical volumes. The system includes a group of N memory devices partitioned into a plurality of bands. Each of the group of N memory devices shares a portion of each of the plurality of bands. Each of the plurality of bands has a cluster map. Each cluster map indicates the physical address for each of a plurality of clusters. Each of the plurality of clusters is equally distributed over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands. Each of the N memory devices shares an approximately equal number of clusters. An array controller is also configured to determine if a band from the plurality of bands is available and to allocate an available band to a logical volume.
[0017] These and other aspects of the invention will be more clearly recognized from the following detailed description of the invention which is provided in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] Figure 1 is a block diagram of a conventional networked storage system;
[0019] Figure 2 is a block diagram of a RAID controller system, according to an exemplary embodiment of the invention;
[0020] Figure 3 shows a group of physical devices that have been grouped into a device group and further grouped into sub-device groups, according to an exemplary embodiment of the invention;
[0021] Figure 4 shows an example of a volume configuration, according to an exemplary embodiment of the invention; and
[0022] Figure 5 is a flow diagram of a method of allocating bands to volumes, according to an exemplary embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0023] The present invention is a method of allocating physical storage space to logical unit numbers (LUNs) or volumes that use a RAID controller. The method provides greater flexibility to the system administrator through the RAID controller, by systematically assigning various portions of physical space to single or multiple logical device groups. Each device group has specific rules for data usage and allocation. Each device group is further categorized into single or multiple sub-device groups. A special algorithm in the RAID controller arranges physical storage device space into logical units, or bands, that are readily allocated with little metadata overhead per system administrator commands. The physical space is allocated to logical volumes, according to system administrator specifications.
[0024] Figure 2 is a block diagram of a RAID controller system 200. RAID controller system 200 includes RAID controllers 130 and a general purpose personal computer (PC) 210. PC 210 further includes a graphical user interface (GUI) 212. RAID controllers 130 further include software applications 220, an operating system 240, and RAID controller hardware 250. Software applications 220 further include a common information module object manager (CIMOM) 222, a software application layer (SAL) 224, a logic library layer (LAL) 226, a system manager (SM) 228, a software watchdog (SWD) 230, a persistent data manager (PDM) 232, an event manager (EM) 234, and a battery backup (BBU) 236.
[0025] GUI 212 is a software application used to input personality attributes for RAID controllers 130. GUI 212 runs on PC 210. RAID controllers 130 are representative of RAID storage controller devices that process commands from hosts 110 and, based on those commands, control memory devices 150. As shown in Figure 2, RAID controllers 130 are an exemplary embodiment of the invention; however, other implementations of controllers may be envisioned here by those skilled in the art. RAID controllers 130 provide data redundancy, based on system-administrator-programmed RAID levels. This includes data mirroring, parity generation, and/or data regeneration from parity after a device failure. RAID controller hardware 250 is the physical processor platform of RAID controllers 130 that executes all RAID controller software applications 220 and consists of a microprocessor, memory, and all other electronic devices necessary for RAID control. Operating system 240 is an industry-standard software platform, such as Linux, for example, upon which software applications 220 can run. Operating system 240 delivers other benefits to RAID controllers 130. Operating system 240 contains utilities, such as a file system, that provide a way for RAID controllers 130 to store and transfer files. Software applications 220 include algorithms and logic necessary for the RAID controllers 130 and are divided into those needed for initialization and those that operate at run-time. Initialization software applications 220 consist of the following software functional blocks: CIMOM 222, which is a module that instantiates all objects in software applications 220 with the personality attributes entered; SAL 224, which is the application layer upon which the run-time modules execute; and LAL 226, a library of low-level hardware commands used by a RAID transaction processor.
[0026] Software applications 220 that operate at run-time include the following software functional blocks: system manager 228, a module that carries out the run-time executive; SWD 230, a module that provides software supervision function for fault management; PDM 232, a module that handles the personality data within software applications 220; EM 234, a task scheduler that launches software applications 220 under conditional execution; and BBU 236, a module that handles power bus management for battery backup.
[0027] SM 228 is responsible for allocating physical space to newly requested volumes and adding physical space to existing volumes when new devices are added to the system. SM 228 takes commands from the system administrator (e.g., assigning new volumes or creating new sub-device groups) and executes those commands. Commands that cannot be processed (because of lack of space available, for example) are returned as error messages to the system administrator. The volume allocation function of SM 228 is described in more detail in Figure 4.
[0028] Figure 3 shows an example of a group of physical devices 300 that have been grouped into a device group 310 and further grouped into sub-device groups 320a, 320b, and 320c by a system administrator through SM 228. A device group 310 may be assigned to multiple logical volumes 330, which include a plurality of LUNs 330a - 330n that have varying sizes and RAID levels, where 'n' is any integer value and is not representative of any other value 'n' described herein. The maximum number of logical volumes 330 assigned to device group 310 depends on the size of logical volumes 330 and the number of sub-device groups 320 within device group 310.
[0029] A sub-device group 320 may include from one to sixteen physical devices; however, all devices must be the same class of storage. The class of storage is defined by the system administrator. It may be based on the types of devices in sub-device group 320, such as fibre channel or serial ATA, or based on physical characteristics, such as rotation speed or size, or based on logical considerations, such as function, department, or user. At system installation, SM 228 defaults all physical devices to the same class of storage. After installation, the system administrator may define new classes of storage.
[0030] SM 228 further divides each storage sub-device group 320 into bands, which are the smallest unit of logical storage assigned to a logical volume 330. By categorizing the storage area in such a manner, the granularity of each storage unit allows more physical space to be utilized. Table 1 shows an example of bands that stripe across all the devices within a sub-device group 320. There are n number of bands in sub-device group 320, depending on the capacity of each device.
Table 1
[0031] Each band may be assigned to RAID 0 or RAID 5. There are three band formats: master volume data, mirror volume data, and snap volume data. A band may be assigned to contain master volume data, mirror volume data, or snap volume data, as defined below.
[0032] The master volume data band format is used when space is allocated to a master volume (e.g., volume 330a). The master volume may include one or more bands; however, all bands in that volume must be in the same sub-device group 320 (e.g., 320a). The amount of user space within a band varies, depending on the RAID level. The data band may be configured for either RAID level 0 or 5.
[0033] When space is allocated as a mirror to a master volume, the mirror band format is used. A mirror volume may include one or more bands, but all mirror bands associated with a master volume must be in a different sub-device group (e.g., sub-device group 320b) than the bands used for the master volume. The amount of user space within a band varies, depending on the RAID level. The mirror band may be configured for either RAID level 0 or 5 and is not required to be the same RAID level as the master volume.
[0034] The snap band format is used when space is allocated for a point in time copy of a master volume. The snap volume may include one or more bands, and all snap bands associated with a master volume may be in the same or different sub-device group. The amount of user space within a band varies, depending on the RAID level. The snap band may be configured for either RAID level 0 or 5 and is not required to be the same RAID level as the master volume.
[0035] Bands are expanded through the addition of devices to the sub-device group in which the bands reside. At any time after sub-device group 320a is created, it may be expanded through addition of one or more devices to sub-device group 320a. After the devices are added, SM 228 migrates the existing bands to use the added devices. When the migration is complete, sub-device group 320a will include additional bands that may then be allocated to new or existing logical volumes 330.
[0036] Table 2 shows an example of a redundancy group (RGrp) mapping for various numbers (integer power of two only) of devices in a sub-device group for RAID 0 (no parity device is required) for a single band. Each band is further sub-divided into a plurality of RGrps, depending on the type of RAID level defined by the system administrator and the number of devices within a sub-device group 320. RGrp describes the RAID level, stripe size, number of devices, device path used, and location of the data within sub-device group 320a. The number of RGrps assigned to sub-device group 320a must be an integer power of two for RAID 0 and an integer power of two plus one additional device for RAID 5 (for parity data).
[0037] Table 3 shows an example of an RGrp mapping of RGrps for integer power of two plus one sub-device groups 320 for BAID 5 (for parity data) for a single band in sub-device group 320b. The number of RGrps assigned to sub-device group 320b must be an integer power of two plus one additional device for RAID 5 (for parity data).
[0038] Table 4 shows an example of an RGrp mapping for a RAID 0 band in sub-device group 320b that does not include an integer power of two number of devices.
[0039] In this example, rotating RGrps (RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, RGrp6, and RGrp7) are used to map the band. The number of RGrps required to map the entire band is equal to the number of devices within any sub-device group 320. For example, in Table 4, seven RGrps are required to map a RAID 0 band in a sub-device group 320 that includes seven devices. Each RGrp is striped across the devices, such that an integer power of two number of devices (e.g., 2, 4, 8, and so on, for RAID 0) carries a specific RGrp and no device has two stripes of the same RGrp. For example, the seven-disk sub-device group 320 in Table 4 cannot use eight devices for rotating a specific RGrp, because Device 1 would contain two stripes of RGrp1. The next available choice is four (an integer power of two), which satisfies the RGrp assignment rules by rotating onto four devices (RGrp1) before beginning a new RGrp (RGrp2).
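A minimal sketch of this rotating assignment follows, assuming a simple row-major fill of (stripe, device) slots. The exact on-disk ordering of Table 4 is not reproduced in this text, so the layout below is only one arrangement that satisfies the stated rules; the helper name rotating_rgrp_map is hypothetical.

```python
def rotating_rgrp_map(num_devices: int, stripe_width: int, num_rgrps: int):
    """Assign rotating RGrps to (stripe, device) slots in row-major order.

    Each RGrp occupies `stripe_width` consecutive slots; because
    stripe_width <= num_devices, the slots of one RGrp always land on
    distinct devices, satisfying the rule that no device carries two
    stripes of the same RGrp.
    """
    total_slots = num_rgrps * stripe_width        # e.g. 7 * 4 = 28 slots
    layout = {}                                   # (stripe, device) -> RGrp number
    for slot in range(total_slots):
        stripe, device = divmod(slot, num_devices)
        layout[(stripe, device)] = slot // stripe_width + 1
    return layout

# Seven-device RAID 0 band, as in the example above:
# seven RGrps, each striped across four devices, over four stripes.
band = rotating_rgrp_map(num_devices=7, stripe_width=4, num_rgrps=7)
```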
[0040] Table 5 shows an example of a rotating RGrp mapping for a RAID 5 band in sub-device groups 320 whose device counts do not equal an integer power of two plus one device for parity (e.g., 3, 5, 9, and so on).
Table 5
[0041] Table 5 outlines the process for band RGrp mapping at RAID level 5 when sub-device groups 320 do not include an integer power of two number of devices plus a parity device. As in the previous example, the number of RGrps (RGrp1, RGrp2, RGrp3, and so on) is equal to the number of devices in each of sub-device groups 320. Therefore, there are four RGrps in the four-device sub-device group 320, namely RGrp1, RGrp2, RGrp3, and RGrp4; six RGrps in the six-device sub-device group 320, namely RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, and RGrp6; and eight RGrps in the eight-device sub-device group 320, namely RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, RGrp6, RGrp7, and RGrp8. The number of devices an RGrp stripes across is equal to the next lower integer power of two plus one that fits within the sub-device group. For example, in the eight-disk sub-device group 320, the next lower integer power of two plus one is four plus one, which is five. Therefore, each RGrp (RGrp1 through RGrp8) stripes across five devices in an eight-disk sub-device group 320. Similarly, the next lower integer power of two plus one for the six-disk sub-device group is also four plus one, which is five. In the four-disk sub-device group band, the next lower integer power of two plus one is two plus one, which is three. Thus, the four RGrps (RGrp1 through RGrp4) stripe across three disks in a sub-device group 320.
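The RAID 5 counterpart of the earlier sketch follows. Again, this is illustrative only; the helper name raid5_rgrp_layout is an assumption and does not appear in the specification.

```python
def raid5_rgrp_layout(num_devices: int) -> tuple[int, int]:
    """Illustrative sketch of the RAID 5 rule described above.

    The stripe width (data devices plus one parity device) is the next
    lower "integer power of two plus one" that fits in the sub-device
    group; a single RGrp suffices only when the group size is exactly a
    power of two plus one, otherwise one rotating RGrp per device is used.
    """
    width = 1
    while (width * 2) + 1 <= num_devices:
        width *= 2
    stripe_width = width + 1          # includes the parity device
    num_rgrps = 1 if stripe_width == num_devices else num_devices
    return stripe_width, num_rgrps

# Examples matching the text: 8 devices -> (5, 8); 6 -> (5, 6); 4 -> (3, 4);
# 5 devices -> (5, 1), i.e. no rotation is needed.
```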
[0042] Each RGrp category striped across multiple devices is known as a cluster. Thus, in Table 5, the RGrpl sections together combine into a single cluster. Likewise, RGrp2 sections are another cluster, and so on. Thus, there are eight clusters in the eight disk sub-device group, six clusters in the six disk sub-device group, and four clusters in the four disk sub-device group.
[0043] A cluster is a configurable value that is used to manage user data within a sub-device group. It is not used for managing parity data for RAID 5 volumes. The minimum cluster size is 1MB and must be an integer power of two. The cluster size is set before any device groups or volumes are created, and that size is used by all device groups within RAID controller 130.
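The cluster-size constraint can be expressed as a small validation sketch, assuming the size is configured in megabytes. The helper name is hypothetical and not part of the specification.

```python
def validate_cluster_size_mb(cluster_size_mb: int) -> None:
    """Enforce the stated constraints: at least 1 MB and an integer power of two (in MB)."""
    is_power_of_two = cluster_size_mb > 0 and (cluster_size_mb & (cluster_size_mb - 1)) == 0
    if cluster_size_mb < 1 or not is_power_of_two:
        raise ValueError("cluster size must be an integer power of two and at least 1 MB")
```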
[0044] Table 6 shows an example of a cluster map that includes clusters of a single band in an eight disk sub-device group 320 that is configured for a RAID 0 level.
Table 6
[0045] The band is in an eight disk sub-device group 320 at RAID level 0 and includes n+10 clusters, which are all mapped to RGrp1. Since eight is an integer power of two, rotating RGrps are not required and, therefore, the band can use the same RGrp, in this case, RGrp1. For this configuration, only one RGrp is required to map all the clusters in the band.
[0046] Table 7 illustrates an example of a RAID 5 cluster map in which rotating redundancy is required, because the number of disks is not equal to an integer power of two plus one. Therefore, eight RGrps are required to map across all of the disks in sub-device group 320. This translates to eight clusters for that stripe. The RGrp rotation repeats for the next stripe, which translates into another group of eight clusters that has an offset of six (RGrp1 starts 6 stripes up from the first stripe). The third group of RGrps maps to a third set of eight clusters with an offset of eleven (RGrp1 starts again 11 stripes from the first stripe), and so on.
Table 7
[0047] Groups of eight clusters are mapped by eight RGrps, and each set is identified by a specific offset in the map. The top of the band has space available to map only six clusters, because a single RGrp must span five disks (in this example) and there is not enough space to map RGrp7 or RGrp8; the map therefore completes at the end of RGrp6, which spans the required five disks.
[0048] Figure 4 shows an example of a volume configuration 1100, which includes a volume band list 1110, which further includes volumes 330a and 330b, when a create volume command is received by SM 228 for a RAID level 0 logical volume 330a and a RAID level 5 logical volume 330b. These logical volumes 330 require only one sub-device group 320, as no mirroring data in a separate sub-device group 320 is required. SM 228 allocates bands in a sub-device group 320a (for example) to logical volume 330a and assigns the bands RAID level 0. When the bands are assigned to logical volume 330a, they move from a free state to an allocated state. The bands assigned to logical volume 330a are not required to be contiguous. SM 228 also allocates bands in sub-device group 320a (for example) to logical volume 330b and assigns the bands RAID level 5. When the bands are assigned to logical volume 330b, they likewise move from a free state to an allocated state. Figure 4 shows an example of SM 228 allocating bands in a sub-device group 320 to two different logical volumes 330: a RAID 0 logical volume 330a that is 6GB and a RAID 5 logical volume 330b that is 5.1GB.
[0049] Figure 5 shows a method 500 of allocating bands to volumes. Upon initialization, SM 228 divides each sub-device group into bands that may later be assigned to a RAID 0 or a RAID 5 volume. This process includes the following method steps:
Step 510: Calculating redundancy groups
[0050] In this step, SM 228 calculates the number of memory devices 150 in each sub-device group. Based on this value, SM 228 calculates the number of redundancy groups that are required to map the sub-device group for RAID 0 and again for RAID 5. For example, in an eight disk sub-device group, the number of redundancy groups that are required to map clusters for RAID 0 is one (integer power of two = true), and eight redundancy groups are required to map clusters for RAID 5 (integer power of two plus one = false). Method 500 proceeds to step 520.
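Combining the two rules illustrated earlier, step 510 can be sketched as a single helper. The name count_redundancy_groups and the dictionary return format are assumptions made for this example.

```python
def count_redundancy_groups(num_devices: int) -> dict[str, int]:
    """Illustrative sketch of step 510: how many redundancy groups are needed
    to map one band of a sub-device group, computed once for RAID 0 and once
    for RAID 5 (one RGrp when the device count matches the RAID level's
    natural width, otherwise one rotating RGrp per device)."""
    # RAID 0: natural width is the highest integer power of two <= num_devices.
    raid0_width = 1
    while raid0_width * 2 <= num_devices:
        raid0_width *= 2
    raid0_groups = 1 if raid0_width == num_devices else num_devices

    # RAID 5: natural width is the highest (integer power of two) + 1 <= num_devices.
    raid5_width = 2
    while (raid5_width - 1) * 2 + 1 <= num_devices:
        raid5_width = (raid5_width - 1) * 2 + 1
    raid5_groups = 1 if raid5_width == num_devices else num_devices

    return {"RAID 0": raid0_groups, "RAID 5": raid5_groups}

# Eight-disk sub-device group, as in the text: {"RAID 0": 1, "RAID 5": 8}.
```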
Step 520: Calculating common band widths for RAID 0 and RAID 5
[0051] In this step, SM 228 compares the RAID 0 redundancy group map to the RAID 5 redundancy group map for a particular sub-device group and determines a common 1 MB boundary where a full redundancy group rotation ends. This marks a band boundary, where either RAID 0 or RAID 5 may later be assigned to the band. Method 500 proceeds to step 530.
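The specification does not spell out the arithmetic for finding this common boundary, so the sketch below is only one plausible reading: it assumes each RAID level's full redundancy-group rotation occupies a whole number of megabytes per device, in which case the first boundary where both rotations end together is their least common multiple. The function name and inputs are hypothetical.

```python
from math import lcm

def common_band_boundary_mb(raid0_rotation_mb: int, raid5_rotation_mb: int) -> int:
    """Hypothetical sketch of step 520: the first 1 MB boundary at which a full
    RAID 0 rotation and a full RAID 5 rotation both end, which can then serve
    as a band boundary compatible with either RAID level."""
    return lcm(raid0_rotation_mb, raid5_rotation_mb)

# Purely illustrative numbers: if a RAID 0 rotation spans 1 MB per device and a
# RAID 5 rotation spans 5 MB per device, band boundaries fall every 5 MB.
```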
Step 530: Calculating cluster maps for each band
[0052] In this step, SM 228 calculates the cluster maps for each of the bands, using the band boundaries defined in the previous steps for each sub-device group and the redundancy groups calculated for each band for both RAID 0 and RAID 5. Cluster maps that use rotating redundancy (Table 7) are in a slightly different format from cluster maps in which a single redundancy group maps all of the clusters in a band (Table 6). Method 500 proceeds to step 540.
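A minimal sketch of the two cluster-map formats follows. It only tags clusters with RGrp numbers and does not model the on-disk offsets of Table 7; the helper name is hypothetical.

```python
def cluster_map_for_band(num_clusters: int, num_rgrps: int) -> list[int]:
    """Illustrative sketch of step 530: tag each cluster in a band with the
    redundancy group (by number) that maps it.

    With a single redundancy group (as in Table 6) every cluster maps to
    RGrp 1; with rotating redundancy (as in Table 7) the clusters cycle
    through RGrp 1..num_rgrps, one full rotation at a time.
    """
    if num_rgrps == 1:
        return [1] * num_clusters
    return [(i % num_rgrps) + 1 for i in range(num_clusters)]

# Eight-disk examples from the text:
# RAID 0 band -> cluster_map_for_band(16, 1) maps every cluster to RGrp1.
# RAID 5 band -> cluster_map_for_band(16, 8) cycles RGrp1 through RGrp8.
```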
Step 540: Are there any free bands?
[0053] In this decision step, SM 228 receives a request for a new volume creation, including information about the size of the requested volume, the desired sub-device group, and its RAID level. SM 228 analyzes the sub-device group for bands that are free and bypasses bands that are already allocated to other volumes. SM 228 checks whether there are any free bands left for allocation in the requested sub-device group. If yes, method 500 proceeds to step 550; if no, method 500 proceeds to step 570.
Step 550: Allocating a band to a volume
[0054] In this step, SM 228 allocates to the new volume the first available band that meets the requirements for the requested volume and assigns the requested RAID type to the band. SM 228 continues to scan for free bands until the entire requested volume size has been satisfied with enough allocated bands from the sub-device groups. However, if there are not enough free bands to allocate to the new volume, SM 228 generates a message to the system administrator when the space allocated to the volume begins to reach capacity and informs the system administrator that data should be migrated to other volumes or that more memory devices 150 should be added to the sub-device group. Method 500 proceeds to step 560.
Step 560: Bringing volume online
[0055] In this step, SM 228 sets the state of the allocated bands from "free" to "allocated" and brings the new volume online by allowing host access. Method 500 ends.
Step 570: Generating volume creation error
[0056] In this step, SM 228 generates an error message to the system administrator that indicates that there are no free bands in the desired sub-device group with which to allocate the newly requested volume. Method 500 ends.
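Steps 540 through 570 can be summarized in a short allocation sketch. The data structures, names, and band-sizing arithmetic below are assumptions made for illustration; the capacity warning described in step 550 is omitted for brevity.

```python
import math

class NoFreeBandsError(Exception):
    """Raised when the requested sub-device group has no free bands (step 570)."""

def allocate_volume(bands, requested_size_gb, raid_level, band_size_gb):
    """Hypothetical sketch of steps 540-570: scan the requested sub-device
    group for free bands, allocate enough of them to satisfy the requested
    volume size and RAID level, and mark them allocated so the volume can be
    brought online."""
    needed = math.ceil(requested_size_gb / band_size_gb)
    free_bands = [band for band in bands if band["state"] == "free"]   # step 540
    if not free_bands:
        raise NoFreeBandsError("no free bands in the requested sub-device group")
    allocated = free_bands[:needed]                                    # step 550
    for band in allocated:
        band["state"] = "allocated"                                    # step 560
        band["raid_level"] = raid_level
    return allocated

# Example: a 6 GB RAID 0 volume built from 1 GB bands.
bands = [{"state": "free"} for _ in range(10)]
volume_bands = allocate_volume(bands, requested_size_gb=6, raid_level=0, band_size_gb=1)
```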
[0057] By defining bands and creating cluster maps for each RAID type during initialization, rather than when a volume request is made, the RAID controller's processor has more throughput available for other system resources, thereby increasing overall system performance over that of conventional networked storage systems. This method of allocation also allows more user flexibility in designing the system for various data storage needs, because the pre-mapped bands are assigned to a new volume as defined by the user, rather than by a RAID controller that allocates volumes according to internal algorithms with little or no user input. Finally, this allocation method allows more memory device capacity to be utilized, because the bands align on the nearest megabyte boundaries and the way the clusters are laid out leaves very little unused space on the devices. The only space that is not available to the user is the Meta Data area and a portion at the end of the device. The unmapped space at the end of the device is used for reassigning clusters during error recovery.
[0058] Although the present invention has been described in relation to particular embodiments thereof, many other variations and modifications and other uses will become apparent to those skilled in the art. Therefore, the present invention is to be limited not by the specific disclosure herein, but only by the appended claims.

Claims

What is claimed is:
1. A method of allocating physical memory from a group of N memory devices to logical volumes, the method comprising:
partitioning the group of N memory devices into a plurality of bands, each of the group of N memory devices sharing a portion of each of the plurality of bands;
generating a cluster map for each of the plurality of bands, each cluster map indicating the physical address for each of a plurality of clusters, each of the plurality of clusters equally distributed over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands, each of the N memory devices sharing an approximately equal number of clusters;
determining if a band is available; and
allocating an available band to a logical volume.
2. The method of claim 1, wherein the boundaries between adjacent bands of the plurality of bands are determined such that each band is compatible with multiple levels of redundancy.
3. The method of claim 2, wherein the partitioning step further comprises generating a redundancy group map for each of multiple levels of redundancy, each redundancy group map indicating the location of redundancy groups on the N memory devices, wherein band boundaries between adjacent bands of the plurality of bands are determined at shared boundaries of the redundancy group maps.
4. The method of claim 3, wherein a redundancy group map for a RAID 0 system is determined by:
setting a RAID 0 redundancy number equal to the highest integer power of two divisible into N; calculating a number of RAID 0 redundancy groups needed in a RAID 0 architecture for the group of N memory devices, the number of RAID 0 redundancy groups being equal to one if N is equal to the RAID 0 redundancy number, and the number of RAID 0 redundancy groups being equal to N if N is not equal to the RAID 0 redundancy number; and
generating a RAID 0 redundancy group map, wherein each RAID 0 redundancy group is distributed among a plurality of the N memory devices equal to the RAID 0 redundancy number, each of the N memory devices hosting an equal number of RAID 0 redundancy groups.
5. The method of claim 4, wherein the step of generating a cluster map further comprises mapping each RAID 0 redundancy group to a cluster.
6. The method of claim 3, wherein a redundancy group map for a RAID 5 system is determined by:
setting a RAID 5 redundancy number equal to the highest integer power of two plus one divisible into N;
calculating a number of RAID 5 redundancy groups needed in a RAID 5 architecture for the group of N memory devices, the number of RAID 5 redundancy groups being equal to one if N is equal to the RAID 5 redundancy number, and the number of RAID 5 redundancy groups being equal to N if N is not equal to the RAID 5 redundancy number; and
generating a RAID 5 redundancy group map, wherein each RAID 5 redundancy group is distributed among a plurality of the N memory devices equal to the RAID 5 redundancy number, each of the N memory devices hosting an equal number of RAID 5 redundancy groups.
7. The method of claim 6, wherein the step of generating a cluster map further comprises mapping each RAID 5 redundancy group to a cluster.
8. The method of claim 1, wherein each of the N memory devices in the group of N memory devices belongs to a same storage class.
9. A computer program for allocating physical memory from a group of N memory devices to logical volumes, the program configured to:
partition the group of N memory devices into a plurality of bands, each of the group of N memory devices sharing a portion of each of the plurality of bands;
generate a cluster map for each of the plurality of bands, each cluster map indicating the physical address for each of a plurality of clusters, each of the plurality of clusters equally distributed over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands, each of the N memory devices sharing an approximately equal number of clusters;
determine if a band is available; and
allocate an available band to a logical volume.
10. The program of claim 9, wherein the boundaries between adjacent bands of the plurality of bands are determined such that each band is compatible with multiple levels of redundancy.
11. The program of claim 10, wherein the partitioning step further comprises generating a redundancy group map for each of multiple levels of redundancy, each redundancy group map indicating the location of redundancy groups on the N memory devices, wherein band boundaries between adjacent bands of the plurality of bands are determined at shared boundaries of the redundancy group maps.
12. The program of claim 11, wherein a redundancy group map for a RAID 0 system is determined by:
setting a RAID 0 redundancy number equal to the highest integer power of two divisible into N; calculating a number of RAID 0 redundancy groups needed in a RAID 0 architecture for the group of N memory devices, the number of RAID 0 redundancy groups being equal to one if N is equal to the RAID 0 redundancy number, and the number of RAID 0 redundancy groups being equal to N if N is not equal to the RAID 0 redundancy number; and
generating a RAID 0 redundancy group map, wherein each RAID 0 redundancy group is distributed among a plurality of the N memory devices equal to the RAID 0 redundancy number, each of the N memory devices hosting an equal number of RAID 0 redundancy groups.
13. The program of claim 12, wherein the step of generating a cluster map further comprises mapping each RAID 0 redundancy group to a cluster.
14. The program of claim 11, wherein a redundancy group map for a RAID 5 system is determined by:
setting a RAID 5 redundancy number equal to the highest integer power of two plus one divisible into N;
calculating a number of RAID 5 redundancy groups needed in a RAID 5 architecture for the group of N memory devices, the number of RAID 5 redundancy groups being equal to one if N is equal to the RAID 5 redundancy number, and the number of RAID 5 redundancy groups being equal to N if N is not equal to the RAID 5 redundancy number; and
generating a RAID 5 redundancy group map, wherein each RAID 5 redundancy group is distributed among a plurality of the N memory devices equal to the RAID 5 redundancy number, each of the N memory devices hosting an equal number of RAID 5 redundancy groups.
15. The program of claim 14, wherein the step of generating a cluster map further comprises mapping each RAID 5 redundancy group to a cluster.
16. The program of claim 9, wherein the program is further configured to allow a user to classify all N memory devices in the group of N memory devices as a same storage class.
17. The program of claim 9, wherein the program is further configured to generate an error condition if no available bands are found.
18. A system for allocating physical memory to logical volumes, comprising:
a group of N memory devices, partitioned into a plurality of bands, each of the group of N memory devices sharing a portion of each of the plurality of bands;
a cluster map for each of the plurality of bands, each cluster map indicating the physical address for each of a plurality of clusters, each of the plurality of clusters equally distributed over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands, each of the N memory devices sharing an approximately equal number of clusters;
an array controller configured to determine if a band from the plurality of bands is available and to allocate an available band to a logical volume.
19. The system of claim 18, wherein the boundaries between adjacent bands of the plurality of bands are arranged such that each band is compatible with multiple levels of redundancy.
20. The system of claim 19, further comprising a redundancy group map for each of multiple levels of redundancy, each redundancy group map indicating the location of redundancy groups on the N memory devices, wherein band boundaries between adjacent bands of the plurality of bands are located at shared boundaries of the redundancy group maps.
21. The system of claim 20, wherein a redundancy group map for a RAID 0 system comprises: a RAID 0 redundancy number equal to the highest integer power of two divisible into N;
a number of RAID 0 redundancy groups, the number of RAID 0 redundancy groups being equal to one if N is equal to the RAID 0 redundancy number, and the number of RAID 0 redundancy groups being equal to N if N is not equal to the RAID 0 redundancy number; and
a RAID 0 redundancy group distribution, wherein each RAID 0 redundancy group is distributed among a plurality of the N memory devices equal to the RAID 0 redundancy number, each of the N memory devices hosting an equal number of RAID 0 redundancy groups.
22. The system of claim 21, wherein each RAID 0 redundancy group is mapped to a cluster.
23. The system of claim 20, wherein a redundancy group map for a RAID 5 system comprises:
a RAID 5 redundancy number equal to the highest integer power of two plus one divisible into N;
a number of RAID 5 redundancy groups, the number of RAID 5 redundancy groups being equal to one if N is equal to the RAID 5 redundancy number, and the number of RAID 5 redundancy groups being equal to N if N is not equal to the RAID 5 redundancy number; and
a RAID 5 redundancy group distribution, wherein each RAID 5 redundancy group is distributed among a plurality of the N memory devices equal to the RAID 5 redundancy number, each of the N memory devices hosting an equal number of RAID 5 redundancy groups.
24. The system of claim 23, wherein each RAID 5 redundancy group is mapped to a cluster.
25. The system of claim 18, wherein all N memory devices in the group of N memory devices are classified as a same storage class.
EP05800827A 2004-09-22 2005-09-22 System and method for flexible physical-to-logical mapping in raid arrays Withdrawn EP1828905A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US61180204P 2004-09-22 2004-09-22
PCT/US2005/034210 WO2006036810A2 (en) 2004-09-22 2005-09-22 System and method for flexible physical-to-logical mapping in raid arrays

Publications (2)

Publication Number Publication Date
EP1828905A2 EP1828905A2 (en) 2007-09-05
EP1828905A4 true EP1828905A4 (en) 2009-05-06

Family

ID=36119458

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05800827A Withdrawn EP1828905A4 (en) 2004-09-22 2005-09-22 System and method for flexible physical-to-logical mapping in raid arrays

Country Status (2)

Country Link
EP (1) EP1828905A4 (en)
WO (1) WO2006036810A2 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001031447A1 (en) * 1999-10-28 2001-05-03 Sun Microsystems, Inc. Load balancing configuration for storage arrays employing mirroring and striping

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5636356A (en) * 1992-09-09 1997-06-03 Hitachi, Ltd. Disk array with original data stored in one disk drive and duplexed data distributed and stored in different disk drives
JP3344907B2 (en) * 1996-11-01 2002-11-18 富士通株式会社 RAID device and logical volume access control method
KR100392382B1 (en) * 2001-07-27 2003-07-23 한국전자통신연구원 Method of The Logical Volume Manager supporting Dynamic Online resizing and Software RAID
US7000087B2 (en) * 2001-11-07 2006-02-14 International Business Machines Corporation Programmatically pre-selecting specific physical memory blocks to allocate to an executing application
US7254813B2 (en) * 2002-03-21 2007-08-07 Network Appliance, Inc. Method and apparatus for resource allocation in a raid system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001031447A1 (en) * 1999-10-28 2001-05-03 Sun Microsystems, Inc. Load balancing configuration for storage arrays employing mirroring and striping

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIN H ET AL: "Stripped mirroring RAID architecture", JOURNAL OF SYSTEMS ARCHITECTURE, ELSEVIER SCIENCE PUBLISHERS BV., AMSTERDAM, NL, vol. 46, no. 6, 1 April 2000 (2000-04-01), pages 543 - 550, XP004190490, ISSN: 1383-7621 *

Also Published As

Publication number Publication date
WO2006036810A3 (en) 2006-07-06
WO2006036810A2 (en) 2006-04-06
EP1828905A2 (en) 2007-09-05

Similar Documents

Publication Publication Date Title
US7694072B2 (en) System and method for flexible physical-logical mapping raid arrays
US10782882B1 (en) Data fingerprint distribution on a data storage system
EP1810173B1 (en) System and method for configuring memory devices for use in a network
US10073621B1 (en) Managing storage device mappings in storage systems
US5758050A (en) Reconfigurable data storage system
US6895467B2 (en) System and method for atomizing storage
CN101414245B (en) Storage apparatus and data storage method using the same
US7519745B2 (en) Computer system, control apparatus, storage system and computer device
US7984258B2 (en) Distributed storage system with global sparing
US9547446B2 (en) Fine-grained control of data placement
US8972656B1 (en) Managing accesses to active-active mapped logical volumes
US20100049931A1 (en) Copying Logical Disk Mappings Between Arrays
US8972657B1 (en) Managing active—active mapped logical volumes
US7966449B2 (en) Distributed storage system with global replication
US11436113B2 (en) Method and system for maintaining storage device failure tolerance in a composable infrastructure
US11797387B2 (en) RAID stripe allocation based on memory device health
US11201788B2 (en) Distributed computing system and resource allocation method
US9848042B1 (en) System and method for data migration between high performance computing architectures and de-clustered RAID data storage system with automatic data redistribution
US8949526B1 (en) Reserving storage space in data storage systems
WO2006036810A2 (en) System and method for flexible physical-to-logical mapping in raid arrays
US20200068042A1 (en) Methods for managing workloads in a storage system and devices thereof
US11630596B1 (en) Sharing spare capacity of disks with multiple sizes to parallelize RAID rebuild
US20070299957A1 (en) Method and System for Classifying Networked Devices
US11868612B1 (en) Managing storage operations in storage systems

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070413

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20090403

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20090702