WO2006036810A2 - System and method for mapping physical and logical structures in RAID arrays - Google Patents

System and method for mapping physical and logical structures in RAID arrays

Info

Publication number
WO2006036810A2
WO2006036810A2 (PCT/US2005/034210)
Authority
WO
WIPO (PCT)
Prior art keywords
raid
redundancy
equal
group
memory devices
Prior art date
Application number
PCT/US2005/034210
Other languages
English (en)
Other versions
WO2006036810A3 (fr)
Inventor
Paul Nehse
Original Assignee
Xyratex Technology Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xyratex Technology Limited
Priority to US11/662,745 (US7694072B2)
Priority to EP05800827A (EP1828905A4)
Publication of WO2006036810A2
Publication of WO2006036810A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0631Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0689Disk arrays, e.g. RAID, JBOD

Definitions

  • the present invention relates to allocation of physical resources for logical volumes in redundant array of inexpensive disks (RAID) arrays. Specifically, a system and method for assigning physical address space to logical data blocks is presented, wherein data space availability and system management flexibility are increased.
  • RAID redundant arrays of inexpensive disk
  • RAID redundant arrays of inexpensive disk arrays
  • RAID architecture was first documented in 1987 when Patterson, Gibson, and Katz published a paper entitled, "A Case for Redundant Arrays of Inexpensive Disks (RAID)" (University of California, Berkeley).
  • RAID architecture combines multiple small, inexpensive disk drives into an array of disk drives that yields performance exceeding that of a Single Large Expensive Drive (SLED).
  • SLED Single Large Expensive Drive
  • the array of drives appears as a single logical storage unit (LSU) or drive.
  • RAID-0 array: a non-redundant array of disk drives
  • RAID controllers provide data integrity through redundant data mechanisms, high speed through streamlined algorithms, and accessibility to stored data for users and administrators.
  • a technique that is fundamental to the various RAID levels is "striping," a method of concatenating multiple drives into one logical storage unit. Striping involves partitioning each drive's storage space into stripes, which may be as small as one sector (512 bytes) or as large as several megabytes. These stripes are then interleaved in round-robin style, so that the combined space is composed alternately of stripes from each drive. In effect, the storage space of the drives is shuffled like a deck of cards.
  • the choice of stripe size is application dependent and affects the real-time performance of data acquisition and storage in mass storage networks.
  • With small stripes (typically one 512-byte sector in length), each record spans all the drives in the array, each drive storing part of the data from the record. This causes long record accesses to be performed faster, because the data transfer occurs in parallel on multiple drives.
  • Applications such as on-demand video/audio, medical imaging, and data acquisition, which utilize long record accesses, will achieve optimum performance with small stripe arrays.
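  • The round-robin interleaving described above can be illustrated with a short sketch. The Python fragment below is not from the patent; the function name and arithmetic are assumptions used only to show how a logical block address resolves to a drive and an on-drive offset under simple RAID 0 striping.

```python
def locate_block(lba, stripe_size_blocks, num_drives):
    """Map a logical block address to (drive index, block offset on that drive)
    under round-robin (RAID 0) striping. Illustrative only; the patent does not
    prescribe this exact arithmetic."""
    stripe_index = lba // stripe_size_blocks        # which stripe the block falls in
    offset_in_stripe = lba % stripe_size_blocks     # position inside that stripe
    drive = stripe_index % num_drives               # stripes are interleaved round-robin
    stripes_on_drive = stripe_index // num_drives   # stripes preceding it on that drive
    return drive, stripes_on_drive * stripe_size_blocks + offset_in_stripe

# Example: 4 drives, 1-sector (512-byte) stripes -> consecutive sectors land on consecutive drives.
print(locate_block(lba=5, stripe_size_blocks=1, num_drives=4))  # (1, 1)
```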
  • FIG. 1 is a block diagram of a conventional networked storage system 100.
  • Conventional networked storage system 100 includes a plurality of hosts 110A through 110N, where 'N' is any integer value and is not representative of any other value 'N' described herein.
  • Hosts 110 are connected to a communications means 120 that is further coupled via host ports to a plurality of RAID controllers 130A, and 130B through 130N, where 'N' is any integer value and is not representative of any other value 'N' described herein.
  • RAID controllers 130 are connected through device ports to a second communication means 140, which is further coupled to a plurality of memory devices 150, including memory device 150A through 150N, where 'N' is any integer value and is not representative of any other value 'N' described herein.
  • Hosts 110 are representative of any computer systems or terminals that are capable of communicating over a network.
  • Communication means 120 is representative of any type of electronic network that uses a protocol, such as Ethernet.
  • RAID controllers 130 are representative of any storage controller devices that process commands from hosts 110 and, based on those commands, control memory devices 150. RAID controllers 130 also provide data redundancy, based on system administrator programmed RAID levels. Redundancy methods include data mirroring, parity generation, and/or data regeneration from parity after a device failure.
  • Communication means 140 is any type of storage controller network, such as iSCSI or fibre channel.
  • Memory devices 150 may be any type of storage device, such as, for example, tape drives, disk drives, non-volatile memory, or solid state devices. Although most RAID architectures use disk drives as the main storage devices, it should be clear to one skilled in the art that the invention embodiments described herein apply to any type of memory devices.
  • In operation, host 110A, for example, generates a read or a write request for a specific volume (e.g., volume 1), to which it has been assigned access rights.
  • the request is sent through communication means 120 to the host ports of RAID controllers 130.
  • the command is stored in local cache in RAID controller 130B, for example, because RAID controller 130B is programmed to respond to any commands that request volume 1 access.
  • RAID controller 130B processes the request from host 110A and determines, from mapping tables, the first physical memory device 150 address from which to read data or to write new data.
  • If volume 1 is a RAID 5 volume and the command is a write request, RAID controller 130B generates new parity, stores the new parity to a parity memory device 150 via communication means 140, sends a "done" signal to host 110A via communication means 120, and writes the new host 110A data through communication means 140 to corresponding memory devices 150.
  • data is less susceptible to loss from memory device 150 failures and, generally, can be restored from parity and/or functional memory devices 150, in the event of a failure.
  • RAID controllers 130 also have the ability to take over control for a failed RAID controller 130, such that system performance is unaffected or the effects are limited.
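  • As a concrete illustration of the parity-based redundancy just described, the sketch below computes RAID 5 XOR parity and regenerates a missing block from the survivors. It is an assumption-laden example (fixed-size byte blocks, no striping detail), not the controller's actual implementation.

```python
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_parity(data_blocks) -> bytes:
    """XOR parity over equal-sized data blocks, the usual RAID 5 redundancy
    mechanism. Sketch only; block size and device layout are assumptions."""
    return reduce(xor_blocks, data_blocks)

def rebuild_missing(surviving_blocks) -> bytes:
    # Any single missing block (data or parity) is the XOR of all survivors.
    return reduce(xor_blocks, surviving_blocks)

d = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]
p = raid5_parity(d)                                 # parity written to the parity device
assert rebuild_missing([d[0], d[2], p]) == d[1]     # regenerate a failed device's block
```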
  • API Programming Interface
  • Typically, Original Equipment Manufacturers (OEMs) bundle RAID networks and sell these memory systems to end users for network storage. OEMs bear the burden of customizing a RAID network and tune the network performance through an API. However, the degree to which a RAID system can be optimized through the API is limited; the API does not adequately handle the unique performance requirements of various dissimilar data storage applications. Additionally, the API does not provide an easily modifiable and secure format for proprietary OEM RAID configurations.
  • What is needed, therefore, is a RAID controller that can be adequately programmed for unique performance and data storage requirements. Furthermore, the RAID controller configuration should be easily modifiable by a user or system administrator. The general functions of the RAID controller, such as volume allocation, should be optimized to use fewer processing resources, in order to increase overall system performance. Finally, the RAID controller needs to allocate physical storage space to logical volumes in such a way that the majority of the storage capacity is utilized.
  • RAID controller with a mapping function for allocating physical disk space to logical volumes is described in U.S. Patent Application Publication No. 2003/0028727.
  • the '727 application entitled, "RAID Apparatus for Storing a Plurality of Same Logical Volumes on Different Disk Units," describes a RAID apparatus that has a plurality of same logical volumes allocated on a real volume.
  • the real volume is designed so that a plurality of same logical volumes are respectively allocated on different physical disk units and a combination of a plurality of logical volumes allocated on each physical disk unit differs from one physical disk unit to another. This structure prevents uneven loading on the real volume from occurring because of uneven loads on the logical volumes.
  • the '727 application identifies the problem of physical disk device load balancing in a RAID architecture and offers a solution: allocating physical disk space such that equivalent logical volumes reside on separate physical disks for load balancing optimization.
  • the '727 application fails to provide an effective means to allocate volumes to physical storage devices, such that there is greater flexibility in system design.
  • the '727 application does not provide a means for mapping logical volumes to physical storage space with fewer processing cycle requirements.
  • the '727 application does not provide a means for utilizing a greater amount of available space of each storage device, as compared to conventional methods.
  • the present invention provides a method and a computer program for allocating physical memory from a group of N memory devices to logical volumes.
  • the method and program include the step of partitioning the group of N memory devices into a plurality of bands, each of the group of N memory devices sharing a portion of each of the plurality of bands.
  • a cluster map for each of the plurality of bands is generated.
  • the cluster maps indicate the physical address for each of a plurality of clusters.
  • Each of the plurality of clusters is distributed equally over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands.
  • Each of the N memory devices shares an approximately equal number of clusters. Available bands are determined and are allocated to a logical volume.
  • the present invention also provides a system for allocating physical memory to logical volumes.
  • the system includes a group of N memory devices partitioned into a plurality of bands. Each of the group of N memory devices share a portion of each of the plurality of bands.
  • Each of the plurality of bands has a cluster map. Each cluster map indicates the physical address for each of a plurality of clusters. Each of the plurality of clusters is equally distributed over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands.
  • Each of the N memory devices shares an approximately equal number of clusters.
  • An array controller is also configured to determine if a band from the plurality of bands is available and to allocate an available band to a logical volume.
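  • To make the entities in this summary concrete, the following sketch models a band and its cluster map as plain data structures. The field names and types are illustrative assumptions, not structures defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Cluster:
    rgrp: int                  # redundancy group that maps this cluster
    devices: List[int]         # the two or more memory devices the cluster spans
    physical_address: int      # starting physical address on those devices

@dataclass
class Band:
    cluster_map: List[Cluster] = field(default_factory=list)
    state: str = "free"        # "free" until the band is allocated to a logical volume
    raid_level: Optional[int] = None   # 0 or 5, assigned when the band is allocated

# A sub-device group of N memory devices is partitioned into bands; every device shares
# a portion of every band, and each band's cluster map records where its clusters live.
```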
  • Figure 1 is a block diagram of a conventional networked storage system
  • FIG. 2 is a block diagram of a RAID controller system, according to an exemplary embodiment of the invention.
  • Figure 3 shows a group of physical devices that have been grouped into a device group and further grouped into sub-device groups, according to an exemplary embodiment of the invention
  • Figure 4 shows an example of a volume configuration, according to an exemplary embodiment of the invention.
  • Figure 5 is a flow diagram of a method of allocating bands to volumes, according to an exemplary embodiment of the invention.
  • the present invention is a method of allocating physical storage space to logical unit numbers (LUNs) or volumes that use a RAID controller.
  • the method provides greater flexibility to the system administrator through the RAID controller, by systematically assigning various portions of physical space to single or multiple logical device groups.
  • Each device group has specific rules for data usage and allocation.
  • Each device group is further categorized into single or multiple sub-device groups.
  • a special algorithm in the RAID controller arranges physical storage device space into logical units, or bands, that are readily allocated with little metadata overhead per system administrator commands.
  • the physical space is allocated to logical volumes, according to system administrator specifications.
  • FIG. 2 is a block diagram of a RAID controller system 200.
  • RAID controller system 200 includes RAID controllers 130 and a general purpose personal computer (PC) 210.
  • PC 210 further includes a graphical user interface (GUI) 212.
  • RAID controllers 130 further include software applications 220, an operating system 240, and RAID controller hardware 250.
  • Software applications 220 further include a common information module object manager (CIMOM) 222, a software application layer (SAL) 224, a logic library layer (LAL) 226, a system manager (SM) 228, a software watchdog (SWD) 230, a persistent data manager (PDM) 232, an event manager (EM) 234, and a battery backup (BBU) 236.
  • CIMOM common information module object manager
  • SAL software application layer
  • LAL logic library layer
  • SWD software watchdog
  • PDM persistent data manager
  • EM event manager
  • BBU battery backup
  • GUI 212 is a software application used to input personality attributes for RAID controllers 130.
  • GUI 212 runs on PC 210.
  • RAID controllers 130 are representative of RAID storage controller devices that process commands from hosts 110 and, based on those commands, control memory devices 150.
  • RAID controllers 130 are an exemplary embodiment of the invention; however, other implementations of controllers may be envisioned here by those skilled in the art.
  • RAID controllers 130 provide data redundancy, based on system-administrator-programmed RAID levels. This includes data mirroring, parity generation, and/or data regeneration from parity after a device failure.
  • RAID controller hardware 250 is the physical processor platform of RAID controllers 130 that executes all RAID controller software applications 220 and consists of a microprocessor, memory, and all other electronic devices necessary for RAID control.
  • Operating system 240 is an industry-standard software platform, such as Linux, for example, upon which software applications 220 can run. Operating system 240 delivers other benefits to RAID controllers 130. Operating system 240 contains utilities, such as a file system, that provide a way for RAID controllers 130 to store and transfer files.
  • Software applications 220 include algorithms and logic necessary for the RAID controllers 130 and are divided into those needed for initialization and those that operate at run- time.
  • Initialization software applications 220 consist of the following software functional blocks: CIMOM 222, which is a module that instantiates all objects in software applications 220 with the personality attributes entered; SAL 224, which is the application layer upon which the run-time modules execute; and LAL 226, a library of low-level hardware commands used by a RAID transaction processor.
  • Software applications 220 that operate at run-time include the following software functional blocks: system manager 228, a module that carries out the run-time executive; SWD 230, a module that provides software supervision function for fault management; PDM 232, a module that handles the personality data within software applications 220; EM 234, a task scheduler that launches software applications 220 under conditional execution; and BBU 236, a module that handles power bus management for battery backup.
  • SM 228 is responsible for allocating physical space to newly requested volumes and adding physical space to existing volumes when new devices are added to the system.
  • SM 228 takes commands from the system administrator (e.g., assigning new volumes or creating new sub-device groups) and executes those commands. Commands that cannot be processed (because of lack of space available, for example) are returned as error messages to the system administrator.
  • the volume allocation function of SM 228 is described in more detail in Figure 4.
  • Figure 3 shows an example of a group of physical devices 300 that have been grouped into a device group 310 and further grouped into sub-device groups 320a, 320b, and 320c by a system administrator through SM 228.
  • a device group 310 may be assigned to multiple logical volumes 330, which include a plurality of LUNs 330a - 330n that have varying sizes and RAID levels, where 'n' is any integer value and is not representative of any other value 'n' described herein.
  • the maximum number of logical volumes 330 assigned to device group 310 depends on the size of logical volumes 330 and the number of sub-device groups 320 within device group 310.
  • a sub-device group 320 may include from one to sixteen physical devices; however, all devices must be the same class of storage.
  • the class of storage is defined by the system administrator. It may be based on the types of devices in sub-device group 320, such as fibre channel or serial ATA, or based on physical characteristics, such as rotation speed or size, or based on logical considerations, such as function, department, or user.
  • SM 228 defaults all physical devices to the same class of storage. After installation, the system administrator may define new classes of storage.
  • SM 228 further divides each storage sub-device group 320 into bands, which are the smallest unit of logical storage assigned to a logical volume 330. By categorizing the storage area in such a manner, the granularity of each storage unit allows more physical space to be utilized.
  • Table 1 shows an example of bands that stripe across all the devices within a sub-device group 320. There are n bands in sub-device group 320, depending on the capacity of each device.
  • Each band may be assigned to RAID 0 or RAID 5.
  • a band may be assigned to contain master volume data, mirror volume data, or snap volume data, as defined below.
  • the master volume data band format is used when space is allocated to a master volume (e.g., volume 330a).
  • the master volume may include one or more bands; however, all bands in that volume must be in the same sub-device group 320 (e.g., 320a).
  • the amount of user space within a band varies, depending on the RAID level.
  • the data band may be configured for either RAID level 0 or 5.
  • a mirror volume may include one or more bands, but all mirror bands associated with a master volume must be in a different sub-device group (e.g., sub-device group 320b) than the bands used for the master volume.
  • the amount of user space within a band varies, depending on the RAID level.
  • the mirror band may be configured for either RAID level 0 or 5 and is not required to be the same RAID level as the master volume.
  • the snap band format is used when space is allocated for a point in time copy of a master volume.
  • the snap volume may include one or more bands, and all snap bands associated with a master volume may be in the same or different sub-device group.
  • the amount of user space within a band varies, depending on the RAID level.
  • the snap band may be configured for either RAID level 0 or 5 and is not required to be the same RAID level as the master volume.
  • Bands are expanded through the addition of devices to the sub-device group in which the bands reside. At any time after sub-device group 320a is created, it may be expanded through addition of one or more devices to sub-device group 320a. After the devices are added, SM 228 migrates the existing bands to use the added devices. When the migration is complete, sub-device group 320a will include additional bands that may then be allocated to new or existing logical volumes 330.
  • Table 2 shows an example of a redundancy group (RGrp) mapping for various numbers (integer power of two only) of devices in a sub-device group for RAID 0 (no parity device is required) for a single band.
  • Each band is further sub-divided into a plurality of RGrps, depending on the type of RAID level defined by the system administrator and the number of devices within a sub-device group 320.
  • RGrp describes the RAID level, stripe size, number of devices, device path used, and location of the data within sub-device group 320a.
  • the number of RGrps assigned to sub-device group 320a must be an integer power of two for RAID 0 and an integer power of two plus one additional device for RAID 5 (for parity data).
  • Table 3 shows an example of an RGrp mapping of RGrps for integer power of two plus one sub-device groups 320 for RAID 5 (for parity data) for a single band in sub-device group 320b.
  • the number of RGrps assigned to sub-device group 320b must be an integer power of two plus one additional device for RAID 5 (for parity data).
  • Table 4 shows an example of an RGrp mapping of RGrps for a RAID 0 band in sub-device group 320b that does not include an integer power of two number of devices.
  • rotating RGrps (RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, RGrp6, and RGrp7) are used to map the band.
  • the number of RGrps required to map the entire band is equal to the number of devices within any sub-device group 320.
  • In Table 4, there are seven RGrps required to map a RAID 0 band in sub-device group 320 that includes seven devices.
  • Each RGrp is striped across the devices, such that there is an integer power of two number of devices (e.g., 2, 4, 8, and so on, for RAID 0) with a specific RGrp and no device has two stripes of the same RGrp.
  • the seven disk sub-device group 320 in Table 4 cannot use eight devices for rotating a specific RGrp, because Device 1 would contain two stripes of RGrp1.
  • the next available choice is four (integer power of 2), which satisfies the RGrp assignment rules by rotating onto four devices (RGrp1) before beginning a new RGrp (RGrp2).
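  • One plausible way to realize the rotation just described for Table 4 is sketched below: RGrp1 fills the first four device slots, RGrp2 the next four (wrapping into the next stripe), and so on until all seven RGrps are placed. This is an illustrative reading of the text, not the patent's actual mapping code.

```python
def rotating_rgrp_layout(num_devices: int, rgrp_width: int):
    """Lay rotating RGrps across a sub-device group in round-robin order, one
    plausible interpretation of the rotation in Tables 4 and 5. The returned
    dict maps (stripe, device) -> RGrp number. Sketch only."""
    layout = {}
    num_rgrps = num_devices           # one rotating RGrp per device in the group
    slot = 0
    for rgrp in range(1, num_rgrps + 1):
        for _ in range(rgrp_width):
            stripe, device = divmod(slot, num_devices)
            layout[(stripe, device)] = rgrp
            slot += 1
    return layout

# Seven-device RAID 0 band (Table 4): RGrp1 rotates onto four devices, then RGrp2 begins.
layout = rotating_rgrp_layout(num_devices=7, rgrp_width=4)
assert [layout[(0, d)] for d in range(7)] == [1, 1, 1, 1, 2, 2, 2]
assert len({layout[(s, 0)] for s in range(4)}) == 4   # no device holds two stripes of the same RGrp
```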
  • Table 5 shows an example of a rotating RGrp mapping for sub-device groups 320 in RAID 5 band that do not equal integer powers of two plus one devices for parity (e.g., 3, 5, 9, and so on).
  • Table 5 outlines the process for band RGrp mapping in a RAID 5 level that does not include an integer power of two number of devices plus a parity device in sub-device groups 320.
  • the number of RGrps (e.g., RGrp1, RGrp2, RGrp3, and so on) is equal to the number of devices in each of sub-device groups 320.
  • there are four RGrps in the four device sub-device group 320, namely RGrp1, RGrp2, RGrp3, and RGrp4; six RGrps in the six device sub-device group 320, namely RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, and RGrp6; and eight RGrps in the eight device sub-device group 320, namely RGrp1, RGrp2, RGrp3, RGrp4, RGrp5, RGrp6, RGrp7, and RGrp8.
  • the number of devices an RGrp will stripe across is equal to the next lower integer power of two plus one.
  • the next lower integer power of two plus one is four plus one, which is five. Therefore, each RGrp (RGrp1 - 8) stripes across five devices in an eight disk sub-device group 320.
  • the next lower integer power of two plus one for the six disk sub-device group is also four plus one, which is five.
  • for the four disk sub-device group 320, the next lower integer power of two plus one is two plus one, which is three.
  • four RGrps (RGrp1 - 4) therefore stripe across three disks in the four disk sub-device group 320.
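  • The stripe-width rules stated above for both RAID 0 (largest integer power of two that fits in the sub-device group) and RAID 5 (largest integer power of two plus one that fits, the extra device carrying parity) can be captured in a few lines. The helper below is a sketch of the stated rule only; its names and signature are assumptions, not the patent's implementation.

```python
def rgrp_stripe_width(num_devices: int, raid_level: int) -> int:
    """Number of devices a single redundancy group (RGrp) stripes across:
    RAID 0 uses the largest integer power of two <= num_devices; RAID 5 uses
    the largest (power of two + 1) <= num_devices."""
    width = 1
    while True:
        candidate = width * 2
        target = candidate if raid_level == 0 else candidate + 1
        if target > num_devices:
            break
        width = candidate
    return width if raid_level == 0 else width + 1

def rgrps_per_band(num_devices: int, raid_level: int) -> int:
    # If one RGrp already covers every device, no rotation is needed;
    # otherwise one rotating RGrp per device is used to map the band.
    return 1 if rgrp_stripe_width(num_devices, raid_level) == num_devices else num_devices

# Examples matching the description:
assert rgrp_stripe_width(7, 0) == 4 and rgrps_per_band(7, 0) == 7   # Table 4
assert rgrp_stripe_width(8, 5) == 5 and rgrps_per_band(8, 5) == 8   # Table 7
assert rgrp_stripe_width(4, 5) == 3 and rgrps_per_band(4, 5) == 4
assert rgrp_stripe_width(8, 0) == 8 and rgrps_per_band(8, 0) == 1   # Table 6
```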
  • Each RGrp category striped across multiple devices is known as a cluster.
  • the RGrp1 sections together combine into a single cluster.
  • RGrp2 sections are another cluster, and so on.
  • a cluster is a configurable value that is used to manage user data within a sub-device group. It is not used for managing parity data for RAID 5 volumes.
  • the minimum cluster size is 1MB and must be an integer power of two.
  • the cluster size is set before any device groups or volumes are created, and that size is used by all device groups within RAID controller 130.
  • Table 6 shows an example of a cluster map that includes clusters of a single band in an eight disk sub-device group 320 that is configured for a RAID 0 level.
  • the band is in an eight disk sub-device group 320 at RAID level 0 and includes n+10 clusters, which are all mapped to RGrp1. Since eight is an integer power of two, rotating RGrps are not required and, therefore, the band can use the same RGrp, in this case, RGrp1. For this configuration, only one RGrp is required to map all the clusters in the band.
  • Table 7 illustrates an example of a RAID 5 cluster map in which rotating redundancy is required, because the number of disks is not equal to an integer power of two plus one. Therefore, eight RGrps are required to map across all of the disks in sub-device group 320. This translates to eight clusters for that stripe. The RGrp rotation repeats for the next stripe, which translates into another group of eight clusters that has an offset of six (RGrp1 starts 6 stripes up from the first stripe). The third group of RGrps maps to a third set of eight clusters with an offset of eleven (RGrp1 starts again 11 stripes from the first stripe), and so on.
  • Groups of eight clusters are mapped by eight RGrps, and each set is identified by a specific offset in the map.
  • the top of the band has space available to map six clusters only, because a single RGrp must span five disks (in this example), and there is not enough space to map RGrp7 or RGrp8, so the map completes at the end of RGrp6, which spans the required five disks.
  • Figure 4 shows an example of a volume configuration 1100, which includes a volume band list 1110, which further includes volumes 330a and 330b, when a create volume command is received by SM 228 for a RAID level 0 logical volume 330a and a RAID level 5 logical volume 330b.
  • logical volumes 330 only require one sub-device group 320, as no mirroring data in a separate sub-device group 320 is required.
  • SM 228 allocates bands in a sub-device group 320a (for example) to logical volume 330a and assigns the bands a RAID level 0. When the bands are assigned to logical volume 330a, they move from a state of being free to that of being allocated.
  • SM 228 also allocates bands in a sub-device group 320a (for example) to logical volume 330b and assigns the bands a RAID level 5. When the bands are assigned to logical volume 330b, they move from a state of being free to that of being allocated.
  • Figure 4 shows an example of SM 228 allocating bands in a sub-device group 320 to two different logical volumes 330, a RAID 0 logical volume 330a that is 6GB, and a RAID 5 logical volume 330b that is 5.1GB.
  • Figure 5 is a method 500 of allocating bands to volumes.
  • SM 228 divides each sub-device group into bands that may later be assigned to a RAID 0 or a RAID 5 volume. This process includes the following method steps:
  • Step 510: Calculating redundancy groups
  • Step 520: Calculating common band widths for RAID 0 and RAID 5
  • SM 228 compares the RAID 0 redundancy group map to the RAID 5 redundancy group map for a particular sub-device group and determines a common 1 MB boundary, where a full redundancy group rotation ends. This marks a band boundary, where either RAID 0 or RAID 5 may be assigned to the band.
  • Method 500 proceeds to step 530.
  • Step 530: Calculating cluster map for each band
  • SM 228 calculates the cluster maps for each of the bands, since the band boundaries have already been defined in the previous steps for each sub-device group, and redundancy groups have been calculated for each band for both RAID 0 and RAID 5.
  • Cluster maps for rotating redundancy are in a slightly different format from cluster maps in which a single redundancy group maps all of the clusters in a band, as shown in Tables 7 and 6, respectively.
  • Method 500 proceeds to step 540.
  • Step 540: Are there any free bands?
  • SM 228 receives a request for a new volume creation, including information about the size of the requested volume, the desired sub-device group, and its RAID level. SM 228 analyzes the sub-device group for bands that are free and bypasses bands that are already allocated to other volumes. SM 228 checks whether there are any free bands left for allocation in the requested sub-device group. If yes, method 500 proceeds to step 550; if no, method 500 proceeds to step 570.
  • Step 550: Allocating a band to a volume
  • SM 228 allocates to the new volume the first available band that meets the requirements for the requested volume and assigns the requested RAID type to the band. SM 228 continues to scan for free bands, until the entire requested volume size has been satisfied with enough allocated bands from the sub-device groups. However, if there are not enough free bands to allocate to the new volume, SM 228 generates a message to the system administrator when the space allocated to the volume begins to reach capacity and informs the system administrator that data should be migrated to other volumes or that more memory devices 150 should be added to the sub-device group. Method 500 proceeds to step 560.
  • Step 560: Bringing volume online
  • SM 228 sets the state of the allocated bands from “free” to “allocated” and brings the new volume online by allowing host access.
  • Method 500 ends.
  • Step 570: Generating volume creation error
  • SM 228 generates an error message to the system administrator that indicates that there are no free bands in the desired sub-device group with which to allocate the newly requested volume.
  • Method 500 ends.
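  • The band-allocation flow of steps 540 through 570 can be summarized in a short sketch. The data shapes (a list of band records with 'state' and 'raid_level' fields) and the warning path are assumptions made for illustration; they are not the patent's structures.

```python
def allocate_volume(sub_device_group, requested_bands: int, raid_level: int):
    """Sketch of the band-allocation flow in Figure 5 (steps 540-570): scan the
    requested sub-device group for free bands, claim the first ones that fit,
    and report an error if none are free."""
    free = [b for b in sub_device_group if b["state"] == "free"]
    if not free:                                   # step 570: no free bands at all
        raise RuntimeError("volume creation error: no free bands in sub-device group")
    chosen = free[:requested_bands]                # step 550: first available bands
    for band in chosen:
        band["raid_level"] = raid_level
        band["state"] = "allocated"                # step 560: free -> allocated
    if len(chosen) < requested_bands:              # warn the administrator about the shortfall
        print("warning: fewer free bands than requested; "
              "migrate data or add devices to the sub-device group")
    return chosen

# Example: three-band group, ask for two RAID 5 bands.
group = [{"id": i, "state": "free", "raid_level": None} for i in range(3)]
volume = allocate_volume(group, requested_bands=2, raid_level=5)
print([b["id"] for b in volume])   # [0, 1]; band 2 remains free
```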
  • the RAID controller's processor has more throughput available for other system resources and thereby increases overall system performance over that of conventional networked storage systems.
  • This method of allocation also allows more user flexibility in designing the system for various data storage needs, because the pre-mapped bands are assigned to a new volume, as defined by the user, rather than by the RAID controller that allocates volumes according to internal algorithms with little or no user input.
  • this allocation method allows more memory device capacity to be utilized, because the bands align on the nearest megabyte boundaries and the way the clusters are laid out results in very little unused space on the devices.
  • the only space that is not available to the user is the Meta Data area and a portion at the end of the device.
  • the unmapped space at the end of the device is used for reassigning clusters during error recovery.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

A system, a method, and a program are provided for allocating physical memory from a group of N memory devices to logical volumes. The group of N memory devices is partitioned into a plurality of bands, each of the group of N memory devices sharing a portion of each of the plurality of bands. A cluster map is generated for each of the plurality of bands. The cluster maps indicate the physical address for each of a plurality of clusters. Each of the plurality of clusters is distributed equally over two or more of the N memory devices to ensure a specified level of redundancy for each of the plurality of bands. Each of the N memory devices shares an approximately equal number of clusters. Available bands are determined and allocated to a logical volume.
PCT/US2005/034210 2004-09-22 2005-09-22 System and method for mapping physical and logical structures in RAID arrays WO2006036810A2 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/662,745 US7694072B2 (en) 2005-09-22 2005-09-22 System and method for flexible physical-logical mapping raid arrays
EP05800827A EP1828905A4 (fr) 2004-09-22 2005-09-22 System and method for mapping physical and logical structures in RAID arrays

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US61180204P 2004-09-22 2004-09-22
US60/611,802 2004-09-22

Publications (2)

Publication Number Publication Date
WO2006036810A2 true WO2006036810A2 (fr) 2006-04-06
WO2006036810A3 WO2006036810A3 (fr) 2006-07-06

Family

ID=36119458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/034210 WO2006036810A2 (fr) 2004-09-22 2005-09-22 System and method for mapping physical and logical structures in RAID arrays

Country Status (2)

Country Link
EP (1) EP1828905A4 (fr)
WO (1) WO2006036810A2 (fr)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5636356A (en) * 1992-09-09 1997-06-03 Hitachi, Ltd. Disk array with original data stored in one disk drive and duplexed data distributed and stored in different disk drives
JP3344907B2 (ja) * 1996-11-01 2002-11-18 Fujitsu Limited RAID apparatus and logical volume access control method
US6425052B1 (en) * 1999-10-28 2002-07-23 Sun Microsystems, Inc. Load balancing configuration for storage arrays employing mirroring and striping
KR100392382B1 (ko) * 2001-07-27 2003-07-23 한국전자통신연구원 동적 크기 변경 및 메타 데이터 양의 최소화를 위한 논리볼륨 관리 방법
US7000087B2 (en) * 2001-11-07 2006-02-14 International Business Machines Corporation Programmatically pre-selecting specific physical memory blocks to allocate to an executing application
US7254813B2 (en) * 2002-03-21 2007-08-07 Network Appliance, Inc. Method and apparatus for resource allocation in a raid system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP1828905A4 *

Also Published As

Publication number Publication date
EP1828905A2 (fr) 2007-09-05
WO2006036810A3 (fr) 2006-07-06
EP1828905A4 (fr) 2009-05-06

Similar Documents

Publication Publication Date Title
US7694072B2 (en) System and method for flexible physical-logical mapping raid arrays
US10782882B1 (en) Data fingerprint distribution on a data storage system
EP1810173B1 (fr) System and method for configuring memory devices for use in a network
US10073621B1 (en) Managing storage device mappings in storage systems
US5758050A (en) Reconfigurable data storage system
US6895467B2 (en) System and method for atomizing storage
CN101414245B (zh) 存储装置以及使用该存储装置的数据存储方法
US7519745B2 (en) Computer system, control apparatus, storage system and computer device
US7984258B2 (en) Distributed storage system with global sparing
US9547446B2 (en) Fine-grained control of data placement
US8972657B1 (en) Managing active—active mapped logical volumes
US20100049931A1 (en) Copying Logical Disk Mappings Between Arrays
US8972656B1 (en) Managing accesses to active-active mapped logical volumes
US7966449B2 (en) Distributed storage system with global replication
US11436113B2 (en) Method and system for maintaining storage device failure tolerance in a composable infrastructure
US11797387B2 (en) RAID stripe allocation based on memory device health
US9848042B1 (en) System and method for data migration between high performance computing architectures and de-clustered RAID data storage system with automatic data redistribution
US11201788B2 (en) Distributed computing system and resource allocation method
US8949526B1 (en) Reserving storage space in data storage systems
WO2006036810A2 (fr) System and method for mapping physical and logical structures in RAID arrays
US20200068042A1 (en) Methods for managing workloads in a storage system and devices thereof
US11868612B1 (en) Managing storage operations in storage systems
US11630596B1 (en) Sharing spare capacity of disks with multiple sizes to parallelize RAID rebuild
US20070299957A1 (en) Method and System for Classifying Networked Devices

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

NENP Non-entry into the national phase in:

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2005800827

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 11662745

Country of ref document: US

WWP Wipo information: published in national office

Ref document number: 2005800827

Country of ref document: EP