WO2016068976A1 - Storage array allocator - Google Patents

Storage array allocator

Info

Publication number
WO2016068976A1
Authority
WO
WIPO (PCT)
Prior art keywords
array
operation rate
allocation
storage array
drive
Prior art date
Application number
PCT/US2014/063320
Other languages
English (en)
Inventor
Wade J. Satterfield
David K. BESEN
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2014-10-31
Filing date: 2014-10-31
Publication date: 2016-05-06
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2014/063320 priority Critical patent/WO2016068976A1/fr
Publication of WO2016068976A1 publication Critical patent/WO2016068976A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/0605 Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0689 Disk arrays, e.g. RAID, JBOD

Definitions

  • Data can be stored and retrieved in storage arrays such as storage area networks and mass storage servers.
  • Each storage array can include different tiers of storage drives, each with its own properties. Some storage drives may be faster but more expensive per gigabyte, whereas other storage drives may be slower but cheaper per gigabyte.
  • When a storage array uses multiple types of storage drives, it can be called a multi-tier storage array, as each storage drive may belong to a particular tier. For example, more expensive, faster storage drives may belong to one tier, while cheaper, slower storage drives may belong to another tier of storage.
  • FIG. 1 is a diagram of a computing device for determining an array allocation
  • FIG. 2 is a block diagram of an example method for providing an array allocation
  • FIG. 3 is a block diagram of an example method for generating a multi-tier storage array allocation
  • FIG. 4 is a diagram of a non-transitory, computer-readable medium that generates a multi-tier storage array allocation
  • FIG. 5 is a graph with example data illustrating how uneven the input/output operations per second (IOPS) distribution can be for several applications.
  • A storage array may have multiple tiers in that the storage drives of the storage array can belong to different tiers based on their characteristics.
  • The benefits of a multi-tiered storage array are pronounced when a storage array migrates stored data based on its frequency of access.
  • Migrating storage data throughout the array based on use results in a more efficient allocation of resources. For example, when frequently used data is migrated to higher value, faster storage drives, the benefits of that high value drive are amplified compared to a storage array that allows the same data to remain on a low value, slower storage drive.
  • This dynamic allocation of storage data to various storage drives based on frequency of use affects how a storage array should be allocated during its design stages.
  • This disclosure discusses and makes use of one aspect of modern storage arrays, namely storage arrays that automatically migrate frequently accessed data into the faster tiers and less frequently accessed data into the slower tiers. Predicting how big each tier should be complicates the planning process when buying, or upgrading, a storage array. Accordingly, this disclosure relates generally to an array allocator that generates an array allocation for multi-tier storage arrays.
  • FIG. 1 is a diagram of a computing device for determining an array allocation.
  • The computing device 100 may be, for example, a laptop computer, desktop computer, ultrabook, tablet computer, mobile device, or server, among others.
  • The computing device 100 may be used as an array allocator for generating an array allocation for a storage array 106.
  • A storage array may be manually or automatically configured with the array allocation.
  • For example, the array allocation may be sent over a bus 108 to the storage array 106.
  • Alternatively, the array allocation may be used to populate a drive type order request that will be used to configure the storage array with this particular array allocation and the ordered drive types.
  • The computing device 100 may include a central processing unit (CPU) 102 that is configured to execute stored instructions, as well as a memory device 104 that stores instructions that are executable by the CPU 102.
  • The CPU may be coupled to the memory device 104 by a bus 108.
  • The CPU 102 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations.
  • The computing device 100 may include more than one CPU 102.
  • The computing device 100 also includes a storage device 110.
  • The storage device 110 is a physical memory such as a hard drive, an optical drive, a thumbdrive, an array of drives, or any combinations thereof.
  • The storage device 110 may also include remote storage drives.
  • The computing device 100 may also include a memory device 104, which itself can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems.
  • The memory device 104 may include dynamic random access memory (DRAM).
  • The computing device 100 may also be connected to a network 112.
  • The network 112 may be a wide area network (WAN), local area network (LAN), or the Internet, among others.
  • The CPU 102 may be linked through the bus 108 to a display interface 114 configured to connect the computing device 100 to a display device 116.
  • The display device 116 may include a display screen that is a built-in component of the computing device 100.
  • The display device 116 may also include a computer monitor, television, or projector, among others, that is externally connected to the computing device 100.
  • The CPU 102 may also be connected through the bus 108 to an input/output (I/O) device interface 118 configured to connect the computing device 100 to one or more I/O devices 120.
  • The I/O devices 120 may include, for example, a keyboard and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others.
  • The I/O devices 120 may be built-in components of the computing device 100, or may be devices that are externally connected to the computing device 100.
  • The computing device 100 may also include a network interface controller (NIC) 122 configured to connect the computing device 100 through the bus 108 to the network 112.
  • The computing device 100 and each of its components may be powered by a power supply unit (PSU) 124.
  • The CPU 102 may be coupled to the PSU 124 through the bus 108, which may communicate control signals or status signals between the CPU 102 and the PSU 124.
  • The PSU 124 is further coupled through a power source connector 126 to a power source 128.
  • The power source 128 provides electrical current to the PSU 124 through the power source connector 126.
  • A power source connector can include conducting wires, plates, or any other means of transmitting power from a power source 128 to the PSU 124.
  • The computing device 100 also includes an array allocator 130, which may be included in the storage device 110.
  • The array allocator 130 may store inputs received from a user that can be used by the array allocator 130 to generate an array allocation for a storage array.
  • The array allocator 130 can receive user specifications entered at one of the I/O devices 120 or over the network 112, or retrieved from the memory device 104 or the storage device 110.
  • The received user specifications can describe a storage size for each tier of storage devices and an estimate of the operation rate that will be demanded of the storage array as a whole.
  • This operation rate can be measured in IOPS.
  • The operation rate can also be a total bandwidth, measured as an amount of data transferred in a given time interval, e.g., megabytes per second.
  • The operation rate may also be based on a number of "drive writes" to the storage in a given interval.
  • The operation rate metric chosen may vary based on the types of storage being used.
  • For example, the drive write count metric may allocate more effectively between write-count-sensitive tiers of solid state drives (SSDs). In this case, the goal may not be exclusively higher performance for the cost, but could also include the longest life of the array for the cost.
  • The array allocator 130 could then estimate the array allocation assuming the operation rate should be uniformly distributed across each tier of storage devices. However, because an operation rate is rarely distributed uniformly, or evenly, across all drives, an allocation based on this assumption leads to inefficient storage array allocations.
  • The array allocator 130 instead can select a distribution coefficient that allocates resources in a more efficient manner.
  • One example distribution coefficient follows the Pareto distribution, which follows an 80/20 rule. In the context of array allocations for storage arrays, using this Pareto distribution would generate an array allocation where 80 percent of the operation rate would be performed on 20 percent of the storage space. This nonuniform distribution is much closer to the uneven operation rate distribution seen in most programs that access a storage array. Accordingly, if the array allocator 130 implemented this distribution, a more efficient array allocation can be generated. An array allocation is more efficient when its resource allocation is as close as possible to a given workload. A minimal numeric sketch of the 80/20 curve follows.
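To make the 80/20 idea concrete, here is a minimal Python sketch (not part of the patent; the function name and the power-curve form are illustrative assumptions) that maps a fraction of storage space to the fraction of the operation rate expected to land on it:

```python
import math

def iops_share(space_fraction, hot_fraction=0.2, hot_share=0.8):
    """Cumulative fraction of the operation rate expected on the
    busiest `space_fraction` of the storage space.

    Models the 80/20 rule as a power curve F(s) = s**alpha, with alpha
    chosen so that F(hot_fraction) == hot_share. The power-curve form
    is an assumption; the text only requires a Pareto-like,
    nonuniform distribution.
    """
    alpha = math.log(hot_share) / math.log(hot_fraction)  # ~0.139 for 80/20
    return space_fraction ** alpha

print(round(iops_share(0.20), 2))  # 0.8 -> 20% of space carries 80% of IOPS
```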
  • For some workloads, a small number of expensive, high performance resources may be the most cost-effective way to deliver the desired performance.
  • For other workloads, cheaper assets such as slower disk drives may be used in place of faster tiers of assets such as solid state memory.
  • In those cases, the more efficient distribution would minimize cost by having few, if any, high performance components while allocating a relatively larger number of low performance, high volume storage units. Accordingly, determining an accurate multi-tier array allocation can ensure that the performance goals are met at a minimum cost.
  • The 80/20 Pareto distribution is only one example of a nonuniform distribution coefficient that can be used to more accurately distribute the operation rate for the entire array; other distribution coefficients may also be used. Distributions may follow a fractional, logarithmic, or exponential decay distribution, or any other similar nonlinear operation rate distribution that emphasizes that the majority of the operations may take place on a minority of the storage space. Two such alternative curves are sketched below.
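Two alternative cumulative curves of this kind could look like the following; the exact formulas and shape constants are illustrative assumptions, not formulas from the text:

```python
import math

def exponential_share(s, k=10.0):
    """Cumulative IOPS share for an exponential-decay density;
    steep near s = 0 and saturating toward 1 (k is an assumed shape)."""
    return (1.0 - math.exp(-k * s)) / (1.0 - math.exp(-k))

def log_share(s, c=50.0):
    """Logarithmic cumulative IOPS share; c is an assumed shape."""
    return math.log1p(c * s) / math.log1p(c)

# Both emphasize that most operations land on a minority of the space:
print(round(exponential_share(0.2), 2))  # ~0.86
print(round(log_share(0.2), 2))          # ~0.61
```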
  • The disclosed array allocator can also take as input a distribution that is specifically tailored to a user's access data. For example, if a particular program used by the user would operate better with a particular distribution of the operation rate across varying tiers of storage drives, a customized operation rate distribution can be provided to the array allocator 130 in order to generate an array allocation specifically tailored to that application.
  • An array allocation generated using the array allocator 130 is most useful for systems that automatically migrate the most frequently accessed segments of storage to the fastest tier of storage.
  • Storage arrays generally can be manually configured to divide accesses between a number of disks.
  • The present array allocator predicts the most accurate array allocation for arrays that automatically migrate frequently accessed data to faster access memory units.
  • Automatically migrating disk systems allow a more consistent distribution and are, in part, what enables a single distribution parameter to be used to determine an operation rate distribution in an array allocation.
  • Migration of data to higher performance storage units allows the bulk of the operation rate in a system to take place on faster tiers of storage.
  • Another example for accomplishing this migration is a system that migrates least used (or unused) disk blocks to a slower tier or memory region, usually located on cheaper disk drives with slower access times.
  • FIG. 2 is a block diagram of an example method for providing an array allocation.
  • The method 200 can be performed by hardware or a combination of hardware and software.
  • For example, the method 200 may be performed by the array allocator 130 running on the CPU 102 (FIG. 1).
  • The user input can include the total size of the storage array, the total IOPS the storage array will be handling, the types of drives used in each tier, and a distribution parameter (these inputs are collected in the sketch after this passage). While IOPS is used here as an example, any measure of the operation rate of an array could also be used.
  • The distribution parameter may be a Pareto distribution, customized user IOPS distribution data, or any other nonuniform data distribution.
  • The distribution parameter may be used in place of actual workload data of a storage array, as workload data requires measuring the actual performance characteristics of a completed storage array. In contrast, the distribution parameter allows a relatively accurate approximation of the IOPS distribution prior to the actual creation of a storage array with which to generate workload data.
  • The distribution parameter does not require the gathering of IOPS data on an actively or previously operating storage or disk array, and in fact enables the predictive allocation of IOPS across a number of tiers.
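Collecting the inputs described above into one structure, a minimal sketch might look like the following; all field names and the example numbers are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ArraySpec:
    """User input to the array allocator (illustrative field names)."""
    total_size_gb: float   # total size of the storage array
    total_iops: float      # operation rate the array must handle
    distribution: float    # distribution parameter, e.g. 0.8 for 80/20
    # (name, drive_size_gb, drive_iops_capacity, cost_per_gb) per drive type:
    drive_types: List[Tuple[str, float, float, float]] = field(default_factory=list)

spec = ArraySpec(total_size_gb=20_000, total_iops=50_000, distribution=0.8,
                 drive_types=[("ssd", 400, 20_000, 2.50),
                              ("nearline", 4_000, 150, 0.05)])
```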
  • The number of tiers to be used in the generated array allocation can be determined and designated for use by the array allocator.
  • In some examples, the user input includes the sizes of the tiers of a disk array.
  • In other examples, the size of each tier can be estimated by the array allocator.
  • The tier size may also correspond to size information of the various drive types being used. For example, a tier size may be one third of the total storage array size.
  • In some examples, the number of tiers is based on the various capabilities of the drive types. In this way, the number of tiers is not limited to only two tiers but may include three, four, or any number as determined by the number of available drive types.
  • A distribution of IOPS may be determined for each tier. Again, this determination can be based on the distribution parameter, whether it is a Pareto parameter, custom distribution data, or any other nonuniform distribution coefficient that emphasizes a larger share of IOPS on a smaller amount of storage space.
  • This distribution is not limited to the Pareto principle, where 80 percent of the IOPS is allocated to only 20 percent of the disk space. The percentage of IOPS and the percentage of disk space do not need to add up to 100 percent. For example, if designated by a distribution parameter, 80 percent of the IOPS may be allocated to 10 percent of the disk space.
  • In one example, the first tier may include 15 percent of the disk space, the second tier may include 25 percent of the disk space, and the third tier may include 60 percent of the disk space.
  • In the same example, the distribution may allocate 70 percent of the IOPS to the first tier, 20 percent of the IOPS to the second tier, and 10 percent of the IOPS to the third tier. This disclosure is not, however, limited to these examples and may include any combination of tier and IOPS distributions. A worked version of this example follows below.
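A worked version of this example, assuming a hypothetical total of 10,000 IOPS for the array (the text gives only percentages):

```python
total_iops = 10_000  # assumed total; only the percentages come from the text

tiers = {
    "tier 1": (0.15, 0.70),  # (share of disk space, share of IOPS)
    "tier 2": (0.25, 0.20),
    "tier 3": (0.60, 0.10),
}

for name, (space, share) in tiers.items():
    print(f"{name}: {space:.0%} of space, {share * total_iops:,.0f} IOPS")
# tier 1: 15% of space, 7,000 IOPS
# tier 2: 25% of space, 2,000 IOPS
# tier 3: 60% of space, 1,000 IOPS
```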
  • A number of drives can be assigned to each tier based on the tier size, the IOPS assigned to that tier, the IOPS capacity of the drive type, and the size of the drive type; a minimal calculation of this kind is sketched below. This assignment allows for variation in the drive type based on the specifications of a particular tier. For example, if it is determined that a high number of IOPS should be used for a particular amount of storage space, that amount of storage space may be allocated an equivalent amount of space from SSDs that have a high IOPS capacity. Likewise, tiers that are not projected to experience a high number of IOPS may be assigned cheaper storage drive types, such as hard disk drives or other "nearline" drives with high storage size but lower IOPS capacity.
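A minimal sketch of that assignment, sizing a tier by whichever constraint (capacity or IOPS) demands more drives; the patent names these inputs but no specific formula, and the numbers below are hypothetical:

```python
import math

def drives_needed(tier_size_gb, tier_iops, drive_size_gb, drive_iops):
    """Drives required to satisfy both the space and the IOPS
    assigned to a tier (illustrative formula)."""
    by_capacity = math.ceil(tier_size_gb / drive_size_gb)
    by_iops = math.ceil(tier_iops / drive_iops)
    return max(by_capacity, by_iops)

# Hypothetical SSD tier: 1,500 GB of space carrying 7,000 IOPS.
print(drives_needed(1_500, 7_000, drive_size_gb=400, drive_iops=20_000))  # 4
# Hypothetical nearline tier: 6,000 GB carrying 1,000 IOPS (IOPS-bound).
print(drives_needed(6_000, 1_000, drive_size_gb=4_000, drive_iops=150))   # 7
```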
  • An array allocation can then be generated that allocates drive types to achieve a target IOPS response time, which can be another input to the array allocator.
  • A distribution function created from the distribution parameter may predict how many IOPS will be requested of a single drive, and an estimate can then be generated for the response time of the drive.
  • The "response time" of a particular drive or tier can be a function of the work requested and the capacity of the drive.
  • An array allocation based on tier parameters such as response time may be non-uniform. In one example, the response time doubles at about 50 percent of capacity and goes to infinity if the maximum capacity is exceeded. Response times may not follow a Pareto distribution, but related operation rates such as IOPS can follow such a distribution.
  • The drives allocated could also be matched by the IOPS response time of each drive at a particular IOPS level. This example could also adjust the sizes of the tiers to make sure the worst-case response time of each tier, based on the IOPS and the drive type, is under a desired target. A small model of this response time behavior is sketched below.
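A small model matching that description; the queueing form R = S / (1 - u) is an assumption (the text states the doubling-at-50-percent behavior, not a formula), and the 1 ms service time is illustrative:

```python
def response_time_ms(requested_iops, max_iops, service_time_ms=1.0):
    """Estimated drive response time as the requested operation rate
    approaches the drive's capacity: doubles at 50 percent utilization
    and tends to infinity as maximum capacity is exceeded."""
    utilization = requested_iops / max_iops
    if utilization >= 1.0:
        return float("inf")  # over capacity
    return service_time_ms / (1.0 - utilization)

print(response_time_ms(0, 20_000))       # 1.0 ms at idle
print(response_time_ms(10_000, 20_000))  # 2.0 ms at 50% of capacity
print(response_time_ms(19_000, 20_000))  # 20.0 ms near saturation
```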
  • FIG. 3 is a block diagram of an example method for generating a multi-tier storage array allocation. Using this method 300, at block 302, array specifications are received. These array specifications contain information about the size of the desired storage array, a measure of the operation rate expected of the storage array as a whole, a distribution parameter, and information about the drives available for allocation. Further, these array specifications may be received at the array allocator itself or in another storage location that the array allocator can access for further use.
  • A multi-tier array allocation is then generated. This multi-tier array allocation is generated using the array specifications and the array allocator.
  • The array allocation is then stored.
  • The storage of the array allocation may take place on the array allocator itself for later access by a user.
  • The stored data may take the form of raw data or may be stored in a drive type order request, such as an ordering form that can request the correct numbers of drives based on the stored array allocation.
  • In some examples, the drive allocation is stored in a location where it does not order the actual drive types but may use the stored array allocation to display or store a potential associated cost for a potential customer of a storage array. In this manner, the array allocator that generated the stored array allocation may be used to highlight the benefit of the allocation by displaying information about the array allocation such as price, size, operation rate capacity, and response time for the overall storage array using the stored array allocation.
  • FIG. 4 is a block diagram showing a tangible, non-transitory, computer-readable medium 400 that generates a multi-tier storage array allocation.
  • The computer-readable medium 400 can include RAM, a hard disk drive, an array of hard disk drives, an optical drive, an array of optical drives, a non-volatile memory, a flash card or flash drive, a digital versatile disk (DVD), or a compact disk (CD), among others.
  • The computer-readable medium 400 can be accessed by a processor 402, such as an ASIC, over a system bus 404.
  • The computer-readable medium (CRM) 400 may direct the processor 402 through stored instructions to perform the steps of the current method as described with respect to FIGS. 2 and 3 and other embodiments disclosed herein.
  • The computer-readable medium 400 may include code configured to perform the methods described herein.
  • The various software components discussed herein may be stored on the computer-readable medium 400.
  • The computer-readable medium 400 may include an array specification receiver 406 to receive array specifications for use in later processes, operations, and computations of the computer-readable medium.
  • The computer-readable medium 400 may also include a distribution engine 408, which can compute an operation rate distribution. This operation rate distribution can be based on data received at the array specification receiver 406.
  • The computer-readable medium 400 may also include an array allocation generator 410 to generate an array allocation from the array specifications received at the array specification receiver 406 and the operation rate distribution generated by the distribution engine 408.
  • The array allocation generator 410 may size the tiers from the least expensive per gigabyte to the most expensive per gigabyte.
  • An initial step may allocate as much space as possible to the least expensive tier without causing any performance problems.
  • As more space is allocated to this least expensive tier, more IOPS will be requested of the drives in this tier, and there will usually be a space allocation beyond which there are more IOPS in the least expensive tier than the drives in that tier can handle.
  • A distribution function can be used by the distribution engine 408 to estimate the number of IOPS in each of the tiers for any tier size.
  • The size limit of the tier can be set by one or more estimates. Examples of these limiting estimates include the number of IOPS on the busiest drive in the tier, the average response time of the drives in the tier, the average bandwidth transferred by the drives in the tier, the worst-case response time of the busiest redundant array of inexpensive disks (RAID) group in the tier, or other metrics that reflect the performance of the tier.
  • Once the least expensive tier is sized, the tier with the next lowest cost per gigabyte is sized. This process of sizing the tiers from the least expensive to the most expensive continues until all of the tiers are sized; a compact sketch of this loop follows below.
  • The size of some tiers may be zero. For example, if the IOPS are low enough there may be no need for the most expensive storage, or if the IOPS are high enough, it may not be possible to use any of the least expensive storage. Accordingly, the array allocation generator 410 can account for these possibilities and may return an array allocation matching the specifications of the particular array in question.
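A compact sketch of this cheapest-first loop, reducing the several possible limiting estimates to a single per-tier IOPS limit; the step size, names, and curve are all illustrative assumptions:

```python
def tail_iops(cold_fraction, total_iops, hot_share_curve):
    """IOPS landing on the coldest `cold_fraction` of the space, given a
    cumulative curve like iops_share() sketched earlier."""
    return total_iops * (1.0 - hot_share_curve(1.0 - cold_fraction))

def size_cheapest_tier(total_gb, total_iops, tier_max_iops,
                       hot_share_curve, step_gb=10.0):
    """Allocate as much space as possible to the least expensive tier
    without exceeding the IOPS its drives can handle; the same shrink
    loop would then repeat for the next-cheapest tier."""
    size_gb = total_gb
    while size_gb > 0 and tail_iops(size_gb / total_gb, total_iops,
                                    hot_share_curve) > tier_max_iops:
        size_gb -= step_gb  # shrink until the tier's IOPS load is feasible
    return max(size_gb, 0.0)

# 80/20 power curve; the cheapest tier ends up just over half the space:
curve = lambda s: s ** 0.1386
print(size_cheapest_tier(10_000, 50_000, tier_max_iops=5_000,
                         hot_share_curve=curve))  # ~5,320 GB
```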
  • The software components can be stored in any order or configuration. For example, if the tangible, non-transitory, computer-readable medium is a hard drive, the software components can be stored in non-contiguous, or even overlapping, sectors.
  • The block diagram of FIG. 4 is not intended to indicate that the computer-readable medium 400 must include all of the components or modules shown in FIG. 4. Further, any number of additional components may be included within the computer-readable medium 400, depending on the details of the specific implementation.
  • FIG. 5 is a graph 500 with example data illustrating how uneven the IOPS distribution can be for several applications.
  • The IOPS measurement here is representative of similar relationships for any operation rate measure, and IOPS should be understood to be only one example measurement of this operation rate.
  • The legend 502 of the graph 500 indicates the three example applications run to provide this data.
  • The y-axis of the graph 500 shows the cumulative percentage of IOPS 504, or access rate, out of the total accesses or IOPS actually undertaken by a particular application.
  • The x-axis of the graph 500 shows the percentage of regions of memory accessed out of the total number of regions for a storage array 506.
  • The example applications include an operating system application 508, a first database application 510, and a second database application 512.
  • Each of these applications is only an example of an application that could be accessing a storage array, and many other applications may have similar access patterns.
  • As the graph shows, the large majority of IOPS may occur on a very small percentage of the space in memory.
  • The presently disclosed array allocator accounts for this relationship by allowing a user to indicate a distribution parameter that more closely matches the distribution of IOPS across space in memory.
  • Array allocators that generate array allocations based on the assumption that IOPS are distributed uniformly across multiple tiers can be improved by accounting for this nonuniform relationship when generating array allocations for storage arrays.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Methods and systems for generating a multi-tier storage array allocation are described. The methods and systems are to include array specifications such as an operation rate, a distribution parameter, and a plurality of drive types, each with an associated operation rate capacity. A processor is to receive the array specifications, compute an operation rate distribution by applying the distribution parameter to the operation rate, and generate a multi-tier storage array allocation. The multi-tier storage array allocation is to include a number of drives specified for each of the plurality of drive types based on a drive type operation rate capacity and the operation rate distribution.
PCT/US2014/063320 2014-10-31 2014-10-31 Storage array allocator WO2016068976A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2014/063320 WO2016068976A1 (fr) 2014-10-31 2014-10-31 Storage array allocator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2014/063320 WO2016068976A1 (fr) 2014-10-31 2014-10-31 Storage array allocator

Publications (1)

Publication Number Publication Date
WO2016068976A1 (fr) 2016-05-06

Family

ID=55858077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/063320 WO2016068976A1 (fr) 2014-10-31 2014-10-31 Storage array allocator

Country Status (1)

Country Link
WO (1) WO2016068976A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010099992A1 (fr) * 2009-03-02 2010-09-10 International Business Machines Corporation Procédé, système et produit-programme d'ordinateur de gestion du placement de données en mémoire dans une infrastructure de mémorisation virtualisée à étages multiples
WO2010127092A2 (fr) * 2009-04-29 2010-11-04 Netapp, Inc. Mécanismes de déplacement de données dans un agrégat hybride
US20120102350A1 (en) * 2010-10-22 2012-04-26 International Business Machines Corporation Reducing Energy Consumption and Optimizing Workload and Performance in Multi-tier Storage Systems Using Extent-level Dynamic Tiering
US8433848B1 (en) * 2010-03-30 2013-04-30 Emc Corporation Analysis tool for a multi-tier storage environment
US20130198449A1 (en) * 2012-01-27 2013-08-01 International Business Machines Corporation Multi-tier storage system configuration adviser

Similar Documents

Publication Publication Date Title
US11073999B2 (en) Extent migration in multi-tier storage systems
US10853139B2 (en) Dynamic workload management based on predictive modeling and recommendation engine for storage systems
US7849180B2 (en) Load balanced storage provisioning
US9971548B1 (en) Storage resource management employing performance analytics
US9983802B2 (en) Allocating storage extents in a storage system
JP6260407B2 (ja) Storage management device, performance adjustment method, and performance adjustment program
US10140034B2 (en) Solid-state drive assignment based on solid-state drive write endurance
WO2011092739A1 (fr) Management system for calculating storage capacity to be increased/decreased
EP2378410A2 Method and apparatus to manage tier information
US9823875B2 (en) Transparent hybrid data storage
US10425352B2 (en) Policy driven storage hardware allocation
US8966213B2 (en) Granting and revoking supplemental memory allocation requests
US11520715B2 (en) Dynamic allocation of storage resources based on connection type
US20210089226A1 (en) Adaptive wear leveling for drive arrays
US20170046089A1 (en) Online flash resource allocation manager based on a tco model
JP2021504780A (ja) Prioritization of applications for automatic diagonal scaling in a distributed computing environment
US10372372B2 (en) Storage system
CN112948279A Method, device and program product for managing access requests in a storage system
EP4258096A1 Predictive block storage size provisioning for cloud storage volumes
US10209749B2 (en) Workload allocation based on downstream thermal impacts
US10956084B2 (en) Drive utilization in multi-tiered systems with read-intensive flash
WO2016068976A1 (fr) 2014-10-31 2014-10-31 Storage array allocator
US20190339898A1 (en) Method, system and computer program product for managing data storage in data storage systems
US20200174885A1 (en) Write-balanced parity assignment within a cluster
US10666754B2 (en) System and method for cache reservation in an information handling system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14904898

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14904898

Country of ref document: EP

Kind code of ref document: A1