CN102341779A - Method, system and computer program product for managing the placement of storage data in a multi tier virtualized storage infrastructure - Google Patents



Publication number: CN102341779A
Application number: CN201080010236A
Other languages: Chinese (zh)
Priority: EP09305191.0
Application filed by International Business Machines Corporation
PCT publication: WO2010099992A1 (application PCT/EP2010/050254)
Publication: CN102341779A



    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
    • G06F3/0601Dedicated interfaces to storage systems
    • G06F3/0628Dedicated interfaces to storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647Migration mechanisms
    • G06F3/0649Lifecycle management
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
    • G06F3/0601Dedicated interfaces to storage systems
    • G06F3/0602Dedicated interfaces to storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from or digital output to record carriers, e.g. RAID, emulated record carriers, networked record carriers
    • G06F3/0601Dedicated interfaces to storage systems
    • G06F3/0668Dedicated interfaces to storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0685Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays


A storage management method for use in a SAN-based virtualized multi-tier storage infrastructure in a loosely defined and changing environment. Each physical storage medium is assigned a tier level based on its read I/O rate access density. The method comprises a top-down method, based on data collected from the virtualization engine compared to the read I/O capability and space capacity of each discrete virtual storage pool, to determine whether re-tiering situations exist, and a drill-in analysis algorithm, based on relative read I/O access density, to identify which data workload should be right-tiered among the composite workload hosted in the discrete virtual storage pool.


Method, system and computer program product for managing the placement of storage data in a multi-tier virtualized storage infrastructure

TECHNICAL FIELD

[0001] The present invention relates to the field of data processing, and in particular to storage management and data placement optimization in multi-tier virtualized storage infrastructures.

BACKGROUND

[0002] Enterprises face major challenges caused by the rapid growth of their storage needs, the increasing complexity of storage management, and the demand for high availability of storage. Storage area network (SAN) technology enables storage systems to be built separately from the host computers through storage pooling, resulting in improved efficiency.

[0003] Storage virtualization, a storage management technique that shields users from the complexity of the physical storage, can also be used. Block virtualization (sometimes also called block aggregation) provides servers with a logical view of the physical storage, such as disk drives, solid-state disks, and tape drives, where the data is actually stored. The logical view may comprise many virtual storage areas into which the available storage space is divided (or aggregated), without regard to the physical layout of the actual storage. The servers no longer see specific physical targets, but instead see logical volumes that can be dedicated to their use. The servers send their data to the virtual storage areas as if they were their own directly attached storage.

[0004] Virtualization can occur at the level of volumes, of individual files, or of blocks that represent specific locations within a disk drive. Block aggregation can be performed within hosts (servers) and/or in storage devices (intelligent disk arrays).

[0005] In data storage, the problem of accurate data placement across a set of storage tiers is one of the most difficult problems to solve. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on the level of protection required, performance requirements, frequency of use, capacity, and other considerations.

[0006] User requirements for data placement are often loosely specified, or based on hopes rather than on accurate capacity planning. Moreover, even when the initial conditions are adequate, applications may experience drastic changes in data access throughout their life cycle. For example, the launch of an Internet application whose future number of users is difficult to predict is very likely to exhibit, at a given time, actual data access behavior very different from the initially configured values and/or the planned activity. As time passes, the application may benefit from functional enhancements that cause an upward change in its data access behavior. Later, selected functions may fall out of use because the functions they provide are taken over by newer applications, causing a downward change in the data access pattern. In addition to the uncertainty of application behavior, data access behavior within a single application may be completely heterogeneous; for example, a highly active database log and a static parameter table will exhibit very different data access patterns. Across all these life-cycle changes, storage administrators face a loosely specified and changing environment in which user technical input cannot be considered accurate and trustworthy enough to make correct data placement decisions.

[0007] The large number of storage technologies used in the storage tiers (Fibre Channel (FC), Serial AT Attachment (SATA), solid-state drives (SSD)), combined with their redundancy setups (RAID 5, RAID 10, etc.), makes application data placement decisions in the storage hierarchy even more complex, since the cost per unit of storage capacity can range from 1 to 20 between SATA and SSD. Using the right tier for application data is now important for enterprises to reduce their costs while maintaining application performance.

[0008] A method for managing the allocation of data sets across multiple storage devices has been proposed in US patent 5,345,584. This method, based on data storage factors for the data sets and the storage devices, is well suited to placing individual data sets on individual storage devices accessed without a local cache layer. That architecture is largely obsolete today, because modern storage devices host data sets in striped mode across multiple storage devices, with a cache layer capable of buffering a high number of write access commands. Moreover, using the total access rate (i.e., the sum of read activity and write activity) to characterize modern storage devices is very inaccurate; for example, a 300 GB Fibre Channel drive can typically sustain 100-150 random accesses per second, while a write cache layer can buffer, for 15 minutes, 1000 write commands per second of 8 KB each (a typical database block size), making the total access rate meaningless. This problem defeats any model that would be based on total read and write activity and capacity.
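The arithmetic in paragraph [0008] can be sketched as follows; the constants and the naive-versus-read-only comparison are illustrative assumptions built from the figures quoted above, not part of the patent:

```python
# Illustration of why the total access rate (reads + writes) mischaracterizes
# a modern cached storage device, using the figures from paragraph [0008].
# The names and the simple model are illustrative, not taken from the patent.

FC_DRIVE_RANDOM_IOPS = 125   # a 300 GB FC drive sustains ~100-150 random IO/s
CACHED_WRITE_IOPS = 1000     # a write cache layer can absorb 1000 x 8 KB writes/s

def total_access_rate(read_iops, write_iops):
    """Naive metric: sum of read and write activity."""
    return read_iops + write_iops

# A workload doing 100 random reads/s plus 1000 cache-absorbed writes/s:
reads, writes = 100, CACHED_WRITE_IOPS
naive = total_access_rate(reads, writes)            # 1100 IO/s of apparent demand
# Judged by the naive metric, the drive looks overloaded by almost 9x ...
overload_naive = naive / FC_DRIVE_RANDOM_IOPS
# ... while the read-only view shows the drive is within capability, because
# the writes are buffered by the cache layer, not served by the spindle.
overload_reads_only = reads / FC_DRIVE_RANDOM_IOPS
```

This is why the method described below keys its tiering decisions on the read I/O rate alone.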

[0009] A method of hierarchically storing data in a storage area network (SAN) has been proposed by the assignee in WO 2007/009910, where the SAN comprises a plurality of host data processors coupled to a storage virtualization engine, which is coupled to a plurality of physical storage media. Each physical medium is assigned a tier level. The method is based on the selective relocation of data blocks when the access behavior of a block exceeds a tier media threshold. This approach may lead to an uneconomical solution for a mixed workload comprising multiple applications, composed of high-demand and low-demand applications. For such a workload, this method will lead to recommending or selecting two types of storage resources: the first of a "high performance, SSD or similar" type and the second of a "low performance, SATA drives or similar" type, whereas a solution based on Fibre Channel (FC) disks might be sufficient, and more economical, to support the "average" performance characteristics of the aggregated workload. In essence, using the 1, 2 and 20 price-per-capacity-unit ratios of the SATA, FC and SSD technologies, the FC solution can turn out to be five times cheaper than the combined SSD and SATA solution.
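The 1 : 2 : 20 price ratios quoted in paragraph [0009] can be checked with a back-of-the-envelope computation; the 50/50 SSD/SATA capacity split assumed here is purely illustrative, chosen to reproduce the order of magnitude of the "five times cheaper" claim:

```python
# Back-of-the-envelope check of the price-per-capacity ratios of [0009].
# The capacity split between SSD and SATA is an assumption made here for
# illustration; the patent does not specify the split.
PRICE_PER_UNIT = {"SATA": 1, "FC": 2, "SSD": 20}

def solution_cost(capacity_by_tech):
    """Cost of a placement given capacity units per technology."""
    return sum(PRICE_PER_UNIT[t] * c for t, c in capacity_by_tech.items())

total_capacity = 100  # arbitrary capacity units
fc_only = solution_cost({"FC": total_capacity})        # 200
split = solution_cost({"SSD": total_capacity / 2,      # 1000
                       "SATA": total_capacity / 2})    # + 50
ratio = split / fc_only  # 5.25: the "five times cheaper" order of magnitude
```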

[0010] The present invention is directed to solving the problems of the above-cited prior art.


SUMMARY

[0011] The present invention proposes a method for managing data placement in a virtualized multi-tier storage infrastructure in a loosely defined and changing environment. Each physical storage medium is assigned a tier level based on its read I/O rate access density. The invention comprises a top-down method, based on data collected from the virtualization engine compared with the read I/O capability and space capacity of each discrete virtual storage pool, to determine whether re-tiering situations exist, and a drill-in analysis algorithm, based on relative read I/O access density, to identify which data workload should be right-tiered among the composite workload hosted in the discrete virtual storage pool.

[0012] The method operates at the level of discrete virtual storage pools and virtual storage disks, and takes advantage of the opportunistically complementary workload profiles that appear in most aggregated mixed workloads. The method significantly reduces the number of re-tiering actions that would be generated by micro-analysis at the block or virtual storage disk level, and provides more economical recommendations.

[0013] Based on a top-down analysis of the behavior of the storage resources, the method detects situations where re-tiering a workload is appropriate, and provides re-tiering (upwards or downwards) recommendations.

[0014] The suggested re-tiering/right-tiering actions can be analyzed by the storage administrator for validation, or automatically passed to the virtualization engine for virtual disk migration.

[0015] The method also includes a write response time component covering quality of service concerns. The method uses alerts based on thresholds defined by the storage administrator. The process includes a structured and repeatable assessment of the virtualized storage infrastructure, and a process flow leading to data workload re-tiering actions. The process also includes a structured flow for analyzing write response time quality of service alerts, determining whether re-tiering is needed, and identifying which data workload should be re-tiered.

[0016] According to the present invention, there are provided a method and a system as described in the appended independent claims.

[0017] Further embodiments are defined in the appended dependent claims.

[0018] The foregoing and other objects, features and advantages of the present invention will now be described, by way of preferred embodiments and examples, with reference to the accompanying drawings.


BRIEF DESCRIPTION OF THE DRAWINGS

[0019] Figure 1 shows an example of a storage area network in which the present invention may be implemented;

[0020] Figure 2 shows a simple view of block virtualization;

[0021] Figure 3 shows the components of a virtualization engine in which the present invention may be implemented;

[0022] Figure 4 shows the components of the storage tiering analyzer (START) used for right-tiering, according to the present invention;

[0023] Figure 5 illustrates the preferred data service model dimensions used in an embodiment of the right-tiering process;

[0024] Figure 6 illustrates the technical and economic domains of use of the storage data services;

[0025] Figures 7A, 7B, 7C and 7D show examples of real situations of mixed data workloads in the technical domain of use of a storage pool;

[0026] Figure 8 illustrates the read I/O rate density of the three-dimensional model used by the present invention;

[0027] Figure 9 shows the read I/O rate density of data composed of two data workloads of different read I/O rate densities, and illustrates the applicable heat analogy;

[0028] Figure 10 shows how the read I/O rate density of a mixed workload is modified when one of the component data workloads is removed;

[0029] Figure 11 shows the threshold-based alert system supporting the present invention;

[0030] Figure 12 provides the process flow of the method described in the present invention as it relates to read I/O rate density and space usage; and

[0031] Figure 13 provides the process flow of an embodiment of the method as it relates to the analysis of write I/O response time alerts.

DETAILED DESCRIPTION

[0032] The present invention proposes the use of a virtualization engine, which has knowledge of both the data and the location of the data, together with an analyzer component used to identify situations where data should be re-tiered and to recommend actual data re-tiering actions.

[0033] Referring to Figure 1, a SAN 100 is shown with several host application servers 102 attached. These can be of many different types, typically some number of enterprise servers and some number of user workstations.

[0034] Also attached to the SAN, via redundant arrays of independent disks (RAID), are various levels of physical storage. In this example, there are three tiers of physical storage: tier 1, which may be enterprise-class storage such as the IBM® System Storage DS8000; tier 2, which may be mid-range storage such as an IBM® System Storage DS5000 equipped with FC disks; and tier 3, which may be low-end storage such as an IBM® System Storage DS4700 equipped with Serial Advanced Technology Attachment (SATA) drives.

[0035] Typically, each managed disk (MDisk) corresponds to a single tier, and each RAID array 101 belongs to a single tier. Each RAID controller 103 may control RAID storage belonging to different tiers. In addition to applying different tiers to different physical disk types, different tiers may also be applied to different RAID types; for example, a RAID-5 array may be placed in a higher tier than a RAID-0 array.

[0036] The SAN is virtualized by a storage virtualization engine 104 which sits in the data path of all SAN data, and which presents virtual disks 106a to 106n to the host servers and workstations 102. These virtual disks are built from the capacity provided by the storage devices across the three tiers.

[0037] The virtualization engine 104 comprises a plurality of nodes 110 (four are shown), which provide virtualization, caching and copy services to the hosts. Typically, the nodes are deployed in pairs and make up a cluster of nodes, each pair of nodes being known as an input/output (I/O) group.

[0038] As storage is attached to the SAN, it is added to various storage pools, each controlled by a RAID controller 103. Each RAID controller presents SCSI (Small Computer System Interface) disks to the virtualization engine. The presented disks can be managed by the virtualization engine, and are called managed disks, or MDisks. These MDisks are divided into extents, fixed-size blocks of usable capacity, which are numbered sequentially from the start to the end of each MDisk. These extents can be concatenated or striped, or any desired algorithm can be used, to produce larger virtual disks (VDisks) which are presented to the hosts by the nodes.

[0039] MDisks M1, M2 ... M9 can be grouped into a managed disk group, or MDG 108, typically characterized by factors such as performance, RAID level, reliability, manufacturer, and so on. According to a preferred embodiment, as shown in Figure 1, all the MDisks in an MDG represent storage of the same tier level. There can be multiple MDGs of the same tier in the virtualized storage infrastructure, each one being a discrete virtual storage pool.

[0040] The virtualization engine translates the logical block addresses (LBAs) of a virtual disk into extents of the VDisk, and maps the extents of the VDisk to MDisk extents. Figure 2 shows an example of the mapping from a VDisk to MDisks. Each extent of VDisk A is mapped to an extent of one of the managed disks M1, M2 or M3. The mapping table, created from the metadata stored by each node, may show that some managed disk extents are unused. These unused extents are available for creating new VDisks, for migration, for expansion, and so on.
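A minimal sketch of the VDisk-to-MDisk extent mapping described in paragraph [0040]; the dictionary layout and the 16 MiB extent size are assumptions made here for illustration, not the engine's actual metadata format:

```python
# Sketch of the extent mapping of Figure 2. The A-3 -> M2-6 entry matches
# the write example of paragraph [0048]; the other entries are invented.

# vdisk_map[vdisk][vdisk_extent] -> (mdisk, mdisk_extent)
vdisk_map = {
    "A": {0: ("M1", 2), 1: ("M2", 0), 2: ("M3", 5), 3: ("M2", 6)},
}

EXTENT_SIZE = 16 * 2 ** 20  # assume 16 MiB extents for the example

def resolve(vdisk, byte_offset):
    """Translate a byte offset on a VDisk into (MDisk, extent)."""
    vdisk_extent = byte_offset // EXTENT_SIZE
    return vdisk_map[vdisk][vdisk_extent]

# A write to VDisk A at an offset inside its extent 3 lands on MDisk 2,
# extent 6, as in the A-3 -> M2-6 example later in the text.
```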

[0041] Typically, virtual disks are created and allocated such that enterprise-class servers initially use enterprise-class storage, or based on application owner requests. This cannot fully account for the actual data access characteristics. The present invention provides a method to identify better data placement situations using a structured right-tiering process. The invention supports a different and cheaper initial data placement for applications. For example, the initial data placement for all applications may be carried out on tier 2 storage media, and the invention will then support the re-tiering of part or all of this data based on the actual behavior of the overall virtualized storage infrastructure.

[0042] To this end, in addition to the metadata used for tracking the mapping of managed disk extents to virtual disks, the access rate to each extent is also monitored. When data is read from or written to any given extent, the access count in the metadata is updated.

[0043] The I/O flow will now be described with reference to Figure 3. As shown in Figure 3, the virtualization engine of node 110 includes the following modules: SCSI front end 302, storage virtualization 310, SCSI back end 312, storage manager 314, and event manager 316.

[0044] The SCSI front end layer receives I/O requests from the hosts; performs LUN mapping (i.e., between the LBAs of virtual disks A and C and logical unit numbers (LUNs) (or extents)); and converts SCSI read and write commands into the node's internal format. The SCSI back end processes the requests to managed disks sent to it by the virtualization layer above, and addresses the commands to the RAID controllers. [0045] The I/O stack may further include other modules (not shown), such as remote copy, flash copy, or caching. Caching is usually present at both the virtualization engine and the RAID controller levels.

[0046] The node shown in Figure 3 belongs to the I/O group to which VDisks A and B are assigned. This means that the node presents the interface for VDisks A and B to the hosts. Managed disks 1, 2 and 3 may also correspond to other virtual disks assigned to other nodes.

[0047] The event manager 316 manages metadata 318, which comprises the mapping information for each extent as well as tier level data and access values for the extents. This metadata is also available to the virtualization layer 310 and the storage manager 314.

[0048] Consider now the receipt of a write request 350 from a host, the write request 350 comprising the ID of the virtual disk to which the request is directed and the LBA to which the write should be performed. On receipt of the write request, the front end converts the specified LBA into an extent ID (LUN) of the virtual disk; suppose this is extent 3 of VDisk A (A-3). The virtualization component 310 uses the metadata, in the form of the mapping table shown in Figure 2, to map extent A-3 to extent 6 of MDisk 2 (M2-6). The write request is then passed via the SCSI back end 312 to the controller concerned with MDisk 2, and the data is written to extent M2-6. The virtualization layer sends a message 304 to the event manager, indicating that a write to extent 6 of MDisk 2 has been requested. The event manager then updates the metadata for extent M2-6 to indicate that this extent is now full. The event manager also updates the access value in the metadata for this extent. This may be done by resetting a count value in the metadata, or by storing the time at which the write occurred as the access value. The event manager returns a message 304 to the virtualization component, indicating that the metadata has been updated to reflect the write operation.

[0049] The storage tiering analyzer (START) manager component used for right-tiering, which enables right-tiering actions, will now be described with reference to Figure 4. START performs an analysis of the SAN activity in order to identify situations that warrant right-tiering actions, and prepares an appropriate VDisk migration action list. First, the data collector 401 acts as a storage resource manager by periodically collecting the topology data held by the virtualization engine as well as the access activity of each LUN and VDisk. This can include write and read activity counts, response times, and other monitoring data. This can also include back-end and front-end activity data of the virtualization engine, as well as internal measurements such as queue levels. The data collector periodically (the preferred period is typically every 15 minutes) inserts this series of data into its local repository, and stores it for a longer period of time (typically 6 months).

[0050] The data aggregator 402 processes SAN data covering a longer period of time (say one day, e.g. 96 samples of 15 minutes each) by accessing the data collector repository (using mechanisms such as batch reports), and produces aggregated values, including minimum, maximum, average, shape factors, etc., for the VDisks and MDGs managed by the virtualization engine of the SAN.
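The aggregation step of paragraph [0050] can be sketched as follows; the peak-to-average ratio used here is one possible "shape factor", chosen as an assumption since the patent does not define the shape factors:

```python
# Sketch of the data aggregator of [0050]: collapse a day of 15-minute
# samples (96 per day) into summary statistics for one VDisk.

def aggregate(samples):
    """samples: list of per-interval read I/O rates for one VDisk."""
    avg = sum(samples) / len(samples)
    return {
        "min": min(samples),
        "max": max(samples),
        "avg": avg,
        # one possible "shape factor": peak-to-average ratio of the day
        "peak_to_avg": max(samples) / avg if avg else 0.0,
    }

day = [50] * 90 + [500] * 6   # a mostly quiet VDisk with a short burst
stats = aggregate(day)        # avg 78.125, peak_to_avg 6.4
```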

[0051] The data produced by the data aggregator can be compared with SAN model metadata 403 containing the I/O processing capability of each MDG. This I/O processing capability can be based on the disk array manufacturer's specifications, on disk array modeling activity figures (such as those produced by the Disk Magic application software), or on the generally accepted industry capability figures of the disks controlled by the RAID controller, their number, their redundancy setup, and the cache hit ratio value at the RAID controller level. Other I/O processing capability modeling algorithms can also be used.

[0052] The data produced by the data aggregator can also be compared with the total space capacity of each MDG, which can be stored in the SAN model metadata or collected from the virtualization engine.

[0053] The data analyzer component 404 performs these comparisons and raises right-tiering alerts based on thresholds set by the storage manager. These alerts cover MDGs that are not in balance with their applications and for which VDisk migration actions need to be considered.

[0054] For any MDG in alert, the data analyzer provides a drill-in view of all the VDisks hosted by the MDG, ranked by read access rate density. This view allows the immediate identification of the "hot" VDisks and the "cold" VDisks. Depending on the type of alert, this drill-in view readily points to the VDisks whose migration to another tier will resolve the MDG alert. By right-tiering these VDisks, the source MDG will see the read access rate density of the mixed workload it hosts come closer to the MDG's intrinsic capability, so that this MDG becomes better balanced with respect to its domain of use.
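The drill-in ranking of paragraph [0054] can be sketched as follows; the VDisk names and figures are invented for the example:

```python
# Rank the VDisks of an MDG by read I/O access density (read IO/s per GB)
# to surface "hot" and "cold" right-tiering candidates, as in [0054].

def read_access_density(read_iops, space_gb):
    return read_iops / space_gb

vdisks = {                    # name: (read IO/s, space in GB), invented
    "vd_db_log": (900, 100),  # 9.0 IO/s/GB  -> hot
    "vd_app": (300, 300),     # 1.0 IO/s/GB
    "vd_archive": (20, 2000), # 0.01 IO/s/GB -> cold
}

ranked = sorted(vdisks, key=lambda v: read_access_density(*vdisks[v]),
                reverse=True)
hottest, coldest = ranked[0], ranked[-1]
```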

[0055] For every MDG, the data analyzer computes the net read I/O access density as the ratio of the MDG's remaining read I/O processing capability to its remaining space capacity. A workload whose read I/O access density equals this net read I/O access density would be considered a complementary workload for the MDG in its current state.
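The net read access density calculation can be sketched as follows. This is an illustrative sketch only; the field names and units (IO/s, MB) are assumptions and not part of the patent:

```python
def net_read_access_density(mdg):
    """Net read I/O access density of an MDG in its current state.

    mdg is a dict with assumed keys (capacities in MB, rates in IO/s):
      read_capability -- maximum read I/O rate the MDG can sustain
      read_rate_used  -- read I/O rate consumed by hosted V-disks
      space_capacity  -- total space of the MDG
      space_allocated -- space already allocated to V-disks
    """
    remaining_read = mdg["read_capability"] - mdg["read_rate_used"]
    remaining_space = mdg["space_capacity"] - mdg["space_allocated"]
    if remaining_space <= 0:
        return 0.0  # no room left, so no complementary workload fits
    return remaining_read / remaining_space

# A workload whose read access density equals this value is complementary
# to the MDG in its current state.
example = {"read_capability": 5000.0, "read_rate_used": 3000.0,
           "space_capacity": 100000.0, "space_allocated": 60000.0}
density = net_read_access_density(example)  # 2000 IO/s over 40000 MB = 0.05
```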

[0056] Depending on the type of alert, a list of V-disk migration actions, composed of "hot" or "cold" V-disks, is prepared by the data analyzer component and can be passed to the virtualization engine for execution in the SAN, as shown at 405, either automatically or after validation by the storage manager.

[0057] The following algorithm can be used to determine the target MDG to which a particular V-disk should be re-tiered. First, the MDGs whose remaining space capacity or read I/O processing capability is insufficient to accommodate the V-disk footprint (the footprint being the space and read I/O requirements of that V-disk) are eliminated as possible targets. Then, the MDG whose net read I/O access density is closest to the V-disk's read I/O access density is selected (i.e., the V-disk's workload profile is complementary to that MDG in its current state). This operation is repeated for the V-disks of the MDG under alert until the cumulative relative weight of the re-tiered V-disks resolves the alert, and it is likewise repeated for every MDG under alert. Other algorithms can be considered to assist in the alert resolution process.
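A sketch of this target-selection algorithm follows. The dictionary fields are illustrative assumptions; the patent does not prescribe a data model:

```python
def pick_target_mdg(vdisk, mdgs):
    """Choose a target MDG for re-tiering one V-disk.

    vdisk: dict with 'space' (MB) and 'read_rate' (IO/s) -- its footprint.
    mdgs:  list of candidate MDG dicts (same assumed keys as elsewhere).
    Returns the MDG whose net read access density is closest to the
    V-disk's own read access density, or None if no candidate can host it.
    """
    v_density = vdisk["read_rate"] / vdisk["space"]
    candidates = []
    for m in mdgs:
        free_space = m["space_capacity"] - m["space_allocated"]
        free_read = m["read_capability"] - m["read_rate_used"]
        # Step 1: eliminate MDGs that cannot absorb the V-disk footprint.
        if free_space < vdisk["space"] or free_read < vdisk["read_rate"]:
            continue
        net_density = free_read / free_space
        candidates.append((abs(net_density - v_density), m))
    if not candidates:
        return None
    # Step 2: the closest net density is the most complementary host.
    return min(candidates, key=lambda pair: pair[0])[1]
```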

[0058] Figure 5 illustrates the three-dimensional model used in a particular embodiment of the present invention. In an embodiment based on the IBM® TotalStorage® SAN Volume Controller (SVC), the back-end storage services are provided by "managed disk groups" (MDGs), which federate a set of managed disks (LUNs) hosted on storage arrays and accessed in striped mode through the SVC layer. The front-end storage services, as seen by the data processing hosts, are provided by V-disks. The mixed workload of multiple V-disks, for example all the V-disks hosted in a given MDG, can also be described in terms of this three-dimensional model.

[0059] Figure 6 illustrates the two main domains of use of a storage service such as a RAID array, an MDG, a LUN, or a V-disk.

[0060] The first domain is the functional domain of the storage service. It lies within the boundaries of the total space of the storage pool (in megabytes), its maximum read I/O rate processing capability, and its maximum acceptable response time as defined by the storage administrator.

[0061] The second domain is the economic domain of use of the storage service. This is a reduced area, within the previous domain, located close to the maximum read I/O capability and storage space boundaries while remaining within the acceptable response time limit.

[0062] Figures 7A-7D provide illustrated examples of workload situations within the two domains of use.

[0063] In Figure 7A, the data occupies the full storage capacity, the I/O processing capability is well used, and the write response time values are not a problem. There is a good match between the data placement and the storage pool.

[0064] In Figure 7B, the I/O processing capability is almost fully used, the storage capacity is only very partially allocated, and the write I/O response time values are not a problem. Further capacity allocation is likely to create an I/O bottleneck. Moving selected data to a storage pool with a higher I/O capability would be appropriate.

[0065] In Figure 7C, the data occupies almost the full storage capacity, the I/O processing capability is under-utilized, and the write I/O response time values are not a problem. There is an opportunity to use a storage pool with a lower I/O processing capability, which is likely to be more economical.

[0066] In Figure 7D, the storage capacity is almost fully allocated and the I/O processing capability is well balanced; however, the write I/O response time values are too high. Before deciding on any action, there is a need to assess whether the high response time values put the workload SLA (typically a batch elapsed time) at risk.

[0067] Figure 8 introduces the read I/O rate access density factor, which can be evaluated for a storage device (in terms of its capacity) or for a data workload such as an application or part of an application (hosted in one or several V-disks). The following equations provide additional details.

[0068] • For an MDG: maximum access density = I/O processing capability / total storage capacity

[0069] • For an application: maximum access density = actual maximum I/O rate / allocated storage space

[0070] • For a V-disk: maximum access density = actual maximum I/O rate / allocated storage space

[0071] The read I/O rate access density is measured in IO/s per megabyte, and its algebra is easily understood using a heat metaphor in which high-access-density applications are "hot" storage workloads and low-access-density applications are "cold" storage workloads. As illustrated in Figures 9 and 10, the weighted heat equations that apply to lukewarm water (hot + cold) apply to "hot" and "cold" data workloads.
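The "lukewarm water" algebra mentioned above amounts to a space-weighted average: mixing a hot and a cold workload yields an aggregate density the way mixing hot and cold water yields an intermediate temperature. A minimal sketch, with IO/s and MB as assumed units:

```python
def mixed_access_density(workloads):
    """Access density of an aggregate workload, in IO/s per MB.

    workloads: list of (read_rate, space) pairs. The aggregate density
    is the space-weighted average of the individual densities -- the same
    algebra as mixing water, with density playing the role of temperature
    and space the role of mass.
    """
    total_io = sum(rate for rate, _ in workloads)
    total_space = sum(space for _, space in workloads)
    return total_io / total_space

hot = (900.0, 1000.0)   # density 0.9  -> a "hot" workload
cold = (100.0, 9000.0)  # density ~0.011 -> a "cold" workload
blend = mixed_access_density([hot, cold])  # (900 + 100) / (1000 + 9000) = 0.1
```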

[0072] If the aggregate workload of all the V-disks hosted in an MDG is "close to" the MDG's theoretical access density and the MDG capacity is almost fully used, the MDG is operating within its economic domain. The invention proposes a process for optimizing MDG usage by exchanging workloads with other MDGs of different access densities. A preferred embodiment of the invention uses the read I/O rate density to classify MDG capacity into the various tiers. The MDGs hosted on tier 1 RAID controllers have the highest read I/O rate density of all MDGs, while the MDGs with the lowest read I/O rate access density belong to the lower-grade tiers (typically tiers 3-5, depending on the tier grouping of the virtualized infrastructure).

[0073] A preferred embodiment of the invention is implemented through the data analyzer, which issues alerts based on thresholds defined by the storage administrator. Three different alerts are listed below:

[0074] 1. Storage capacity almost fully allocated: in this case, the managed disk group capacity allocated to V-disks approaches (as a percentage) the MDG storage capacity.

[0075] 2. I/O capability almost fully used: in this case, the maximum read I/O rate on the back-end disks (the managed disk group) approaches (as a percentage) the theoretical maximum.

[0076] 3. "High" response time values: in this case, the number of write commands held in the SVC cache is "significant" (as a percentage) when compared with the total number of write commands. This phenomenon reveals an increase in write response times, which may put the SLA targets of batch workloads at risk.
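The three alerts above can be expressed as a small check whose threshold percentages are left to the storage administrator. The field and alert names here are illustrative assumptions:

```python
def mdg_alerts(mdg, thresholds):
    """Evaluate the three re-tiering alerts for one MDG.

    thresholds: dict of fractions set by the storage administrator, e.g.
      {"capacity": 0.90, "read": 0.75, "cache_delay": 0.10}
    mdg: dict with the assumed fields used below.
    Returns the list of alert names raised.
    """
    alerts = []
    # 1. Storage capacity almost fully allocated.
    if mdg["space_allocated"] / mdg["space_capacity"] > thresholds["capacity"]:
        alerts.append("capacity-almost-full")
    # 2. Read I/O capability almost fully used.
    if mdg["read_rate_used"] / mdg["read_capability"] > thresholds["read"]:
        alerts.append("io-capability-almost-full")
    # 3. "High" response time: cache-held writes are a significant
    #    fraction of total write I/O.
    if mdg["write_delayed_rate"] / mdg["write_rate"] > thresholds["cache_delay"]:
        alerts.append("high-write-response-time")
    return alerts
```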

[0077] Figure 11 shows these three alert thresholds as they relate to the MDG domains of use.

[0078] The driving principles of storage pool optimization are as follows:

[0079] 1. If the "allocated capacity" is close to the "maximum capacity" and the "read I/O activity" is significantly lower than the "read I/O capability", then the "read I/O capability" is not fully balanced. In that case, the application data with the lowest access rate density must be removed from the discrete virtual storage pool (i.e., the MDG) in order to free space for application data with a higher access rate density. The removed application data with the lowest access rate density should be sent to a storage pool with a lower read access rate density capability. This process is called "down-tiering".

[0080] 2. If the "read I/O activity" is close to the "read I/O capability" and the "allocated capacity" is significantly lower than the "maximum capacity", the capacity of the storage pool is unbalanced and adding more application data is likely to create undesirable performance limitations. Handling this situation requires removing the application data with the highest access rate density from the storage pool in order to free read I/O capability. This capacity will later be used to host application data with a lower access rate density. The removed application data (with the highest access rate density) may need to be sent to a storage pool with a higher "read I/O density capability". This process is called "up-tiering".

[0081] 3. When the write cache fills up, the "write response time" values increase, and this can put the application service level agreements (SLAs) at risk. In this case, it is necessary to perform a trend analysis in order to project future "write response time" values and assess whether they will endanger the application SLAs. If this is the case, the relevant application data (V-disks) must be "up-tiered" to a storage pool with a higher write I/O capability. If the SLAs are not at risk, the application data placement in its current storage pool must remain unchanged.

[0082] 4. If a storage pool is in an intermediate state in which its space is not fully allocated and its read I/O activity is not close to the "read I/O capability", no action needs to be considered. Even if a hot workload is present in the MDG, its behavior may be balanced by a cold workload, resulting in an average workload within the MDG's capability. This opportunistic situation significantly reduces the hypothetical amount of right-tiering that a micro-analysis approach might inappropriately recommend.

[0083] 5. If the "read I/O activity" is close to the "read I/O capability" and the "allocated capacity" is almost equal to the "maximum capacity", the storage pool capacity is well balanced as long as the "write response time" values remain within acceptable limits, the two alerts compensating each other.

[0084] 6. When determining which V-disks should be right-tiered, the absolute actual read I/O rate of a V-disk cannot be used "as is", because of the caching that occurs at the virtualization engine level. This cache allows read I/O requests to be served to the front-end data processors without incurring back-end read commands. The method of the invention uses the relative read I/O rate activity of each V-disk, compared with the aggregate front-end workload hosted in the MDG, in order to rank the V-disks between "hot" and "cold" and to make the actual re-tiering decisions.

[0085] It will be clear to one of ordinary skill in the art that the method of the present invention may suitably be embodied in a logic apparatus comprising means for performing the steps of the method, and that such logic means may comprise hardware components or firmware components.

[0086] Implementation of this optimization approach can be supported by a microprocessor supporting a process flow such as the one now described with reference to Figure 12.

[0087] Step 1200 tests whether the allocated storage capacity is greater than 90% of the total capacity of the managed disk group, where the storage administrator can set the threshold (90%) according to local policy.

[0088] If the result is No, a test is performed (step 1202) to determine whether the actual read I/O rate is greater than 75% of the MDG's read I/O capability, where the storage administrator can set the threshold (75%) according to local policy.

[0089] - If the result is No, meaning the pool is in an intermediate state, no further action is taken and the process goes to step 1216.

[0090] - If the result of test 1202 is Yes, meaning the aggregate workload is already using a high percentage of the read I/O capability without consuming all the space, there is a high probability that adding another workload would saturate the read I/O capability and put the workload SLAs under stress. Therefore, an up-tiering operation is recommended at step 1206. Next, at step 1208, up-tiering is performed by selecting the V-disk with the highest access density currently hosted in the MDG, and up-tiering it to another MDG for which this V-disk is a good complementary workload. After this V-disk right-tiering operation, the source MDG will see its actual read access rate density decrease and move closer to its intrinsic capability, so that the MDG is better balanced with respect to its domain of use.

[0091] The process then goes to step 1216.

[0092] Returning to the test performed at step 1200: if the result is Yes, a test similar to step 1202 is performed.

[0093] - If the result is Yes, meaning the aggregate workload is using a high percentage of the read I/O capability and consuming most of the space, the MDG is operating in its economic neighborhood; no further action is taken and the process stops.

[0094] - If the result is No, meaning the read I/O capability is under-utilized while most of the space is already consumed, then the MDG's read I/O capability is likely to remain under-utilized. The V-disks in this MDG would be hosted more economically on a lower-tier MDG. Therefore, a down-tiering operation is recommended at step 1212. Next, at step 1214, down-tiering is performed by selecting the V-disk with the lowest access density in the MDG, and down-tiering it to another MDG for which this V-disk is a good complementary workload. After this V-disk right-tiering operation, the source MDG will see its actual read access rate density increase and move closer to its intrinsic capability, so that the MDG is better balanced with respect to its domain of use. The process then goes to step 1216.

[0095] Finally, at step 1216, the available MDG storage space is assigned to other workloads with complementary access density profiles, and the process returns to step 1200 to analyze the next MDG. When all the MDGs have been analyzed, the process waits until the next evaluation period and then restarts at 1200 with the first MDG of the list.
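The decision logic of steps 1200-1214 can be sketched as follows. The 90% and 75% defaults mirror the thresholds named in the text; everything else (field names, return labels) is an illustrative assumption:

```python
def right_tiering_decision(mdg, cap_threshold=0.90, read_threshold=0.75):
    """Per-MDG decision of the Figure 12 flow (a sketch).

    Returns one of: "up-tier-hottest", "down-tier-coldest",
    "economic-domain", "intermediate-state".
    """
    # Step 1200: is the allocated space above the capacity threshold?
    space_full = mdg["space_allocated"] > cap_threshold * mdg["space_capacity"]
    # Steps 1202/1204: is the read rate above the capability threshold?
    io_busy = mdg["read_rate_used"] > read_threshold * mdg["read_capability"]
    if not space_full:
        if io_busy:
            return "up-tier-hottest"    # steps 1206/1208
        return "intermediate-state"     # no action, go to step 1216
    if io_busy:
        return "economic-domain"        # well balanced, stop
    return "down-tier-coldest"          # steps 1212/1214
```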

[0096] The analysis/alert method can be integrated into a repeatable storage management process as a periodic monitoring task. For example, every day, a system implementation of the method can produce a storage management dashboard that reports, for each MDG, the actual values against the capability and capacity, and the write response time situation, with highlighted alerts where applicable. The dashboard would be accompanied by a drill-down view of the behavior of the V-disks hosted by each MDG, ranked by read I/O access rate density, and possibly by a list of right-tiering moves to be evaluated by the storage administrator for delivery to the virtualization engine.

[0097] Figure 13 shows a flowchart of the analysis/alert method for taking care of the write I/O quality of service aspects. In this figure, the write I/O response time trigger is replaced by another write I/O rate indicator. This indicator is based on the ratio between the front-end write-cache-delayed I/O rate and the total write I/O rate. Write-cache-delayed I/O operations are write I/O operations held in the write cache of the virtualization engine because the back-end storage pool cannot accept them due to saturation. When the amount of write-cache-delayed I/O operations reaches a significant percentage of the total write I/O activity, the front-end applications are likely to slow down and their response times to increase. The use of this indicator as a re-tiering alert is another embodiment of the invention.

[0098] At step 1300, a test is performed to check whether the front-end write-cache-delayed I/O rate has reached a threshold, where the threshold is set by the storage administrator according to local policy.

[0099] If the result is No, the process goes to step 1320.

[0100] If the result is Yes, then at step 1302 the V-disks causing the alert are traced back to the applications using those V-disks. Next, at step 1303, the values of the application batch elapsed time [A] and of the batch elapsed time SLA target [T] are collected. These values are provided externally to the invention through application performance indicators, typically under the responsibility of the IT operations staff. Next, at step 1304, a new test checks whether the application SLA (typically a batch elapsed time objective) is at risk, by comparing A and T against a safety threshold level.
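The step 1304 comparison can be sketched as a simple margin test. The 10% safety margin below is an assumed example, since the patent leaves the safety threshold level to the administrator:

```python
def sla_at_risk(actual_elapsed, target_elapsed, safety_margin=0.10):
    """Step 1304 check: is the batch SLA at risk?

    A batch whose actual elapsed time [A] is within the safety margin of
    its SLA target [T] is considered at risk. The 0.10 margin is an
    assumption for illustration, not a value prescribed by the patent.
    """
    return actual_elapsed >= (1.0 - safety_margin) * target_elapsed
```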

[0101] If the result is No, meaning A is significantly lower than T, then the observed high response time values are not significant for the batch duration; no further action is taken at step 1306 and the process goes to step 1320.

[0102] If the result is Yes, meaning A is close to T, then at step 1308 a trend analysis of the write I/O response time and write I/O rate values is performed, using for example TPC graphical reports as one embodiment.

[0103] Processing continues at step 1310, where a new test is performed to check whether the total time the application spends waiting for write I/O operations is increasing (this total write time is equal to the sum, over all sampling periods, of the product of the write I/O response time and the write I/O rate of all the V-disks under alert):

[0104] - If the result is No, meaning the total time the application spends waiting for write I/O operations during the batch run does not increase over time, and therefore does not degrade the batch duration SLA, then no further action is taken at step 1312 and processing continues at step 1320.

[0105] - If the result is Yes, meaning the total time the application spends waiting for write I/O operations during the batch run is increasing and may put the batch duration at risk, the process goes to step 1314, where the trend analysis results are used to extrapolate (for example using linear modeling) the future batch duration values.

[0106] Processing continues at step 1316 to check whether the SLA target (T) is at risk in the short-term future. If the result is No, the process goes to step 1312; otherwise, if the result is Yes, the process goes to step 1318 to up-tier some (or all) of the V-disks creating the application SLA risk to an MDG with a higher I/O capability.
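The step 1310 quantity, the total write-wait time per sampling period, can be sketched as below. The first-versus-last comparison stands in for the trend analysis, which the patent delegates to tools such as TPC reports:

```python
def write_wait_trend(samples):
    """Step 1310 check: is the total write-wait time increasing?

    samples: one list per sampling period, each holding
    (write_response_time_s, write_io_rate_io_s) pairs for all the
    V-disks under alert. The total write time of a period is the sum
    of the response_time * rate products.
    Returns (per_period_totals, increasing_flag); the flag is a simple
    first-vs-last comparison standing in for a real trend fit.
    """
    totals = [sum(rt * rate for rt, rate in period) for period in samples]
    increasing = len(totals) >= 2 and totals[-1] > totals[0]
    return totals, increasing
```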

[0107] Finally, at step 1320, the available MDG storage capacity is assigned to other workloads with complementary access density profiles, and the process returns to step 1300 to analyze the next MDG. When all the MDGs have been analyzed, the process waits until the next evaluation period and then restarts at 1300 with the first MDG of the list.

[0108] The analysis/alert methods described in Figures 12 and 13 can also be used to characterize new workloads whose I/O profiles are unknown. Such a workload can be hosted in a "nursery" MDG for some period (for example one month) so that its I/O behavior can be measured and enough behavioral data collected. After this period, the application V-disks can be right-tiered based on the space requirements, read I/O requirements, and read I/O density values provided by the data analyzer component. This "nursery" process can replace, at low cost, the complex storage performance estimation exercise otherwise needed before deciding which storage tier, and which MDG, is most appropriate. Future changes in application behavior would then be handled by the periodic monitoring task, ensuring the alignment of application needs and storage infrastructure without requiring the costly intervention of storage engineers.

[0109] In an alternative embodiment, the analysis/alert method of the invention can be used to relocate application data when a back-end disk array connected to the virtualized storage infrastructure needs to be decommissioned. In this case, the data available to the data analyzer component can be used to decide which storage tier should be used for each logical storage unit and which discrete virtual storage pool (e.g., MDG) is most appropriate for each logical storage unit.

[0110] In another embodiment, the analysis/alert method of the invention can be used to relocate application data when a disk array not connected to the virtualized storage infrastructure needs to be decommissioned. In this case, before relocating the virtual logical storage units to other discrete virtual storage pools, the disks may be connected to the virtualized storage infrastructure and subjected to the nursery characterization process. Alternatively, the process may consist of using pre-existing performance data collected on the disk array, and re-installing the application on the virtualized storage infrastructure using the data provided by the data analyzer component.

[0111] It will be appreciated by those skilled in the art that, although the invention has been described with respect to the foregoing example embodiments, the invention is not limited thereto, and there are many possible variations and modifications that fall within the scope of the invention.

[0112] The scope of the present invention includes any novel feature or combination of the features disclosed herein. The applicant hereby gives notice that new claims may be formulated to such features or combinations of features during the prosecution of the present application or of any further application derived therefrom. In particular, with reference to the appended claims, the features of the respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the claims.

[0113] For the avoidance of doubt, the term "comprising", as used herein throughout the description and claims, is not to be interpreted as meaning "consisting only of". It will be appreciated by those skilled in the art that, although the invention has been described in terms of the foregoing example embodiments using SAN Volume Controller terminology, the invention is not limited thereto, and there are many possible ways of describing an MDG or a V-disk. For example, an MDG may be called a storage pool, a virtual storage pool, or a discrete virtual storage pool, and a V-disk may be called a virtual storage logical unit.

Claims (15)

1. 一种用于管理网络中的数据存储的方法,所述网络包括通过存储虚化拟引擎与多个物理存储介质耦接的多个主机数据处理器,所述存储虚拟化引擎包括用于在虚拟盘(V盘) 到被管理盘(M盘)之间映射的映射单元,其中相同层级等级的多个被管理盘被分组以便形成离散虚拟存储池(MDG),所述方法包括:•存储描述每个离散虚拟存储池的空间容量和量化读取I/O能力的元数据;•周期性地从虚拟化引擎中采集关于虚拟盘的存储使用、读取I/O和写入I/O活动的fn息;•聚集采集的信息;•将聚集的数据与每个离散虚拟存储池的元数据相比较;以及•根据比较步骤的结果,基于阈值达到来生成虚拟盘的重新排列层级动作的列表。 1. A data storage method for managing a network, the network comprising a virtual storage by a plurality of host data processors proposed engine and coupled to the plurality of physical storage media, the storage virtualisation engine comprising means for a virtual disk (V drive) to the mapping unit mapping between managed disks (M disc), wherein a plurality of the same hierarchy level are managed disks are grouped to form discrete virtual storage pool (the MDG), the method include: • capacity and storage space quantization description of each discrete virtual storage pool read I / O capabilities of the metadata; • periodically on the acquisition memory using the virtual disk from the virtualization engine, the read I / O write and I / fn O activity information; • aggregate information collected; • aggregated metadata with each discrete data comparing the virtual storage pool; and • according to the result of the comparing step, based on a threshold for soon rearranged to generate a virtual disk level operation list of.
2.如权利要求1所述的方法,其中采集的读取和写入I/O信息能够为速率访问、响应时间、后端和/或前端活动数据、和/或队列等级之一。 2. The method according to claim 1, wherein the read and write the acquired I / O access to information can be a rate, response time, a rear end and / or distal activity data, and / or one queue level.
3.如权利要求1或2所述的方法,其中采集步骤还包括以下步骤:在各个时间时段将采集的信息存储到本地存储库。 The method according to claim 12, wherein the collecting step further comprises the step of: in each time period the collected information is stored in the local repository.
4.如权利要求1至3中任意一项所述的方法,其中聚集的数据包括V盘的最小、最大、 平均、形状因素的值。 4. A method according to any one of claims 1 to 3, wherein the data comprises a minimum aggregate V disk, maximum, average value of the shape factor.
5.如权利要求1至4中任意一项所述的方法,其中比较步骤还包括以下步骤:检验分配的存储容量是否大于预定义的容量阈值。 1 to 5. A method according to any one of claim 4, wherein the comparing step further comprises the step of: testing whether the storage capacity allocated capacity greater than a predefined threshold.
6.如权利要求5所述的方法,其中将预定义的容量阈值设置到离散虚拟存储池的总容量的90%。 6. The method according to claim 5, wherein the predefined capacity threshold set to 90% of the total capacity of discrete virtual storage pool.
7.如权利要求5或6所述的方法,还包括以下步骤:检验实际读取I/O速率是否大于预定义的能力阈值。 7. The method of claim 5 or claim 6, further comprising the step of: testing whether the threshold capacity actually read I / O rate is greater than predefined.
8.如权利要求8所述的方法,其中将预定义的能力阈值设置到读取I/O能力的75%。 8. The method according to claim 8, wherein the predefined threshold is set to the ability to read I / 75% O capability.
9.如权利要求1至8中任意一项所述的方法,其中比较步骤还包括以下步骤:检验写入高速缓存延迟I/O速率是否大于实际写入I/O速率值的预定义的百分比阈值。 The percentage delayed test write cache I / O rate is greater than a predefined actual write I / O rate value of: 1 to 9. The method according to any one of claim 8, wherein the comparing step further comprises the step of threshold.
10.如权利要求1至9中任意一项所述的方法,其中由存储管理员建立阈值。 10. The method to any one of claim 9, wherein establishing the threshold value by the storage administrator.
11.如权利要求1至10中任意一项所述的方法,其中生成重新排列层级动作的列表的步骤还包括以下步骤:生成包括虚拟存储池能力、容量、实际使用以及发出的警报的存储池仪表板。 1 to 11. The method according to any one of claim 10, wherein generating hierarchy list reordering operation further comprises the steps of: generating a virtual storage pool the storage pool capacity, the capacity of the actual use and an alarm emitted Dashboard.
12.如权利要求1至11中任意一项所述的方法,其中生成重新排列层级动作的列表的步骤还包括以下步骤:生成以相关读取I/O速率密度排列的V盘的深入视图。 12. The method of any one of 1 to claim 11, wherein generating hierarchy list reordering operation further comprises the steps of: generating V-depth view of the disc to read the associated I / O rate density arrangement.
13. 一种用于管理网络中的数据存储的系统,所述网络包括通过存储虚化拟引擎与多个物理存储介质耦接的多个主机数据处理器,所述存储虚拟化引擎包括用于在虚拟盘(V 盘)到被管理盘(M盘)之间映射的映射单元,相同层级等级的多个被管理盘被分组以便形成离散虚拟存储池(MDG),所述系统包括用于实施权利要求1至12任意一项的方法的步骤的部件。 13. A data storage management system for a network, the network comprising a virtual storage by a plurality of host data processors proposed engine and coupled to the plurality of physical storage media, the storage virtualisation engine comprising means for a virtual disk (V drive) to the mapping unit mapping between managed disks (M disc), a plurality of the same hierarchy level are managed disks are grouped to form discrete virtual storage pool (the MDG), for implementing the system comprising member of a process according to any of claims 12 to steps.
14. 一种计算机程序,包括用于当在合适的计算机设备上执行所述计算机程序时执行如权利要求1至12中任意一项的方法的步骤的指令。 14. A computer program comprising instructions for performing the steps of claim 1 to 12. The method of any one of claims when said computer program is executed on a suitable computer apparatus.
15. 一种计算机可读介质,在其上具有编码如权利要求14所述的计算机程序。 15. A computer-readable medium having encoded thereon a computer program according to claim 14.
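As a rough illustration of the threshold checks recited in claims 5 to 9 and the alert generation of claim 11, the logic might be sketched as follows. This is not the patented implementation; all class and function names are invented, the 90% and 75% defaults come from claims 6 and 8, and the 10% write-delay percentage threshold is an assumed example value (claim 9 leaves it predefined but unspecified).

```python
# Illustrative sketch of the per-pool threshold checks (claims 5-9) that
# would feed the storage-pool dashboard of claim 11. Names and the 10%
# write-delay default are assumptions, not the patent's implementation.
from dataclasses import dataclass


@dataclass
class VirtualStoragePool:
    name: str
    total_capacity_gb: float
    allocated_capacity_gb: float
    read_io_rate: float                 # observed reads/s
    read_io_capability: float           # sustainable reads/s for this tier
    write_io_rate: float                # observed writes/s
    write_cache_delayed_io_rate: float  # writes/s delayed by a full cache


def check_pool(pool: VirtualStoragePool,
               capacity_threshold: float = 0.90,    # claim 6: 90% of total
               capability_threshold: float = 0.75,  # claim 8: 75% of capability
               delay_pct_threshold: float = 0.10    # assumed value (claim 9)
               ) -> list[str]:
    """Return the alerts raised for one discrete virtual storage pool."""
    alerts = []
    # Claims 5/6: allocated capacity vs. predefined capacity threshold.
    if pool.allocated_capacity_gb > capacity_threshold * pool.total_capacity_gb:
        alerts.append(f"{pool.name}: allocated capacity above "
                      f"{capacity_threshold:.0%} of total capacity")
    # Claims 7/8: actual read I/O rate vs. predefined capability threshold.
    if pool.read_io_rate > capability_threshold * pool.read_io_capability:
        alerts.append(f"{pool.name}: read I/O rate above "
                      f"{capability_threshold:.0%} of read I/O capability")
    # Claim 9: write-cache-delayed I/O rate vs. percentage of write I/O rate.
    if (pool.write_io_rate > 0 and
            pool.write_cache_delayed_io_rate
            > delay_pct_threshold * pool.write_io_rate):
        alerts.append(f"{pool.name}: write-cache delayed I/O exceeds "
                      f"{delay_pct_threshold:.0%} of write I/O rate")
    return alerts
```

A pool breaching all three thresholds would contribute three entries to the dashboard's alert list, while a lightly loaded pool contributes none.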
CN2010800102363A 2009-03-02 2010-01-12 Method, system and computer program product for managing the placement of storage data in a multi tier virtualized storage infrastructure CN102341779A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP09305191 2009-03-02
EP09305191.0 2009-03-02
PCT/EP2010/050254 WO2010099992A1 (en) 2009-03-02 2010-01-12 Method, system and computer program product for managing the placement of storage data in a multi tier virtualized storage infrastructure

Publications (1)

Publication Number Publication Date
CN102341779A true CN102341779A (en) 2012-02-01



Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010800102363A CN102341779A (en) 2009-03-02 2010-01-12 Method, system and computer program product for managing the placement of storage data in a multi tier virtualized storage infrastructure

Country Status (3)

Country Link
EP (1) EP2404231A1 (en)
CN (1) CN102341779A (en)
WO (1) WO2010099992A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8341350B2 (en) 2010-09-21 2012-12-25 Lsi Corporation Analyzing sub-LUN granularity for dynamic storage tiering
US8671263B2 (en) 2011-02-03 2014-03-11 Lsi Corporation Implementing optimal storage tier configurations for a workload in a dynamic storage tiering system
CN102520887A * 2011-12-19 2012-06-27 中山爱科数字科技股份有限公司 Storage space configuration and management method applied to cloud computing
GB2506164A (en) * 2012-09-24 2014-03-26 Ibm Increased database performance via migration of data to faster storage
WO2016068976A1 (en) * 2014-10-31 2016-05-06 Hewlett Packard Enterprise Development Lp Storage array allocator
GB2533405A (en) 2014-12-19 2016-06-22 Ibm Data storage resource assignment
CN105007330B * 2015-08-04 2019-01-08 University of Electronic Science and Technology of China Modeling method for the storage resource scheduling model of a distributed stream data storage system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1945520A (en) * 2005-10-04 2007-04-11 株式会社日立制作所 Data management method in storage pool and virtual volume in dkc
CN101027668A (en) * 2004-07-21 2007-08-29 海滩无极限有限公司 Distributed storage architecture based on block map caching and VFS stackable file system modules
US20080147960A1 (en) * 2006-12-13 2008-06-19 Hitachi, Ltd. Storage apparatus and data management method using the same
US20080301763A1 (en) * 2007-05-29 2008-12-04 Hitachi, Ltd. System and method for monitoring computer system resource performance

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5345584A (en) 1991-03-11 1994-09-06 Laclead Enterprises System for managing data storage based on vector-summed size-frequency vectors for data sets, devices, and residual storage on devices
GB0514529D0 (en) 2005-07-15 2005-08-24 Ibm Virtualisation engine and method, system, and computer program product for managing the storage of data

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105739911A (en) * 2014-12-12 2016-07-06 华为技术有限公司 Storage data allocation method and device and storage system
CN105739911B (en) * 2014-12-12 2018-11-06 华为技术有限公司 Store distribution method, device and the storage system of data
US10152411B2 (en) 2014-12-12 2018-12-11 Huawei Technologies Co., Ltd. Capability value-based stored data allocation method and apparatus, and storage system

Also Published As

Publication number Publication date
EP2404231A1 (en) 2012-01-11
WO2010099992A1 (en) 2010-09-10

Similar Documents

Publication Publication Date Title
US8825964B1 (en) Adaptive integration of cloud data services with a data storage system
JP4922496B2 (en) Method for giving priority to I / O requests
US7624241B2 (en) Storage subsystem and performance tuning method
EP1768014B1 (en) Storage control apparatus, data management system and data management method
EP1889142B1 (en) Quality of service for data storage volumes
US8380947B2 (en) Storage application performance matching
US8880835B2 (en) Adjusting location of tiered storage residence based on usage patterns
US20100050013A1 (en) Virtual disk drive system and method
US8363519B2 (en) Hot data zones
CN102473134B (en) Management server, management method, and management program for virtual hard disk
US9547459B1 (en) Techniques for data relocation based on access patterns
US7698517B2 (en) Managing disk storage media
US8549528B2 (en) Arrangements identifying related resources having correlation with selected resource based upon a detected performance status
US8341312B2 (en) System, method and program product to manage transfer of data to resolve overload of a storage system
EP1770499B1 (en) Storage control apparatus, data management system and data management method
US8112596B2 (en) Management apparatus, management method and storage management system
US7702865B2 (en) Storage system and data migration method
WO2012090247A1 (en) Storage system, management method of the storage system, and program
US8407417B2 (en) Storage system providing virtual volumes
JP5236365B2 (en) Power management in the storage array
US8566550B2 (en) Application and tier configuration management in dynamic page reallocation storage system
US20100235597A1 (en) Method and apparatus for conversion between conventional volumes and thin provisioning with automated tier management
US8935493B1 (en) Performing data storage optimizations across multiple data storage systems
US9436389B2 (en) Management of shared storage I/O resources
JP5771280B2 (en) computer system and storage management method

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)