US9772794B2 - Method and apparatus for big data cloud storage resource management - Google Patents

Method and apparatus for big data cloud storage resource management Download PDF

Info

Publication number
US9772794B2
Authority
US
United States
Prior art keywords
storage
snapshot
latency
server
fetch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US15/174,657
Other versions
US20160357443A1 (en
Inventor
Tao Li
Ruijin Zhou
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Florida Research Foundation Inc
Original Assignee
University of Florida Research Foundation Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Florida Research Foundation Inc filed Critical University of Florida Research Foundation Inc
Priority to US15/174,657 priority Critical patent/US9772794B2/en
Assigned to UNIVERSITY OF FLORIDA RESEARCH FOUNDATION, INCORPORATED reassignment UNIVERSITY OF FLORIDA RESEARCH FOUNDATION, INCORPORATED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, TAO, ZHOU, RUIJIN, DR.
Publication of US20160357443A1 publication Critical patent/US20160357443A1/en
Application granted granted Critical
Publication of US9772794B2 publication Critical patent/US9772794B2/en
Assigned to NATIONAL SCIENCE FOUNDATION reassignment NATIONAL SCIENCE FOUNDATION CONFIRMATORY LICENSE (SEE DOCUMENT FOR DETAILS). Assignors: UNIVERSITY OF FLORIDA
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/065Replication mechanisms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1448Management of the data involved in backup or backup restore
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0617Improving the reliability of storage systems in relation to availability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647Migration mechanisms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors

Definitions

  • Storage Resource Management focuses upon optimizing the efficiency and processing speed of a Storage Area Network's (SAN's) use of the available drive space.
  • automatic storage resource management where storage virtualization in data centers is used to lower maintenance labor costs.
  • Storage virtualization represents the separation of logical storage from physical storage, where data may be accessed without regard to the physical storage or heterogeneous structure.
  • virtual disks are automatically rearranged and migrated, such that the performance of storage pools can meet specific IT policy requirements (e.g. performance load balancing and capacity planning).
  • big data and cloud workloads exhibit highly fluctuating and unpredictable I/O behaviors. For instance, any user/tenant on the cloud platform can submit jobs at any time, which introduces unexpected workload surges.
  • big data applications manifest distinct I/O behaviors across different execution phases. Since these workload surges occur within a couple of hours or even minutes, they can lead to unexpected storage load imbalance.
  • the Gulati reference [3] proposes the Basil system, having both workload and device models, which can automatically balance the I/O load across storage devices. Based on these models, storage latency can be predicted and the load-balancing algorithm is performed accordingly.
  • the Basil system's storage model is built offline, which limits its usability in a real system.
  • the Gulati reference [4] proposes the Pesto system implemented in VMware® SDRS, which incorporates an online storage model (the L-Q model). This system implements a workload injector to proactively adjust the storage model online when the system is not busy. The Pesto system further includes congestion management and a cost-benefit function.
  • the Park reference [5] finds that both the Basil and Pesto systems make improper balance decisions due to the limitation of their models. Park proposes the Romano system, which makes multiple load-balancing decisions before actually migrating the virtual disks, where a simulated annealing algorithm is used to filter out the potentially incorrect decisions.
  • An embodiment of the system includes a plurality of storage pools, a virtualized server, and a centralized data center management server.
  • An embodiment of the virtualized server includes a base disk and a virtualized disk having a copy-on-write and copy-on-read snapshot process, responsive to an I/O request, that merges a previous snapshot stored on a first one of the plurality of storage pools with the base disk stored on a second one of the plurality of storage pools, while it fetches a new snapshot, to be stored on a third one of the plurality of storage pools, from the previous snapshot or the base virtual disk.
  • the system may also build a performance model, employing an exponentially weighted moving average with an adjustable sliding window to provide a quick and accurate performance prediction.
  • the system may include a hybrid management scheme, which can dynamically choose either the snapshot process or migration for the fastest performance tuning.
  • the system and method disclosed herein can provide fast storage performance tuning for virtualized storage management system software in light of the emerging multi-tenant big data cloud environment.
  • the method allows the management software to dynamically choose either the copy-on-write/read snapshots or the storage migration based on the estimated performance tuning time.
  • the method also provides a performance model for the snapshot based storage management.
  • embodiments of the system disclosed herein offer an advantage in that the system can significantly lower the performance overhead of storage management and facilitate the management process. Further, embodiments of the system are useful for the big data and cloud computing IT industries, in which such storage management is routinely performed.
  • a specific embodiment of the subject system can be referred to as the Light Storage Resource Management (LightSRM), and can work properly in a time-variant cloud environment.
  • embodiments of the system disclosed herein may use copy-on-write/read snapshots to redirect the I/O requests without moving the virtual disk.
  • the system may include a performance model specifically for snapshots.
  • the system and method may include an exponentially weighted moving average with adjustable sliding window to provide quick and accurate performance prediction.
  • the system may include a hybrid management scheme, which can dynamically choose either snapshot or migration for fastest performance tuning.
  • FIG. 1A shows a schematic of a modern data center with a multi-tenant cloud environment.
  • FIG. 1B illustrates a graphical representation of the average storage latency over time when running five ApacheTM Hadoop® virtual machines (VMs) on two storage pools.
  • FIG. 2 illustrates a graphical representation of the average storage latency when running five Hadoop® VMs; while the idle VM is scheduled to run a database job during the execution.
  • FIG. 3A shows a schematic of storage migration.
  • FIGS. 3B-C show a graphical representation of the performance when using storage migration, where FIG. 3B shows Storage Latency versus time and FIG. 3C shows Storage IOPS versus time.
  • FIG. 4A shows a schematic of snapshot.
  • FIGS. 4B-4C show a graphical representation of the performance when using snapshot, where FIG. 4B shows Storage Latency versus time and FIG. 4C shows Storage IOPS versus time.
  • FIG. 5A displays a block diagram of an embodiment of a storage resource management system (LightSRM) described herein.
  • FIG. 5B shows a schematic of an embodiment of a storage resource management system (LightSRM) described herein.
  • FIG. 6 displays a block diagram for the process of redirecting I/O requests using copy-on-write and copy-on-read snapshots.
  • FIG. 7 illustrates a graphical representation of the latency stable period, latency changes, and latency spikes in storage latency with respect to time.
  • FIG. 8 illustrates a graphical representation of the EWMA tracking for the storage latency.
  • FIG. 9 displays a timeline for the time sequence of LightSRM load balancing.
  • FIG. 10 displays a Three-Dimensional (3D) graphical representation of the transition time with respect to the working set size and I/O throughput, when using copy-on-write and copy-on-read snapshots.
  • FIG. 11 shows a 3D graphical representation of the transition time with respect to the working set size and I/O throughput, when using migration.
  • FIGS. 12A-B show bar charts for the performance impact on workloads using: (a) migration and (b) snapshots.
  • FIG. 13 illustrates a graphical representation of the distribution of Residuals with respect to count.
  • FIG. 14 illustrates a graphical representation of the distribution of Residuals with respect to count for both snapshot modeling and without snapshot modeling.
  • FIGS. 15A-B display a graphical representation of the Storage Performance Load Balance for: (a) LightSRM (snapshot) and (b) Traditional Method (migration).
  • FIG. 16 displays a graphical representation of the I/O size of PageRank during the execution with respect to time.
  • FIG. 17 shows a block diagram of the initial virtual disk placement.
  • FIG. 18A displays a graphical representation of the average latency of the storage pools with respect to time.
  • FIG. 18B displays a graphical representation of the standard deviation of the latency across all the storage pools with respect to time.
  • FIG. 19 shows a bar chart of the data traffic generated by load balance.
  • Embodiments of the subject invention relate to a lightweight and fast storage management system and method, a specific embodiment of which can be referred to as the Light Storage Resource Management (LightSRM).
  • Embodiments of the system can quickly respond to workload surges.
  • copy-on-write/read snapshots are used as basic units to tune the I/O performance.
  • the overhead of creating and merging a snapshot is much smaller than that of migrating the entire virtual disk.
  • the system can implement a performance model for the copy-on-write/read snapshot.
  • the system can deploy a real-time performance tracking mechanism, exponentially weighted moving average with adjustable sliding window, to provide an accurate latency prediction.
  • the system can include a hybrid method based on a characterization of both snapshots and migration, which can dynamically decide to use either snapshot or migration to achieve the fastest performance tuning.
  • An embodiment of the system may be implemented in a QEMU/KVM based virtualized environment.
  • the system may adopt libvirt as a communication framework to obtain the information of virtual machines and send commands to virtualized servers.
  • the system may also include the traditional storage resource management scheme in this environment for comparison purposes.
  • Experimental results show that implementation of copy-on-write/read snapshots has less transition time when the virtual disk size is large, whereas migration is faster when the virtual disk size is small enough. Further, experimental results show that embodiments of the subject snapshot method yield 90% less performance impact on the running applications. Equipped with the hybrid snapshot/migration method, LightSRM can provide lower average storage latency (91% lower in our tests) and a better-balanced storage environment (93% better) than the traditional method. Furthermore, the hybrid method generates 44% and 91% less disk I/O traffic than the snapshot and migration methods, respectively.
  • Specific embodiments of the subject system and method may include 1) a snapshot-based performance tuning method, 2) a performance model for copy-on-write/read snapshot, 3) a quick and simplified storage performance tracking and modeling scheme, and 4) a hybrid mechanism that chooses snapshot or migration online for faster performance tuning.
  • Traditional server applications including file servers, mail servers, and database servers, have relatively stable I/O behaviors. Therefore, the storage pools that host these types of applications yield predictable latency on a daily basis.
  • modern datacenters also host the multi-tenant public cloud environments and run big data workloads (see e.g., FIG. 1A ).
  • the tenants' behaviors can vary from day to day. For big data workloads, even the application itself manifests frequent I/O bursts, which leads to highly fluctuating storage latency.
  • FIG. 1B illustrates a graphical representation of the average storage latency over time when running five ApacheTM Hadoop® virtual machines (VMs) on two storage pools.
  • Table 1 shows the storage latency with its standard deviation during the execution of big data workloads. During application execution, the storage latency of one storage pool and its standard deviation are measured. As can be seen in Table 1, the standard deviation of the storage latency is much larger than the average value. This indicates that big data workloads exhibit highly fluctuating I/O behaviors even within one virtual disk. The large standard deviation also reveals the bias of using a single average value to represent the storage latency for the entire execution period. A further investigation of application behaviors reveals that they all have highly varying I/O characteristics (e.g. OIO, Read/Write Ratio, Randomness, I/O size). Therefore, using the I/O characteristics of the entire daytime to predict the storage latency of the next day is inaccurate.
  • FIG. 2 illustrates a graphical representation of the average storage latency when running five Hadoop® VMs; while the idle VM is scheduled to run a database job during the execution.
  • each tenant can activate his/her rented virtual machines to run applications at any time. Therefore, it is not possible to predict the tenant behaviors. Even with a strict job scheduling policy, it is challenging to predict what type of application the tenant is going to execute (I/O intensive or not).
  • In FIG. 2, six VMs are implemented on six storage pools. While the PageRank application runs in the background on five VMs, the remaining VM is assigned a database job at a certain time that runs for a predetermined period of time. The average storage latency of the six storage pools is measured and shown in FIG. 2.
  • Snapshot technology which is usually used to make backups and checkpoints, may almost instantly redirect I/O requests to a new location (the snapshot file). Once a snapshot is made, all the upcoming write requests will be issued to the new location immediately. The read requests will be issued to the location of the latest data. If copy-on-read technology is enabled, the data blocks will be fetched once from the old location to the new location. Then, all the following reads will be issued to the new location.
  • Embodiments of the subject system described herein leverage snapshot technology similarly to a cache, where the snapshot file is gradually filled on the destination storage. Typically, use of the snapshot technology can change the I/O load more quickly than migration.
  • FIGS. 3B-C show a graphical representation of the performance when using storage migration (See FIG. 3A ), where FIG. 3B shows Storage Latency versus time and FIG. 3C shows Storage IOPS versus time; while FIGS. 4B-4C show a graphical representation of the performance when using snapshot (see FIG. 4A ), where FIG. 4B shows Storage Latency versus time and FIG. 4C shows Storage IOPS versus time.
  • In FIG. 4A, a disk workload running inside one virtual disk is initially placed at the source storage pool. The storage latency is around 1.5 ms.
  • snapshot can swiftly tune I/O performance
  • using the snapshot technology also poses several new challenges to existing storage management systems: 1) how to implement copy-on-write/read snapshot in real systems, 2) how to build a performance model for snapshot, 3) how to quickly track the storage performance, and 4) how to deal with small size virtual disks, which can be migrated quickly.
  • the LightSRM redesigns the entire management stack to address these challenges.
  • the snapshot file may be responsible for the performance tuning, while the virtual disk may be used to guarantee the functional correctness.
  • FIG. 5A displays a block diagram of the storage resource management system (LightSRM) described herein.
  • the system implements a copy-on-write/read snapshot 12 on top of the original virtual disk 14 to hold the working set of the running applications.
  • a storage manager 20 is implemented to perform performance load balancing and capacity planning.
  • the performance tracking 22 and balancing 24 operate on the snapshots, while the capacity prediction 28 and capacity planning 30 manage the base virtual disks.
  • Performance load balancing may be triggered more frequently (e.g., a 5-minute interval) than capacity planning (e.g., a 16-hour interval).
  • the storage resource management system described herein may include a redesigned performance management 20 while incorporating the conventional method of capacity planning and prediction 26 , since there is no interference between the performance management 20 and the capacity planning and prediction 26 .
  • FIG. 5B shows a schematic of an embodiment of the subject system, incorporating a central management machine implementation, management software, rack servers, and storage arrays.
  • the original snapshot technology is used for backups and checkpoints. In the instance of a data loss event or system failure, a virtual disk can be easily reverted to the previously stored state.
  • copy-on-write technology is leveraged to copy data from a source disk to a new disk when the workload issues a “write to disk” instruction. Since the block size of a snapshot usually is not larger than the I/O size, there is no need to fetch data from old disks. The “read” requests, on the other hand, will be issued to the location of the latest data. Therefore, making an original snapshot redirects the write requests, but not the read requests.
  • FIG. 6 displays a block diagram for the process of redirecting I/O requests using copy-on-write and copy-on-read snapshots, which can be implemented in accordance with an embodiment of the subject invention.
  • embodiments of the subject system include a copy-on-read in the snapshot design of the subject system, as shown in FIG. 6 .
  • the read request may trigger the new snapshot to fetch the latest data from the old snapshot or the base virtual disk. After the data blocks are fetched to the new snapshot, the following reads to those data blocks will be served by the new snapshot, thereby redirecting the read requests to the new storage pool.
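The copy-on-write/copy-on-read redirection described above can be sketched at the block level as follows. This is a minimal illustration with hypothetical names and a dict-based layer model, not the patent's implementation:

```python
# Illustrative sketch of copy-on-write/copy-on-read redirection at the
# block level. The class name and dict-based layering are hypothetical.
class CowCorSnapshot:
    """A new snapshot layered over older backing layers
    (e.g., the old snapshot and the base virtual disk)."""

    def __init__(self, backing_layers):
        self.blocks = {}               # blocks materialized in the new snapshot
        self.backing = backing_layers  # ordered newest-first: [old_snapshot, base_disk]

    def write(self, block_id, data):
        # Copy-on-write: every write lands in the new snapshot immediately.
        self.blocks[block_id] = data

    def read(self, block_id):
        if block_id in self.blocks:
            return self.blocks[block_id]
        # Copy-on-read: fetch the latest copy of the block from the backing
        # chain once, then serve all subsequent reads from the new snapshot.
        for layer in self.backing:
            if block_id in layer:
                self.blocks[block_id] = layer[block_id]
                return self.blocks[block_id]
        return None  # block never written anywhere
```

After the first read of a block, the new storage pool serves it directly, which is how the read load gradually shifts to the new location.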
  • the time for the fetching process varies.
  • the system may include a hybrid method, which uses migration if the migration time is shorter and uses the snapshot procedure if the snapshot transition time is shorter.
  • Phase # 1 : merging and fetching proceed concurrently.
  • Phase # 2 : only merging or fetching remains, whichever has not yet finished.
  • the time used for Phases # 1 and # 2 defines the transition time for the subject modified snapshot process.
  • the transition time for migration is the time necessary to complete a migration.
  • FIG. 7 illustrates a graphical representation of the latency stable period, latency changes, and latency spikes in storage latency with respect to time. Since embodiments of the subject system described herein can quickly rebalance the performance load by just taking a snapshot, a new, fast, and accurate way to track the storage latency has also been implemented in specific embodiments of the subject system. I/O behaviors are not entirely random. For a certain period of time, the workload exhibits stable behaviors, which results in relatively stable storage latency (shown as a dotted line in FIG. 7 ). The length of the stable period can be workload dependent. In those periods, the average value can represent the storage latency. Similar issues have been identified and tackled in memory usage tracking and network traffic estimation.
  • Embodiments of the subject system may include a storage latency prediction, where latency is tailored and customized using exponentially weighted moving average methods (EWMA).
  • LightSRM uses a self-adjustable sliding window tracking mechanism. Initially, the sliding window size is set to q. Within the sliding window, LightSRM samples the latency of each storage pool q times. Then, EWMA is applied to those q samples.
  • L_t = α(L_{t-1} + (1−α)L_{t-2} + . . . + (1−α)^{i-1}L_{t-i})  (1)
  • L_i denotes the ith latency sample and 0 < α < 1.
  • α is set to 0.5.
  • the L t is calculated for all the storage pools in the system. These values will be used for the load balancing schemes.
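Equation (1) can be expressed as a short function. This is a sketch; the function name and the oldest-to-newest sample ordering are assumptions:

```python
# Equation (1) as a function (illustrative, not the patent's code).
def ewma_latency(samples, alpha=0.5):
    """Weighted prediction of the next-period latency L_t from the q most
    recent samples, ordered oldest-to-newest: the sample k steps in the
    past receives weight alpha * (1 - alpha)**k, so newer samples dominate."""
    weighted = 0.0
    for k, sample in enumerate(reversed(samples)):
        weighted += (1 - alpha) ** k * sample
    return alpha * weighted
```

With α = 0.5 the weights sum to 1 − 0.5^q, so for a reasonably full window the prediction is approximately a normalized weighted average of the samples.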
  • the weighted average value may not be able to quickly detect a change in storage latency within a few sample points.
  • specific embodiments of the subject system may continuously compare the deviation of the latest sample point with the standard deviation of the entire sampling window. If the deviation of latest sample point is much higher than the standard deviation in the sliding window, the system marks the point as deviant. If a deviant point keeps appearing in the most recent sample points, the system may assume that it is highly possible that the storage latency changed to another value for the next several periods.
  • weights of previous samples may be intentionally set to 0, such that the deviant point determines the latency of the next period.
  • a fast EWMA can quickly detect latency changes.
  • a fast EWMA can be easily misled by a deviant point (i.e., the latency spike shown in FIG. 7 ).
  • a slow EWMA uses a long sliding window.
  • a known method for memory tracking [32] maintains both EWMAs, and the maximum value is chosen as the memory working set size.
  • the length of the sliding window may be increased by Δq.
  • the length of the sliding window may be decreased by Δq.
  • Δq is set to be one sample point.
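The self-adjusting window described above might be sketched as follows. The deviance threshold k and the window bounds q_min/q_max are illustrative choices, not values from the disclosure:

```python
import statistics

# Sketch of the self-adjusting sliding window. The threshold rule (k
# standard deviations) and the bounds q_min/q_max are assumptions.
def adjust_window(window, new_sample, q, dq=1, k=3.0, q_min=3, q_max=30):
    """One tracking step: if the newest sample deviates strongly from the
    window, shrink the window by dq (fast EWMA, quick reaction to a latency
    change); otherwise grow it by dq (slow EWMA, resistant to spikes)."""
    mean = statistics.fmean(window)
    spread = statistics.pstdev(window)
    deviant = abs(new_sample - mean) > k * max(spread, 1e-9)
    q = max(q_min, q - dq) if deviant else min(q_max, q + dq)
    window = (window + [new_sample])[-q:]  # keep only the q newest samples
    return window, q, deviant
```

A persistent run of deviant points rapidly empties the window of stale history, so the new latency level dominates the next prediction; an isolated spike only shrinks the window by one step.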
  • FIG. 8 illustrates a graphical representation of the EWMA tracking for the storage latency.
  • Storage latency tracking is part of the storage manager 20 , running on a control server 16 .
  • the latency of the storage pools 18 is reported by ptdiskstats [14] and sent to the storage manager 20 .
  • the latency tracking mechanism can successfully track the latency changes and report accurate latency value for the next period.
  • embodiments of the subject system may use one or more of a variety of prediction methods, such as exponential smoothing, moving averages with different sizes of sliding window, and auto regression. Each serves the same purpose: to predict the future value from the history values.
  • Performance modeling consists of two parts: workload modeling and storage device modeling.
  • the workload model can be constructed using its outstanding IOs (OIOs), I/O size, Read/Write Ratio, and Randomness.
  • L_predict = (ΣWL_i)/β  (3); where β is calculated via the storage device model (L-Q model [4]).
  • a workload injector may be used to proactively test the storage pool and build the L-Q model, when there is no job running in the system.
  • LightSRM leverages the same model as described above, with the exception that the performance models are built to support snapshots. According to the phases described above, the workload model is as follows. During the first phase of merging and fetching, additional I/O loads may be generated on all three storage pools, as shown in FIG. 6 . For storage pool 1, which hosts the base virtual disks, the additional workload WLA can be modeled from its merge and fetch components (WLA_Merge and WLA_fetch).
  • analogous additional workloads can be modeled for storage pool 2, which hosts the previous snapshot, and storage pool 3, which hosts the new snapshot.
  • modeling the merging process may be relatively easy, because the data copy operation has fixed values of OIO, IOsize, Read Ratio, and Randomness for a given platform.
  • the behavior of the fetch process totally depends on the workload behavior (OIO, IOsize, and Randomness).
  • Read Ratio is 100% for storage pools 1 and 2 (the senders) and 0% for storage pool 3 (the receiver).
  • in Phase # 2 , one of the merging and fetching processes has finished. If the merging process finishes first, WLA_Merge is set to 0; otherwise, WLA_fetch is set to 0.
  • β = (ΣWL_i)/L  (7)
  • ΣWL_i is computed using the I/O characteristics (OIO, IOsize, R/W ratio, randomness), and L is obtained from the last latency tracking period. Since the sampling period may be short (5 minutes in our prototype), the chance that β changes in the next period is low. Besides, the storage latency and the workload I/O characteristics in equation (7) may be monitored during the sampling period. Therefore, the device characteristic β can adjust itself in each sampling period, if the underlying device characteristics change.
  • β is initially obtained from an off-line device modeling.
  • the workload injector adjusts β when the system is idle.
  • LightSRM uses the runtime performance data to recalibrate β online, even when the system is busy running applications.
  • Embodiments of the subject system can use the performance data monitored in the last tracking period to calculate the device characteristics, where the chance that β changes in such a short period is low.
  • the system may recalibrate β in each period, such that a change in the device characteristics can be captured in a timely manner.
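Equations (3) and (7) and the per-period recalibration can be sketched as follows. The workload term ΣWL_i is kept abstract as a single number sum_wl, and the function names are hypothetical:

```python
# Equations (3) and (7) as functions (sketch only). sum_wl stands for the
# sum over per-workload terms computed from OIO, IOsize, R/W ratio, and
# randomness; how those features combine into WL_i is not reproduced here.
def recalibrate_beta(sum_wl, observed_latency):
    """Equation (7): beta = sum(WL_i) / L, refit every sampling period
    from last period's measured latency."""
    return sum_wl / observed_latency

def predict_latency(sum_wl, beta):
    """Equation (3): L_predict = sum(WL_i) / beta."""
    return sum_wl / beta
```

Because β is refit from the same quantities the model predicts, feeding last period's workload back in simply reproduces last period's latency; the value of the model lies in predicting the latency of a *different* workload mix on the same device, which is what the load-balancing algorithm needs.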
  • the model Given that the system has derived a workload model for snapshots and storage device characteristics, the model can predict the latency changes when the workloads are rearranged among the storage pools.
  • a load balance algorithm such as described below, can benefit from this feature.
  • FIG. 9 displays a timeline for the time sequence of LightSRM load balancing.
  • the goal of load balancing is to optimize resource utilization and minimize storage latency by distributing workloads across storage pools.
  • LightSRM pursues the same goal but in a fine-grained way.
  • FIG. 9 shows a visual view of how LightSRM performs in real time.
  • T: monitoring period
  • D: decision point
  • the system acquires all the performance data (e.g. OIO, IOsize, etc.) during the monitoring period.
  • Performance models, such as those described herein, are used to predict the possible performance if a certain workload is moved to another storage pool.
  • a policy-based storage management algorithm (e.g., load balancing)
  • the system actually moves the workload.
  • LightSRM iteratively performs the following 3 steps: 1) find the storage pool with the maximum latency (max_storage) and the storage pool with the minimum latency (min_storage) in the sliding window; 2) use the performance model (as noted herein) to check whether moving one of the virtual disks from max_storage to min_storage will lower the average storage latency and the latency difference across storage pools; and 3) if yes, add this move to the migration list and update the internal database to record the movement; if no, quit the iteration and output the migration list. The iteration continues until no further move that lowers the storage latency can be found.
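The three-step iteration above can be sketched as a greedy planner. The pool names and the additive per-disk latency contributions below are hypothetical simplifications; the real system consults the performance model at step 2:

```python
# Greedy load-balancing sketch: repeatedly move one virtual disk from the
# highest-latency pool to the lowest-latency pool while the move lowers
# the higher of the two pools' latencies, mirroring steps 1-3 above.

def plan_migrations(pool_latency, disk_load, disk_pool):
    """Return a migration list of (disk, src_pool, dst_pool) tuples."""
    migrations = []
    latency = dict(pool_latency)    # working copy of per-pool latency
    placement = dict(disk_pool)     # internal database of placements
    while True:
        src = max(latency, key=latency.get)   # step 1: max_storage
        dst = min(latency, key=latency.get)   # step 1: min_storage
        best = None
        for disk, pool in placement.items():
            if pool != src:
                continue
            delta = disk_load[disk]  # modeled latency contribution
            new_src, new_dst = latency[src] - delta, latency[dst] + delta
            # Step 2: accept only if the move lowers the pair's maximum.
            if max(new_src, new_dst) < latency[src]:
                best = (disk, new_src, new_dst)
                break
        if best is None:
            return migrations        # no improving move: quit iteration
        disk, new_src, new_dst = best
        latency[src], latency[dst] = new_src, new_dst
        placement[disk] = dst        # step 3: record the movement
        migrations.append((disk, src, dst))

plan = plan_migrations({"p1": 40.0, "p2": 10.0},
                       {"vm_a": 12.0, "vm_b": 20.0},
                       {"vm_a": "p1", "vm_b": "p1"})
```

Moving vm_a narrows the gap (28 ms vs 22 ms), but moving vm_b afterwards would overshoot, so the planner stops after one move.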
  • the system may generate a list of migration suggestions, where either the snapshot process or the migration process is indicated as the best process of operation.
  • the copy-on-write/read snapshot may not be the best option to apply to all I/O requests, where the system may recognize that the transition time of migration could actually be shorter than that of a snapshot for some small virtual disks.
  • specific embodiments of the subject system and method for storage resource management use a hybrid method, which selects the migration process if the migration time is shorter and selects the snapshot process if the snapshot transition time is shorter. To estimate the transition time of the snapshot and migration processes, the following functions apply:
  • SnapshotSize is the size of the snapshot file
  • Throughput_merge is the throughput that the merge process can achieve
  • Throughput_appRead is the read throughput that the workload generates. Note that the behavior of the fetch process depends on the workload: when the workload reads data, the fetch process reads that data from the old storage pool.
  • TransitionTime_migration = (VirtualDiskSize + WorkingSetSize) / Throughput_migration
  • VirtualDiskSize is the size of the entire virtual disk
  • Throughput migration is the throughput associated with the migration process.
  • the system may also copy the data that is changed by the application during the migration (working set data).
  • WorkingSetSize may also be considered.
  • LightSRM calculates the transition time of both the migration and snapshot processes. Then, LightSRM selects the method that has the shorter transition time. By doing this, LightSRM can always choose the fastest way to perform load balancing.
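A minimal sketch of this hybrid selection follows. All sizes (MB) and throughputs (MB/s) are illustrative assumptions; the snapshot time is the maximum of the merge and fetch times, and migration must copy the disk plus the data dirtied during the copy:

```python
# Hybrid choice sketch: estimate both transition times, pick the faster.

def snapshot_transition_time(snapshot_mb, working_set_mb,
                             merge_mbps, app_read_mbps):
    merge_time = snapshot_mb / merge_mbps        # merge snapshot to base
    fetch_time = working_set_mb / app_read_mbps  # workload-driven fetch
    return max(merge_time, fetch_time)

def migration_transition_time(disk_mb, working_set_mb, migration_mbps):
    # The whole disk plus the working set dirtied during copying moves.
    return (disk_mb + working_set_mb) / migration_mbps

def choose_method(disk_mb, snapshot_mb, working_set_mb,
                  merge_mbps, app_read_mbps, migration_mbps):
    t_snap = snapshot_transition_time(snapshot_mb, working_set_mb,
                                      merge_mbps, app_read_mbps)
    t_mig = migration_transition_time(disk_mb, working_set_mb,
                                      migration_mbps)
    return "migration" if t_mig < t_snap else "snapshot"

# A 1 TB disk favors the snapshot; a 2 GB disk favors migration.
big = choose_method(1_000_000, 8_000, 8_000, 100, 200, 400)
small = choose_method(2_000, 8_000, 8_000, 100, 200, 400)
```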
  • a testbed consisting of six storage pools, each of which has a different configuration, as shown in Table 2.
  • the storage pools are connected to servers through Fibre Channel. All six storage pools are shared by Dell PowerEdge R710 servers, each equipped with a 2.4 GHz Intel® Xeon® CPU and 24 GB of memory.
  • the operating system on the hosts is Ubuntu 13.04.
  • the kernel version is 3.11.0-12 with kvm modules enabled.
  • the latest QEMU version 1.7.50 is used.
  • Up to 16 virtual machines were generated, each of which has 1 vCPU and 1 GB of memory. Their virtual disks are placed across the six storage pools. The size of the virtual disks ranges from 16 GB to 1 TB.
  • the storage manager of LightSRM may run on a Dell Desktop Optiplex 980 with 2.7 GHz Intel Core i7 and 8 GB memory.
  • the communication between the storage manager and the QEMU/KVM servers is supported by libvirt, version 1.2.1 [15].
  • a specific embodiment of LightSRM has two parts: copy-on-write/read snapshot and storage manager.
  • the QEMU block device driver (qemu/block.c) is modified to support both copy-on-write and copy-on-read at the same time.
  • To monitor the virtual machine I/O characteristics (e.g., outstanding IOs, IOsize, read/write ratio, and randomness), several performance counters in the QEMU block device driver are instrumented. These characteristics can be obtained via either the QEMU command line interface (qmp) or the libvirt API.
  • pt-diskstats tool [14] can be used, which can report the real time storage latency and throughput.
  • the storage manager, including performance modeling and load balancing, is implemented in C. The storage manager runs on a separate machine and leverages the libvirt API to communicate with QEMU hosts and storage pools. For comparison, the Pesto scheme [4] is also implemented on this QEMU/KVM platform.
  • transition time, which is the time between when the workload is scheduled to move and when it runs completely on the destination
  • application observed latency which is used to measure the performance impact of performing storage management
  • distribution of errors/residuals which is used to measure the accuracy of the prediction and modeling
  • storage latency of each storage pool which is used to show how well the LightSRM design can rebalance the storage performance.
  • the transition time of snapshot can be longer than the migration time when the virtual disk size is small enough. This section analyzes the factors that determine the transition time.
  • the transition time is the maximum of the merging time and the fetching time. Since the merging process copies the data from the old snapshot to the base virtual disk, the merging time is determined by the working set size of the workload. The fetching time is determined by both the working set size and the read throughput of the workload: higher read throughput causes more data to be fetched per unit time, which shortens the fetch.
  • FIG. 10 displays a Three-Dimensional (3D) graphical representation of the transition time with respect to the working set size and I/O throughput, when using copy-on-write and copy-on-read snapshots.
  • the transition time of snapshot with different working set sizes and throughputs was measured.
  • the larger the read throughput, the shorter the transition time at a given working set size.
  • the volume of data that is driven to the new snapshot per second is larger, which inevitably leads to a faster fetching process.
  • the larger the working set size, the longer the transition time. Both the merging and fetching processes need more time to copy a larger working set.
  • FIG. 11 shows a 3D graphical representation of the transition time with respect to the working set size and I/O throughput, when using migration.
  • the transition time depends on the virtual disk size and the size of the data that is modified during the copy process. As can be seen in FIG. 11 , larger throughput leads to longer migration time: with larger throughput, more data is modified during the copy process, and the migration process has to retransmit that modified data. Therefore, the higher the workload write throughput, the longer the transition time.
  • the snapshot method is preferable when the workload exhibits higher read throughput and a smaller working set size, while the migration method is more efficient when the workload exhibits lower write throughput.
  • the workload exhibits variable working set sizes and different I/O behaviors. Therefore, a hybrid method that can dynamically choose a faster method (snapshot or migration) online, as discussed herein, can be beneficial.
  • FIGS. 12A-B show bar charts for the performance impact on workloads, where FIG. 12A shows the performance impact on workloads using migration and FIG. 12B shows the performance impact on workloads using snapshots.
  • both snapshot and migration technologies have an impact on workload performance. They both generate additional disk traffic, which competes for storage resources with the currently running workloads.
  • the observed latency changes of the workloads were measured during the performance of the snapshot process and migration.
  • Three different types of workloads were run (e.g., OLTP, Workstation, and Exchange). At the beginning, they were running on a slow storage pool and were scheduled to move to a faster storage pool.
  • FIG. 13 illustrates a graphical representation of the distribution of Residuals with respect to count
  • FIG. 14 illustrates a graphical representation of the distribution of Residuals with respect to count for both snapshot modeling and without snapshot modeling.
  • LightSRM uses exponentially weighted moving average (EWMA) with adjustable window size to predict the storage latency.
  • EWMA exponentially weighted moving average
  • ExpoS exponential smoothing
  • MA_xL moving average with different lags
  • AR auto regression
  • FIG. 13 shows the distribution of residuals when different prediction methods are used.
  • An accurate prediction will have a distribution graph that is close to 0.
  • MA_xL means weighted moving average is used with a window length of x.
  • MA_xL, AR and ExpoS yield similar results.
  • EWMA with adjustable window shows the most accurate results as most of the residuals are around 0 .
  • FIG. 14 shows the prediction residuals when a snapshot is created on the new storage pool.
  • most of the residuals are negative, which means most of the predicted values are lower than the actual latency.
  • when the snapshot model is used to capture the additional traffic generated by snapshots, the residuals are closer to 0. Therefore, LightSRM can accurately capture the system performance by modeling the snapshots.
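The adjustable-window EWMA predictor can be sketched as follows. The shift-detection threshold and window bounds are hypothetical parameters, not values from the prototype; the idea is that a large prediction error signals a latency change, so the window shrinks and recent samples dominate:

```python
# EWMA latency prediction with an adjustable window: stable traces get a
# long window (smooth prediction); a detected latency shift shrinks the
# window so the predictor re-converges quickly to the new level.

def ewma_predict(samples, base_window=8, shift_threshold=0.5):
    """Return the predicted next latency for a series of samples."""
    window = base_window
    pred = float(samples[0])
    for obs in samples[1:]:
        # Large relative error suggests a latency change or spike:
        # shrink the window so recent samples dominate.
        if abs(obs - pred) > shift_threshold * max(pred, 1e-9):
            window = 2
        else:
            window = min(base_window, window + 1)
        alpha = 2.0 / (window + 1)   # standard EWMA weight for a window
        pred = alpha * obs + (1.0 - alpha) * pred
    return pred

# Stable at ~10 ms, then a sustained jump to ~30 ms: the prediction
# follows the jump within a few samples instead of lagging for many.
trace = [10, 10, 11, 10, 30, 31, 30, 29]
prediction = ewma_predict(trace)
```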
  • LightSRM detects the imbalance at the beginning and decides to move one of the VMs from storage 2 to storage 4. Since the virtual disk size is large, taking a snapshot will be faster. LightSRM makes a snapshot on storage 4 so that the performance is quickly balanced, as shown in FIG. 15A . Notably, most big data workloads tend to read their data at the beginning of execution. Therefore, the transition time of the snapshot is very short because the workload quickly fetches the data it needs to the new location. If the migration method is used, as in the traditional method in FIG. 15B , the migration time is very long due to the large size of the virtual disk. As can be seen, the migration cannot even finish within the workload execution. The performance of the storage pools remains imbalanced until the end of workload execution.
  • the virtual disk sizes for VMh 1 x and VMh 2 x range from 200 GB to 1 TB, while the size of VMinjx is 16 GB. Therefore, we intentionally create a scenario in which some VMs are potentially suitable for migration, while others are not.
  • FIG. 18A displays a graphical representation of the average latency of the storage pools with respect to time
  • FIG. 18B displays a graphical representation of the standard deviation of the latency across all the storage pools with respect to time.
  • the traditional management scheme uses migration for all data movement, even if the virtual disk is very large, such as VMhx.
  • An embodiment of LightSRM implementing the hybrid method described herein, can dynamically choose a suitable way to relocate virtual disks.
  • FIG. 18A shows the average latency of the six storage pools, which exhibits latency spikes if a suddenly scheduled job is not handled in time.
  • FIG. 18B shows the standard deviation for the latency of the six storage pools, which reflects how well the storage pools are balanced.
  • LightSRM has 91% lower average storage latency (28 ms) than the traditional method (312 ms). Besides, LightSRM shows a 93% lower standard deviation (49 ms) across storage pools. Looking further into the reason behind those numbers, LightSRM takes two snapshots for the large VMs and uses three migrations for the small VMs, as shown in Table 4. As can be seen in Table 4, the transition times of those moves are relatively short. However, the traditional method takes around 1 hr to migrate the large virtual disks (VMh 21 , VMh 22 ). These lengthy migrations yield high I/O traffic, which leads to high latency in the system (as shown in FIG. 18A ). Worse, the randomly scheduled VMs are not handled in a timely manner, which results in a highly imbalanced storage system, as shown in FIG. 18B . Therefore, LightSRM is more capable of handling situations with both large and small virtual disks.
  • FIG. 19 shows a bar chart of the data traffic generated by load balancing management.
  • FIG. 19 shows the additional data traffic that is generated by the load balancing management.
  • the additional data traffic will be roughly equal to the sum of the sizes of the migrated virtual disks.
  • the additional data traffic is double the workload's working set size (one copy for the merging process, one for the fetching process). If the virtual disk size is smaller than double the working set size, migration actually yields less additional traffic. Therefore, using a hybrid method, as shown in FIG. 19 , can achieve much less data traffic than either method alone (44% less than the snapshot method and 91% less than the migration method in the experiment).
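The traffic accounting behind this comparison can be sketched directly. Sizes are illustrative, and note that the hybrid scheme's actual choice is driven by transition time, with lower traffic as a side effect:

```python
# Traffic sketch: migration moves roughly the whole virtual disk, while
# a snapshot move costs about twice the working set (one merge pass plus
# one fetch pass). GB figures below are illustrative only.

def migration_traffic_gb(virtual_disk_gb):
    return virtual_disk_gb

def snapshot_traffic_gb(working_set_gb):
    return 2 * working_set_gb

def hybrid_traffic_gb(virtual_disk_gb, working_set_gb):
    # The hybrid scheme ends up with whichever mechanism moves less data.
    return min(migration_traffic_gb(virtual_disk_gb),
               snapshot_traffic_gb(working_set_gb))

# A 500 GB disk with a 40 GB working set: snapshot wins (80 GB moved).
# A 16 GB disk with a 40 GB working set: migration wins (16 GB moved).
big_disk = hybrid_traffic_gb(500, 40)
small_disk = hybrid_traffic_gb(16, 40)
```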
  • program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • the invention may be practiced with a variety of computer-system configurations, including multiprocessor systems, microprocessor-based or programmable-consumer electronics, minicomputers, mainframe computers, and the like. Any number of computer-systems and computer networks are acceptable for use with the present invention.
  • embodiments of the present invention may be embodied as, among other things: a method, system, or computer-program product. Accordingly, the embodiments may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. In an embodiment, the present invention takes the form of a computer-program product that includes computer-useable instructions embodied on one or more computer-readable media.
  • Computer-readable media include both volatile and nonvolatile media, transitory and non-transitory, transient and non-transient media, removable and nonremovable media, and contemplate media readable by a database, a switch, and various other network devices.
  • computer-readable media comprise media implemented in any method or technology for storing information. Examples of stored information include computer-useable instructions, data structures, program modules, and other data representations.
  • Media examples include, but are not limited to, information-delivery media, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD), holographic media or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage, and other magnetic storage devices. These technologies can store data momentarily, temporarily, or permanently.
  • the invention may be practiced in distributed-computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer-storage media including memory storage devices.
  • the computer-useable instructions form an interface to allow a computer to react according to a source of input.
  • the instructions cooperate with other code segments to initiate a variety of tasks in response to data received in conjunction with the source of the received data.
  • the present invention may be practiced in a network environment such as a communications network.
  • Such networks are widely used to connect various types of network elements, such as routers, servers, gateways, and so forth.
  • the invention may be practiced in a multi-network environment having various, connected public and/or private networks.
  • Communication between network elements may be wireless or wireline (wired).
  • communication networks may take several different forms and may use several different communication protocols. And the present invention is not limited by the forms and communication protocols described herein.
  • the methods and processes described herein can be embodied as code and/or data.
  • the software code and data described herein can be stored on one or more computer-readable media or machine-readable media, which may include any device or medium that can store code and/or data for use by a computer system.
  • a computer system reads and executes the code and/or data stored on a computer-readable medium, the computer system performs the methods and processes embodied as data structures and code stored within the computer-readable storage medium.
  • computer-readable media include removable and non-removable structures/devices that can be used for storage of information, such as computer-readable instructions, data structures, program modules, and other data used by a computing system/environment.
  • a computer-readable medium includes, but is not limited to, volatile memory such as random access memories (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only-memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM), and magnetic and optical storage devices (hard drives, magnetic tape, CDs, DVDs); network devices; or other media now known or later developed that is capable of storing computer-readable information/data.
  • Computer-readable media should not be construed or interpreted to include any propagating signals.
  • a computer-readable medium of the subject invention can be, for example, a compact disc (CD), digital video disc (DVD), flash memory device, volatile memory, or a hard disk drive (HDD), such as an external HDD or the HDD of a computing device, though embodiments are not limited thereto.
  • a computing device can be, for example, a laptop computer, desktop computer, server, cell phone, or tablet, though embodiments are not limited thereto.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Computer Security & Cryptography (AREA)

Abstract

A system and method for storage resource management using copy-on-write/read snapshots to redirect the I/O requests without moving the virtual disk is disclosed herein. The system includes storage pools, a virtualized server, and a centralized data center management server. The virtualized server includes a base disk and a virtualized disk having a copy-on-write and a copy-on-read snapshot process responsive to an I/O request that merges a previous snapshot stored on a first one of the plurality of storage pools with the base disk stored on a second one of the plurality of storage pools, while it fetches a new snapshot to be stored on a third one of the plurality of storage pools from the previous snapshot or the base virtual disk. The system includes a performance model, employing an exponentially weighted moving average with an adjustable sliding window.

Description

CROSS-REFERENCE RELATED APPLICATIONS
This application claims the benefit of U.S. Provisional Application Ser. No. 61/171,447, filed Jun. 5, 2015, which is incorporated herein by reference in its entirety.
GOVERNMENT SUPPORT
This invention was made with government support under grant numbers NSF#1017000, NSF#0937869, and NSF#0845721 awarded by the National Science Foundation. The government has certain rights in the invention.
BACKGROUND OF INVENTION
Storage Resource Management (SRM) focuses upon optimizing the efficiency and processing speed of a Storage Area Network's (SAN's) use of the available drive space. As organizations are faced with increased hardware and storage management costs, many have introduced automatic storage resource management, where storage virtualization in data centers is used to lower maintenance labor costs. Storage virtualization represents the separation of logical storage from physical storage, where data may be accessed without regard to the physical storage or heterogeneous structure. Particularly, through the use of automatic storage resource management, virtual disks are automatically rearranged and migrated, such that the performance of storage pools can meet specific IT policy requirements (e.g. performance load balancing and capacity planning).
Commercial software, such as VMware's® Storage Distributed Resource Scheduler (SDRS), has been deployed in modern data centers. However, during the era of public/hybrid cloud and big data analytics, traditional storage management schemes fail to respond to real-time I/O bursts in a public/hybrid cloud due to the large size of virtual disks. This commercial software is typically incapable of performing real time policy-based storage management due to the high cost of migrating large size virtual disks. More particularly, although traditional storage resource management schemes work fine in a private data center that executes most of its jobs in the daytime (keeping idle at night), modern data centers usually host multi-tenant cloud platforms and run big data applications 24 hours a day, seven days a week (24/7). Unlike traditional server applications, big data and cloud workloads exhibit highly fluctuating and unpredictable I/O behaviors. For instance, any user/tenant on the cloud platform can submit jobs at any time, which introduces unexpected workload surges. In addition, big data applications manifest distinct I/O behaviors across different execution phases. Since these workload surges occur within a couple of hours or even minutes, they can lead to unexpected storage load imbalance.
Specifically, due to the large size of virtual disks, virtual storage migration takes a long time and causes high I/O traffic overhead to the system. Moving a virtual disk from one storage pool to another can take up to several minutes or hours, during which the workload behavior may have already changed. Worse, the current load-balancing interval (i.e. 8˜16 hours) is too long for detecting and responding to workload surges. These limitations can lead to: 1) high average latency of the entire storage system; 2) extremely unbalanced storage resource utilization; 3) low quality of service (QoS); and 4) frequent breaking of the service level agreement (SLA).
Current models of storage management systems mainly focus upon improving the physical device behavior [8, 19, 20, 21, 22, and 23]. As virtualization has been widely adopted in data centers, efforts of managing storage resources using virtual machines have emerged as indicated above. Although the Singh reference [9] proposes a system entitled, HARMONY, including a VectorDot algorithm that minimizes performance degradation, the VectorDot algorithm only considers the storage system utilization and ignores workload behaviors. Related works, Basil [3], Pesto [4], and Romano [5], consider both the device and workload behaviors; yet, they leverage workload and device characteristics reported by the virtual machine monitor and rearrange storage resources by migrating virtual disks across different storage pools, which is a lengthy process.
The Gulati reference [3] proposes the Basil system, having both workload and device models, which can automatically balance the I/O load across storage devices. Based on these models, storage latency can be predicted and the load-balancing algorithm is performed accordingly. However, the Basil system's storage model is built offline, which limits its usability in a real system.
To address this issue, the Gulati reference [4] proposes the Pesto system implemented in VMware's® SDRS, which incorporates an online storage model (L-Q model). This system implements a workload injector to proactively adjust the storage model online when the system is not busy. The Pesto system further includes congestion management and a cost benefit function. However, the Park reference [5] finds that both the Basil and Pesto systems make improper balance decisions due to the limitation of their models. Park proposes the Romano system, which makes multiple load-balancing decisions before actually migrating the virtual disks, where a simulated annealing algorithm is used to filter out the potentially incorrect decisions.
In summary, all existing storage management schemes share several common drawbacks. The basic unit of these management schemes is a virtual disk, whose size can range from several hundreds of Gigabytes (GBs) to even several Terabytes (TBs). Often, migrating this large size virtual disk results in long migration time and high performance degradation. Although there have been efforts to improve the efficiency of storage migration, the cost of migrating large size virtual disks is still significant. The lengthy migration process hinders the current storage management from being used in real time. Instead of tracking and migrating virtual disks frequently, existing systems usually monitor and collect performance characteristics during the entire daytime, using 95% of the sampled data to predict the average latency of the next day. The actual load balancing decisions and storage migrations are made at night, when no application is running. When a private data center has steady I/O behavior, these traditional methods can achieve a desirable load balancing effect.
Nevertheless, as indicated above, for modern data centers that host public cloud platforms (e.g. Amazon AWS [16], Microsoft Azure [17]) and run big data applications [18], workload I/O behavior can heavily fluctuate even within one day. Although the Basak reference [7] discloses a dynamic performance model for multi-tenant cloud, no resource-scheduling algorithm is proposed. The Alvarez reference [30] presents an approach that selects cloud storage services from a cloud tenant's perspective. Yet, in a multi-tenant cloud environment, highly varying I/O behavior leads to frequent storage load imbalances using this approach, which cannot be handled in a timely manner using existing storage management schemes.
BRIEF SUMMARY
A system and method for storage resource management that improves the efficiency of big data cloud storage management by leveraging the copy-on-write/read snapshots to redirect the I/O requests without moving the virtual disk is disclosed herein. An embodiment of the system includes a plurality of storage pools, a virtualized server, and a centralized data center management server. An embodiment of the virtualized server includes a base disk and a virtualized disk having a copy-on-write and a copy-on-read snapshot process responsive to an I/O request that merges a previous snapshot stored on a first one of the plurality of storage pools with the base disk stored on a second one of the plurality of storage pools, while it fetches a new snapshot to be stored on a third one of the plurality of storage pools from the previous snapshot or the base virtual disk. The system may also build a performance model, employing an exponentially weighted moving average with an adjustable sliding window to provide a quick and accurate performance prediction. Furthermore, the system may include a hybrid management scheme, which can dynamically choose either the snapshot process or migration for the fastest performance tuning.
The system and method disclosed herein can provide fast storage performance tuning for virtualized storage management system software in light of the emerging multi-tenant big data cloud environment. The method allows the management software to dynamically choose either the copy-on-write/read snapshots or the storage migration based on the estimated performance tuning time. The method also provides a performance model for the snapshot based storage management.
In light of the traditional data center storage management's (VMware SDRS's) slow performance and high overhead, embodiments of the system disclosed herein offer an advantage in that the system can significantly lower the performance overhead of storage management and facilitate the management process. Further, embodiments of the system are useful for storage management in the big data and cloud computing IT industries.
A specific embodiment of the subject system can be referred to as the Light Storage Resource Management (LightSRM), and can work properly in a time-variant cloud environment. To mitigate the storage migration cost, embodiments of the system disclosed herein may use copy-on-write/read snapshots to redirect the I/O requests without moving the virtual disk. To support snapshots in a storage management, the system may include a performance model specifically for snapshots. The system and method may include an exponentially weighted moving average with adjustable sliding window to provide quick and accurate performance prediction. Furthermore, the system may include a hybrid management scheme, which can dynamically choose either snapshot or migration for fastest performance tuning.
An embodiment was built in a Quick Emulator Kernel-based Virtual Machine (QEMU/KVM) based virtualized environment. Experimental results show that the implementation of the copy-on-write/read snapshot can redirect I/O requests faster than migration can when the virtual disk size is large. Further, implementation of the snapshot method has less disk performance impact on applications running on the computer system or network. By employing the hybrid snapshot/migration method disclosed herein, LightSRM yields lower overall latency, better load balance, and less I/O traffic overhead.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1A shows a schematic of a modern data center with a multi-tenant cloud environment.
FIG. 1B illustrates a graphical representation of the average storage latency over time when running five Apache™ Hadoop® virtual machines (VMs) on two storage pools.
FIG. 2 illustrates a graphical representation of the average storage latency when running five Hadoop® VMs; while the idle VM is scheduled to run a database job during the execution.
FIG. 3A shows a schematic of storage migration.
FIGS. 3B-C show a graphical representation of the performance when using storage migration, where FIG. 3B shows Storage Latency versus time and FIG. 3C shows Storage IOPS versus time.
FIG. 4A shows a schematic of snapshot.
FIGS. 4B-4C show a graphical representation of the performance when using snapshot, where FIG. 4B shows Storage Latency versus time and FIG. 4C shows Storage IOPS versus time.
FIG. 5A displays a block diagram of an embodiment of a storage resource management system (LightSRM) described herein.
FIG. 5B shows a schematic of an embodiment of a storage resource management system (LightSRM) described herein.
FIG. 6 displays a block diagram for the process of redirecting I/O requests using copy-on-write and copy-on-read snapshots.
FIG. 7 illustrates a graphical representation of the latency stable period, latency changes, and latency spikes in storage latency with respect to time.
FIG. 8 illustrates a graphical representation of the EWMA tracking for the storage latency.
FIG. 9 displays a timeline for the time sequence of LightSRM load balancing.
FIG. 10 displays a Three-Dimensional (3D) graphical representation of the transition time with respect to the working set size and I/O throughput, when using copy-on-write and copy-on-read snapshots.
FIG. 11 shows a 3D graphical representation of the transition time with respect to the working set size and I/O throughput, when using migration.
FIGS. 12A-B show bar charts for the performance impact on workloads using: (a) migration and (b) snapshots.
FIG. 13 illustrates a graphical representation of the distribution of Residuals with respect to count.
FIG. 14 illustrates a graphical representation of the distribution of Residuals with respect to count for both snapshot modeling and without snapshot modeling.
FIGS. 15A-B display a graphical representation of the Storage Performance Load Balance for: (a) LightSRM (snapshot) and (b) Traditional Method (migration).
FIG. 16 displays a graphical representation of the I/O size of PageRank during the execution with respect to time.
FIG. 17 shows a block diagram of the initial virtual disk placement.
FIG. 18A displays a graphical representation of the average latency of the storage pools with respect to time.
FIG. 18B displays a graphical representation of the standard deviation of the latency across all the storage pools with respect to time.
FIG. 19 shows a bar chart of the data traffic generated by load balance.
DETAILED DISCLOSURE
Embodiments of the subject invention relate to a lightweight and fast storage management system and method, a specific embodiment of which can be referred to as the Light Storage Resource Management (LightSRM). Embodiments of the system can quickly respond to workload surges. To free storage management from large virtual disks and lengthy migration duration, copy-on-write/read snapshots are used as basic units to tune the I/O performance. The overhead of creating and merging a snapshot is much smaller than that of migrating the entire virtual disk. Since the introduction of snapshot changes the storage I/O behavior, the system can implement a performance model for the copy-on-write/read snapshot. With the help of quick performance tuning, the system can deploy a real-time performance tracking mechanism, exponentially weighted moving average with adjustable sliding window, to provide an accurate latency prediction. Additionally, the system can include a hybrid method based on a characterization of both snapshots and migration, which can dynamically decide to use either snapshot or migration to achieve the fastest performance tuning.
An embodiment of the system may be implemented in a QEMU/KVM based virtualized environment. The system may adopt libvirt as a communication framework to obtain the information of virtual machines and send commands to virtualized servers. The system may also include the traditional storage resource management scheme in this environment for comparison purposes. Experimental results show that the implementation of copy-on-write/read snapshots has less transition time when the virtual disk size is large, whereas migration is faster when the virtual disk size is small enough. Further, experimental results show that embodiments of the subject snapshot method yield 90% less performance impact on the running applications. Equipped with the hybrid snapshot/migration method, LightSRM can provide lower average storage latency (91% lower in our tests) and a better-balanced storage environment (93% better) than the traditional method. Furthermore, the hybrid method generates 44% and 91% less disk I/O traffic than the snapshot and migration methods, respectively.
Specific embodiments of the subject system and method may include 1) a snapshot-based performance tuning method, 2) a performance model for copy-on-write/read snapshot, 3) a quick and simplified storage performance tracking and modeling scheme, and 4) a hybrid mechanism that chooses snapshot or migration online for faster performance tuning.
FIG. 1B illustrates a graphical representation of the average storage latency over time when running five Apache™ Hadoop® virtual machines (VMs) on two storage pools. Traditional server applications, including file servers, mail servers, and database servers, have relatively stable I/O behaviors. Therefore, the storage pools that host these types of applications yield predictable latency on a daily basis. However, modern datacenters also host multi-tenant public cloud environments and run big data workloads (see, e.g., FIG. 1A). In a multi-tenant cloud environment, the tenants' behaviors can vary from day to day. For big data workloads, even the application itself manifests frequent I/O bursts, which leads to highly fluctuating storage latency. With respect to the graphical representation shown in FIG. 1B, five virtual machines, existing in two storage pools, are used as Hadoop nodes. The hardware platform configuration can be found in the experimental setup section below. During the experimentation, applications of the HiBench suite are run iteratively. As can be seen, the average storage latency of the two storage pools is high with a large variance.
There are two types of performance variance that can occur in data centers: (1) time-variant I/O behavior of the workload and (2) new tasks initiating at any time. Table 1 shows the storage latency with its standard deviation during the execution of big data workloads. During the application execution, the storage latency and its standard deviation of one storage pool are measured. As can be seen in Table 1, the standard deviation of the storage latency is much larger than the average value. This indicates that big data workloads exhibit highly fluctuating I/O behaviors even within one virtual disk. The large standard deviation also reveals the bias of using a single average value to represent the storage latency for the entire execution period. A further investigation of application behaviors reveals that they all have highly varying I/O characteristics (e.g., OIO, Read/Write Ratio, Randomness, I/O size). Therefore, using the I/O characteristics of the entire daytime to predict the storage latency of the next day is inaccurate.
FIG. 2 illustrates a graphical representation of the average storage latency when running five Hadoop® VMs, while the idle VM is scheduled to run a database job during the execution. In a multi-tenant public cloud environment, each tenant can activate his/her rented virtual machines to run applications at any time. Therefore, it is not possible to predict the tenant behaviors. Even with a strict job scheduling policy, it is challenging to predict what type of application the tenant is going to execute (I/O intensive or not). In FIG. 2, six VMs are implemented on six storage pools. While the PageRank application runs in the background on five VMs, the remaining VM is assigned a database job at a certain time that runs for a predetermined period of time. The average storage latency of the six storage pools is measured and shown in FIG. 2. As illustrated, the average latency bumps up once the idle VM is activated to run. In real datacenters, this type of latency jump can occur even more frequently. The uncertainty of when a new application is going to be scheduled makes the storage latency prediction even harder. In summary, the system's performance behavior is very difficult, if not impossible, to predict. A datacenter administrator typically will have no idea what type of application is going to run at any point in time. Even under a strict scheduling policy, each application has time-variant I/O behaviors. Traditional storage management systems yield inaccurate performance predictions, because they use the performance statistics of one day to predict the storage latency of the next day.
TABLE 1
Storage latency (ms) when running big data workloads

Workload      Average Latency    Standard Deviation
PageRank      35.6               119.3
Bayes         24.4               102.2
HiveBench     2.0                6.4
Kmeans        29.3               71.1
Nutchindex    41.9               77.5
Sort          133.9              121.8
TeraSort      43.1               55.3
Wordcount     52.7               111.6
Since the storage latency is highly fluctuating and unpredictable, virtual disks may have to be migrated frequently in order to tune the I/O performance and to ensure the policy-based storage management works properly. However, the size of a virtual disk is usually too large to move quickly, hence making real-time storage management impossible. Worse, a multi-tenant public cloud environment requires 24/7 service. The conventional way of performing migration at night will not work in this situation. Hence, the system described herein addresses the problem of large virtual disks in order to implement real-time storage performance management.
Snapshot technology, which is usually used to make backups and checkpoints, may almost instantly redirect I/O requests to a new location (the snapshot file). Once a snapshot is made, all the upcoming write requests will be issued to the new location immediately. The read requests will be issued to the location of the latest data. If copy-on-read technology is enabled, the data blocks will be fetched once from the old location to the new location. Then, all the following reads will be issued to the new location. Embodiments of the subject system described herein leverage snapshot technology similar to that of a cache, where the snapshot file may be gradually filled with the destination storage. Typically, the use of the snapshot technology can change the I/O load more quickly than migration.
FIGS. 3B-C show a graphical representation of the performance when using storage migration (see FIG. 3A), where FIG. 3B shows Storage Latency versus time and FIG. 3C shows Storage IOPS versus time; while FIGS. 4B-4C show a graphical representation of the performance when using snapshot (see FIG. 4A), where FIG. 4B shows Storage Latency versus time and FIG. 4C shows Storage IOPS versus time. At the beginning, a disk workload running inside one virtual disk is placed at the source storage pool. The storage latency is around 1.5 ms. In FIG. 3B, migration of the virtual disk from source to destination using the default QEMU migration method takes a long time, and the storage latency jumps to a much higher value in both the source (around 70 ms) and destination (around 60 ms) storage pools. Consequently, the storage throughput (IOPS, shown in FIG. 3C) drops to a lower value. In FIG. 4B, a snapshot is processed and placed on the destination pool. As can be seen, the latency of the source storage pool immediately drops and the latency of the destination pool increases, because the write requests are served by the destination storage instantly, while the latency of the source storage remains at a low level to serve the read requests. After all the needed data blocks are fetched to the snapshot, the latency of the source storage further drops to zero and the workload completely runs on the destination storage. The same trend can be found for the throughput, as shown in FIG. 4C. Therefore, using snapshot technology to tune I/O performance is faster and has lower impact on the storage performance.
Despite the fact that snapshot can swiftly tune I/O performance, using the snapshot technology also poses several new challenges to existing storage management systems: 1) how to implement copy-on-write/read snapshot in real systems, 2) how to build a performance model for snapshot, 3) how to quickly track the storage performance, and 4) how to deal with small size virtual disks, which can be migrated quickly.
One or more of these challenges are addressed by embodiments of the subject lightweight storage management system and method that is based on the snapshot foundation described herein. Particularly, LightSRM redesigns the entire management stack to address these challenges. In the embodiment referred to as LightSRM, the snapshot file may be responsible for the performance tuning, while the virtual disk may be used to guarantee the functional correctness.
FIG. 5A displays a block diagram of the storage resource management system (LightSRM) described herein. On each virtualized server 10, the system implements a copy-on-write/read snapshot 12 on top of the original virtual disk 14 to hold the working set of the running applications. On a centralized datacenter management server 16, a storage manager 20 is implemented to perform performance load balancing and capacity planning. The performance tracking 22 and balancing 24 operate on the snapshots, while the capacity prediction 28 and capacity planning 30 manage the base virtual disks. Performance load balancing may be triggered more frequently (e.g., a 5-minute interval) than capacity planning (e.g., a 16-hour interval). The storage resource management system described herein may include a redesigned performance management 20, while incorporating the conventional method of capacity planning and prediction 26, since there is no interference between the performance management 20 and the capacity planning and prediction 26.
FIG. 5B shows a schematic of an embodiment of the subject system, incorporating a central management machine implementation, management software, rack servers, and storage arrays.
The original snapshot technology is used for backups and checkpoints. In the instance of a data loss event or system failure, a virtual disk can be easily reverted to the previously stored state. In an original snapshot design, copy-on-write technology is leveraged to copy data from a source disk to a new disk when the workload issues a “write” to disk instruction. Since the block size of a snapshot usually is not larger than the I/O size, there is no need to fetch data from old disks. The “read” requests, on the other hand, will be issued to the location of the latest data. Therefore, making an original snapshot redirects the write requests, and not the read requests.
FIG. 6 displays a block diagram for the process of redirecting I/O requests using copy-on-write and copy-on-read snapshots, which can be implemented in accordance with an embodiment of the subject invention. In order to redirect the read requests as well, embodiments of the subject system include copy-on-read in the snapshot design, as shown in FIG. 6. With copy-on-read technology, the read request may trigger the new snapshot to fetch the latest data from the old snapshot or the base virtual disk. After the data blocks are fetched to the new snapshot, the following reads to those data blocks will be served by the new snapshot, thereby redirecting the read requests to the new storage pool. Depending on the workload behavior, the time for the fetching process varies. Typically, it is much less than the migration time, since the working set size of a certain application is smaller than the virtual disk size. For the cases where the fetching process takes longer than the migration, the system may include a hybrid method, which uses migration if the migration time is shorter and uses the snapshot procedure if the snapshot transition time is shorter.
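As a concrete illustration of the redirection just described, the copy-on-write/read behavior can be sketched at block granularity in Python. This is a hedged, minimal model (not the QEMU implementation); the class name and the dictionary-backed "disks" are hypothetical stand-ins:

```python
class CowCorSnapshot:
    """Minimal sketch of copy-on-write/read redirection at block granularity.

    `backing` stands in for the old snapshot / base virtual disk on the
    source pool; `local` is the new snapshot file on the destination pool.
    """

    def __init__(self, backing):
        self.backing = backing      # dict: block number -> data (old location)
        self.local = {}             # blocks already present in the new snapshot

    def write(self, block, data):
        # Copy-on-write: every new write goes to the new snapshot immediately.
        self.local[block] = data

    def read(self, block):
        if block not in self.local:
            # Copy-on-read: fetch the latest data once from the old location...
            self.local[block] = self.backing[block]
        # ...then all following reads are served by the new snapshot.
        return self.local[block]
```

Once the working set has been written or read once, the backing disk no longer serves application I/O, which corresponds to the "completely runs on the destination storage" state described above.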
Combining both copy-on-write and copy-on-read, in accordance with embodiments of the subject invention, enables the snapshots to quickly divert I/O requests to the destination storage. If the old snapshot is kept, the system may save the I/O traffic and the time of transferring the old snapshot; however, as time goes by, there may be many snapshots scattered around the storage pools, which will yield a high level of fragmentation. Worse, a long chain of snapshots not only makes the management even harder but also adds high indexing overhead to each I/O request. Therefore, LightSRM may merge the old snapshot into the base virtual disk at the moment the new snapshot is created, as shown in FIG. 6. Since the size of the old snapshot (the workload working set size) is much smaller than the virtual disk size, the merging time is also shorter than the migration time. In sum, in specific embodiments of the subject invention, there are two phases of using copy-on-write/read snapshots to redirect I/O requests. During the first phase (Phase #1: Merging and Fetching), LightSRM may merge the old snapshot into the base virtual disk, while LightSRM fetches the needed data to the new snapshot. As shown in FIG. 6, additional I/O traffic will be generated on storage pools 1, 2, and 3. During the second phase (Phase #2: Merging or Fetching), either the merging or the fetching process has finished, while the unfinished process continues transmitting data. After Phase #2, the workload may completely run on the destination storage. Therefore, the time used for Phases #1 and #2 defines the transition time for the subject modified snapshot process. Similarly, the transition time for migration is the time necessary to complete a migration.
FIG. 7 illustrates a graphical representation of the latency stable period, latency changes, and latency spikes in storage latency with respect to time. Since embodiments of the subject system described herein can quickly rebalance the performance load by just taking a snapshot, a new, fast, and accurate way to track the storage latency has also been implemented in specific embodiments of the subject system. I/O behaviors are not entirely random. For a certain period of time, the workload exhibits stable behaviors, which results in relatively stable storage latency (shown as a dotted line in FIG. 7). The length of the stable period can be workload dependent. In those periods, the average value can represent the storage latency. Similar issues have been identified and tackled in memory usage tracking and network traffic estimation. Embodiments of the subject system may include a storage latency prediction that is tailored and customized using an exponentially weighted moving average (EWMA) method.
In a specific embodiment, referred to as LightSRM, a self-adjustable sliding window tracking mechanism is employed. Initially, the sliding window size is set to q. In the sliding window, LightSRM will sample the latency of each storage pool q times. Then, EWMA is applied to those q samples. The equation is as below:
L_t = α · (L_{t−1} + (1−α) · L_{t−2} + . . . + (1−α)^{i−1} · L_{t−i})   (1)
where L_{t−i} denotes the ith most recent latency sample and 0<α<1. In the system described herein, α is set to 0.5. L_t is calculated for all the storage pools in the system. These values will be used for the load balancing schemes.
As time elapses, new sample points may be added to the sliding window, while the oldest point will be left out. During stable periods, this method can predict the storage latency accurately. However, in the case of the sudden latency change as shown in FIG. 7, the weighted average value may not be able to quickly detect the new storage latency within a few sample points. In order to address this issue, specific embodiments of the subject system may continuously compare the deviation of the latest sample point with the standard deviation of the entire sampling window. If the deviation of the latest sample point is much higher than the standard deviation in the sliding window, the system marks the point as deviant. If a deviant point keeps appearing in the most recent sample points, the system may assume that it is highly possible that the storage latency has changed to another value for the next several periods. In a specific embodiment, the weights of previous samples may be intentionally set to 0, such that the deviant point determines the latency of the next period.
A fast EWMA (short sliding window) can quickly detect latency changes. However, a fast EWMA can be easily misled by a deviant point (i.e., the latency spike as shown in FIG. 7). A slow EWMA (long sliding window) can provide a smooth estimate. But a slow EWMA might cover two different I/O behaviors and report an inaccurate latency value. A known method on memory tracking [32] maintains both EWMAs, and the maximum value is chosen as the memory working set size. However, in latency prediction, always choosing the maximum value is not accurate. Therefore, LightSRM leverages a self-adjustable sliding window. When the reported latency value keeps flipping, the length of the sliding window may be increased by Δq. Alternatively, if the reported latency does not change for a long time, the length of the sliding window may be decreased by Δq. In a specific embodiment, Δq is set to be one sample point.
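The tracking scheme described above (equation (1) plus deviant-point handling and window self-adjustment) can be sketched as follows. This is an illustrative Python model, not the LightSRM implementation; the 3-sigma deviance test, the 20% "flipping" threshold, and the weight normalization are assumptions made for the sketch, not values taken from the disclosure:

```python
import statistics


class LatencyTracker:
    """EWMA latency predictor with a self-adjustable sliding window (sketch)."""

    def __init__(self, q=8, alpha=0.5, dq=1):
        self.q, self.alpha, self.dq = q, alpha, dq
        self.samples = []            # most recent sample last
        self.last_prediction = None

    def ewma(self):
        # Equation (1): the newest sample gets the largest weight.  Weights
        # are normalized here so short windows are not biased low (assumption).
        recent = self.samples[-self.q:]
        weights = [self.alpha * (1 - self.alpha) ** i for i in range(len(recent))]
        return sum(w, s_w in ()) if False else \
            sum(w * s for w, s in zip(weights, reversed(recent))) / sum(weights)

    def add(self, latency):
        self.samples.append(latency)
        window = self.samples[-self.q:]
        if len(window) >= 4:
            mean = statistics.mean(window[:-1])
            stdev = statistics.pstdev(window[:-1])
            deviant = abs(latency - mean) > 3 * max(stdev, 1e-9)
            prev_deviant = abs(self.samples[-2] - mean) > 3 * max(stdev, 1e-9)
            if deviant and prev_deviant:
                # Deviant points keep appearing: assume the latency level moved,
                # so zero out the weight of history and follow the new value.
                self.samples = self.samples[-2:]
        prediction = self.ewma()
        # Self-adjustment: grow the window by dq when predictions keep flipping,
        # shrink it by dq when the reported latency stays stable.
        if self.last_prediction is not None:
            if abs(prediction - self.last_prediction) > 0.2 * max(self.last_prediction, 1e-9):
                self.q += self.dq
            elif self.q > 2:
                self.q -= self.dq
        self.last_prediction = prediction
        return prediction
```

Feeding the tracker a stable latency stream yields that value; after a sustained level shift, the deviant-point rule lets the prediction converge to the new level within a few samples rather than averaging across both regimes.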
FIG. 8 illustrates a graphical representation of the EWMA tracking for the storage latency. Storage latency tracking is part of the storage manager 20, running on a control server 16. The latency of the storage pools 18 is reported by pt-diskstats [14] and sent to the storage manager 20. As can be seen in FIG. 8, the latency tracking mechanism can successfully track the latency changes and report an accurate latency value for the next period.
In the realm of time series analysis, embodiments of the subject system may use one or more of a variety of prediction methods, such as exponential smoothing, moving averages with different sizes of sliding window, and autoregression. All serve the same purpose: to predict a future value from historical values. These are compared in detail with respect to FIG. 13 as follows.
Performance modeling consists of two parts: workload modeling and storage device modeling. The workload model can be constructed using its outstanding IOs (OIOs), I/O size, Read/Write Ratio, and Randomness. The workload model may be defined as:
WL_i = f(OIO, IOsize, Read Ratio, Randomness)   (2)
where the total workload on one storage pool is the summation of all WLi on that storage pool. To convert the total workload to storage latency, the following equation may be used:
L_predict = Σ WL_i / ρ   (3)
where ρ is calculated via the storage device model (L-Q model [4]). A workload injector may be used to proactively test the storage pool and build the L-Q model, when there is no job running in the system.
For workload modeling, LightSRM leverages the same model as described above with the exception that the performance models are built to support snapshot. According to the phases described above, the workload model is as follows. During the first phase of merging and fetching, additional I/O loads may be generated to all three storage pools as shown in FIG. 6. For storage pool 1, which hosts the base virtual disks, the additional workload WLA can be modeled as:
WLA_sp1 = WLA_Merge + WLA_fetch = f(1, 512 KB, 0%, 0%) + f(OIO, IOsize, 100%, randomness)   (4)
For storage pool 2, which hosts the old snapshot, the additional workload WLA can be modeled as:
WLA_sp2 = WLA_Merge + WLA_fetch = f(1, 512 KB, 100%, 0%) + f(OIO, IOsize, 100%, randomness)   (5)
For storage pool 3, which hosts the new snapshot, the additional workload WLA can be modeled as:
WLA_sp3 = WLA_workload + WLA_fetch = f(OIO, IOsize, Read Ratio, randomness) + f(OIO, IOsize, 0%, randomness)   (6)
As can be seen in equations (4)-(6), modeling the merging process may be relatively easy, because the data copy operation has fixed values of OIO, IOsize, Read Ratio, and Randomness for a given platform. On the other hand, the behavior of the fetch process depends entirely on the workload behavior (OIO, IOsize, and Randomness). The Read Ratio is 100% for storage pools 1 and 2 (the senders) and 0% for storage pool 3 (the receiver).
During the second phase of merging or fetching, one of the merging and fetching processes has finished. If the merging process finishes first, WLA_Merge is set to 0. Otherwise, WLA_fetch is set to 0.
During both the first and the second phases, additional I/O workload may be added to the workload model WLi in equation (2). After the second phase, the workload completely runs on destination, in which case equation (2) itself is adequate.
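Under the stated assumptions, the additional-workload terms of equations (4)-(6) can be assembled programmatically. In this Python sketch the workload model f of equation (2) is a stub that merely records the four I/O characteristics, since the disclosure does not give its closed form; all function names are illustrative:

```python
def f(oio, io_size_kb, read_ratio, randomness):
    # Stub for the workload model of equation (2): simply carry the
    # characteristics (OIO, IOsize, Read Ratio, Randomness) as a tuple.
    return [(oio, io_size_kb, read_ratio, randomness)]


def additional_workloads(oio, io_size_kb, read_ratio, randomness,
                         merging=True, fetching=True):
    """Additional load (WLA) per storage pool during Phases #1 and #2.

    Pool 1 hosts the base virtual disk, pool 2 the old snapshot, and
    pool 3 the new snapshot.  The merge copy uses fixed characteristics
    (1 OIO, 512 KB, sequential); the fetch mirrors the workload's reads.
    """
    merge_write = f(1, 512, 0.0, 0.0) if merging else []   # merge into pool 1
    merge_read = f(1, 512, 1.0, 0.0) if merging else []    # merge read from pool 2
    fetch_read = f(oio, io_size_kb, 1.0, randomness) if fetching else []   # senders
    fetch_write = f(oio, io_size_kb, 0.0, randomness) if fetching else []  # receiver
    return {
        "sp1": merge_write + fetch_read,                                   # eq (4)
        "sp2": merge_read + fetch_read,                                    # eq (5)
        "sp3": f(oio, io_size_kb, read_ratio, randomness) + fetch_write,   # eq (6)
    }
```

Passing merging=False or fetching=False models Phase #2, where WLA_Merge or WLA_fetch is set to 0 once the corresponding process finishes.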
For the storage device modeling, equation (3) is inverted to compute the device characteristic ρ as below:
ρ = Σ WL_i / L   (7)
where Σ WL_i is computed using the I/O characteristics (OIO, IOsize, R/W ratio, randomness) and L is obtained from the last latency tracking period. Since the sampling period may be short (5 minutes in our prototype), the chance that ρ changes in the next period may be low. Besides, the storage latency and the workload I/O characteristics in equation (7) may be monitored during the sampling period. Therefore, the device characteristic ρ can adjust itself in each sampling period, if the underlying device characteristics change.
In a traditional storage management system, ρ is initially obtained from an off-line device modeling. The workload injector adjusts ρ when the system is idle. LightSRM, however, uses the runtime performance data to recalibrate ρ online even when the system is busy running applications.
Embodiments of the subject system, such as LightSRM, can use the performance data monitored in the last tracking period to calculate the device characteristics, where the chance that ρ changes in such a short period is low. In addition, the system may recalibrate ρ in each period, such that any change in the device characteristics can be captured in a timely manner. Given that the system has derived a workload model for snapshots and the storage device characteristics, the model can predict the latency changes when the workloads are rearranged among the storage pools. A load balance algorithm, such as described below, can benefit from this feature.
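Because equations (3) and (7) are inverses of each other, the online recalibration amounts to one division per sampling period. A minimal Python sketch, assuming the per-disk workload models have already been reduced to scalar load values (an assumption for illustration; the function names are hypothetical):

```python
def recalibrate_rho(workloads, measured_latency):
    # Equation (7): derive the device characteristic from the total modelled
    # load and the latency actually measured in the last tracking period.
    return sum(workloads) / measured_latency


def predict_latency(workloads, rho):
    # Equation (3): latency expected if these workloads share the pool.
    return sum(workloads) / rho
```

For example, if a pool served a total modelled load of 120 units at a measured 4 ms, then ρ = 30; placing a total load of 150 units on that pool predicts 5 ms for the next period.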
FIG. 9 displays a timeline for the time sequence of LightSRM load balancing. The goal of load balancing is to optimize resource utilization and minimize storage latency by distributing workloads across storage pools. LightSRM pursues the same goal but in a fine-grained way. FIG. 9 shows a visual view of how LightSRM performs in real time. In a monitoring period (T), all performance characteristics and storage latency may be sampled. At the decision point (D), the latency prediction for the next period is computed, as described in more detail above. The system acquires all the performance data (e.g., OIO, IOsize, etc.) during the monitoring period. Performance models, such as the performance models described herein, are used to predict the possible performance if a certain workload is moved to another storage pool. Based on the performance data, a policy-based storage management algorithm (e.g., load balancing) may be executed. In the load balance (LB) period, the system actually moves the workload.
For load balancing, LightSRM iteratively performs the following 3 steps: 1) find the storage pool with the maximum latency (max_storage) and the storage pool with the minimum latency (min_storage) in the sliding window; 2) use the performance model (as noted herein) to check whether moving one of the virtual disks from max_storage to min_storage would lower the average storage latency and the latency difference across storage pools; and 3) if yes, add this move to the migration list and update the internal database to record the movement; if no, quit the iteration and output the migration list. The iteration continues until no further move that lowers the storage latency can be found.
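The three steps above can be sketched as a greedy loop in Python. Scalar per-disk loads and per-pool device characteristics stand in for the full performance model, and all names are illustrative assumptions:

```python
def plan_load_balance(pool_latency, disk_load, disk_pool, rho):
    """Greedy sketch of the three-step LightSRM iteration.

    pool_latency: predicted latency per pool (EWMA tracker output)
    disk_load:    modelled scalar load of each virtual disk (its WL_i)
    disk_pool:    current pool hosting each virtual disk (updated in place)
    rho:          device characteristic per pool, equation (7)
    Returns a migration list of (disk, source_pool, destination_pool).
    """
    # Convert latency back to modelled load per pool (equation (3) inverted).
    load = {p: pool_latency[p] * rho[p] for p in pool_latency}

    def latency(p):
        return load[p] / rho[p]

    moves = []
    while True:
        src = max(load, key=latency)            # step 1: max-latency pool
        dst = min(load, key=latency)            #         min-latency pool
        best = None
        for disk, pool in disk_pool.items():
            if pool != src:
                continue
            # Step 2: does moving this disk lower the worst latency (and
            # therefore the latency difference across the pools)?
            new_src = (load[src] - disk_load[disk]) / rho[src]
            new_dst = (load[dst] + disk_load[disk]) / rho[dst]
            gain = latency(src) - max(new_src, new_dst)
            if gain > 0 and (best is None or gain > best[1]):
                best = (disk, gain)
        if best is None:
            return moves                        # no further improving move
        disk = best[0]                          # step 3: record the move and
        moves.append((disk, src, dst))          # update the internal state
        load[src] -= disk_load[disk]
        load[dst] += disk_load[disk]
        disk_pool[disk] = dst
```

With two pools at 10 ms and 1 ms and two disks on the hot pool, the loop moves one disk and then stops, because no remaining move would lower the worst-case latency further.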
When the load balance algorithm is completed, the system may generate a list of migration suggestions, where either the snapshot process or the migration process is indicated as the best process of operation. Notably, the copy-on-write/read snapshot may not be the best option for all I/O requests: the system may recognize that the transition time of migration could actually be shorter than that of a snapshot for some small virtual disks. Accordingly, specific embodiments of the subject system and method for storage resource management use a hybrid method, which selects the migration process if the migration time is shorter and selects the snapshot process if the snapshot transition time is shorter. To estimate the transition times of the snapshot and migration processes, the following functions apply:
TransitionTime_snapshot = max{Time_Fetching, Time_Merging} = max{SnapshotSize / Throughput_appRead, SnapshotSize / Throughput_merge}
where SnapshotSize is the size of the snapshot file, Throughput_merge is the throughput that the merge process can achieve, and Throughput_appRead is the read throughput that the workload generates. Note that the behavior of the fetch process depends on the workload: when the workload reads data, the fetch process reads that data from the old storage pool.
TransitionTime_migration = (VirtualDiskSize + WorkingSetSize) / Throughput_migration
where VirtualDiskSize is the size of the entire virtual disk and Throughput_migration is the throughput associated with the migration process. According to the implementation of migration, the system may also copy the data that is changed by the application during the migration (the working set data). Thus, WorkingSetSize is also considered. For each suggestion on the migration list, LightSRM calculates the transition times of both the migration and snapshot processes. Then, LightSRM selects the method that has the shorter transition time. By doing this, LightSRM can always choose the fastest way to perform load balancing.
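The two estimates and the selection rule follow directly from the formulas above. A Python sketch, assuming consistent units (e.g., sizes in GB and throughputs in GB/s); the function names are illustrative:

```python
def snapshot_transition_time(snapshot_size, tp_app_read, tp_merge):
    # max{Time_Fetching, Time_Merging}: both act on the snapshot
    # (working set) size, not on the full virtual disk.
    return max(snapshot_size / tp_app_read, snapshot_size / tp_merge)


def migration_transition_time(vdisk_size, working_set_size, tp_migration):
    # Migration moves the whole virtual disk plus blocks dirtied meanwhile.
    return (vdisk_size + working_set_size) / tp_migration


def choose_method(snapshot_size, vdisk_size, working_set_size,
                  tp_app_read, tp_merge, tp_migration):
    # Hybrid rule: pick whichever process finishes its transition sooner.
    t_snap = snapshot_transition_time(snapshot_size, tp_app_read, tp_merge)
    t_mig = migration_transition_time(vdisk_size, working_set_size, tp_migration)
    return ("snapshot", t_snap) if t_snap <= t_mig else ("migration", t_mig)
```

For a 500 GB virtual disk with a 4 GB working set, the snapshot path wins easily; shrink the disk to 8 GB and migration becomes the faster choice, which is exactly the motivation for the hybrid rule.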
Experimentation
Experimental Setup
To evaluate LightSRM, a testbed consisting of six storage pools is built, each of which has a different configuration, as shown in Table 2. The storage pools are connected to servers through Fibre Channel. All six storage pools are shared by Dell PowerEdge R710 servers, which are equipped with 2.4 GHz Intel® Xeon® CPUs and 24 GB of memory. The operating system on the hosts is Ubuntu 13.04. The kernel version is 3.11.0-12 with KVM modules enabled. The latest QEMU version, 1.7.50, is used. Up to 16 virtual machines were generated, each of which has 1 vCPU and 1 GB of memory. Their virtual disks are placed across the six storage pools. The size of the virtual disks ranges from 16 GB to 1 TB. The storage manager of LightSRM may run on a Dell OptiPlex 980 desktop with a 2.7 GHz Intel Core i7 and 8 GB of memory. The communication between the storage manager and the QEMU/KVM servers is supported by libvirt, version 1.2.1 [15].
Implementation
A specific embodiment of LightSRM has two parts: the copy-on-write/read snapshot and the storage manager. The QEMU block device driver (qemu/block.c) is modified to support both copy-on-write and copy-on-read at the same time. To monitor the virtual machine I/O characteristics (e.g., outstanding IOs, IOsize, Read/Write ratio, and randomness), several performance counters in the QEMU block device driver are instrumented. These characteristics can be obtained via either the QEMU command line interface (qmp) or the libvirt API. In order to track the storage performance, the pt-diskstats tool [14] is used, which can report the real-time storage latency and throughput. The storage manager, including performance modeling and load balancing, is implemented in C. The storage manager runs on a separate machine and leverages the libvirt API to communicate with the QEMU hosts and storage pools. For comparison, the Pesto scheme [4] is also implemented on this QEMU/KVM platform.
TABLE 2
Storage pool configurations

             No. of disks   Interface   RAID Level   Disk Size   Vendor
Storage_1    2              FC          1            2 TB        IBM
Storage_2    3              FC          5            2 TB        IBM
Storage_3    2              FC          0            1.3 TB      IBM
Storage_4    2              FC          1            0.7 TB      IBM
Storage_5    5              FC          5            2.7 TB      IBM
Storage_6    4              FC          10           2 TB        IBM
Workloads and Metrics
On the testbed, two Hadoop platforms are deployed, which occupy up to ten virtual machines in total. On the Hadoop platform, the HiBench suite [11] is run as big data applications. Six virtual machines are also created with IOmeter [31] installed to mimic the I/O behavior of traditional server applications. The behaviors are summarized in Table 3, and they are also used and verified in several prior studies [3, 4, 5, 28]. In general, a multi-tenant cloud environment is mimicked with two Hadoop clusters running regularly. The cloud tenants can also schedule their server applications to run at any time. In the evaluation section, several execution scenarios are created based on this setup.
TABLE 3
IOmeter settings for simulated workloads

Type          OIO   IO Size   Read Ratio   Randomness
Workstation   8     8         80%          80%
OLTP          12    8         70%          100%
Exchange      8     4         67%          100%
In order to compare LightSRM with alternative designs, the following metrics are evaluated: 1) transition time, the time between when the workload is scheduled to move and when the workload runs completely on the destination; 2) application-observed latency, which measures the performance impact of performing storage management; 3) distribution of errors/residuals, which measures the accuracy of the prediction and modeling; and 4) storage latency of each storage pool, which shows how well the LightSRM design can rebalance storage performance.
Results and Analysis
Transition Time
As discussed herein, the transition time of the snapshot method can be longer than the migration time when the virtual disk is small enough. This section analyzes the factors that determine the transition time.
For the snapshot method, the transition time is the maximum of the merging time and the fetching time. Since the merging process copies the data from the old snapshot to the base virtual disk, the merging time is determined by the working set size of the workload. The fetching time is determined by both the working set size and the read throughput of the workload: higher read throughput drives more data to the new snapshot per second, shortening the fetching process.
FIG. 10 displays a three-dimensional (3D) graphical representation of the transition time with respect to the working set size and I/O throughput when using copy-on-write and copy-on-read snapshots. The transition time of the snapshot method was measured with different working set sizes and throughputs. First, the larger the read throughput, the shorter the transition time at a given working set size: with larger read throughput, the volume of data driven to the new snapshot per second is larger, which leads to a faster fetching process. In contrast, the larger the working set size, the longer the transition time, because both the merging and fetching processes need more time to copy a larger working set.
FIG. 11 shows a 3D graphical representation of the transition time with respect to the working set size and I/O throughput when using migration. For migration, the transition time depends on the virtual disk size and the size of the data that is modified during the copy process. As can be seen in FIG. 11, larger throughput leads to longer migration time: with larger throughput, more data is modified during the copy process, and the migration process has to retransmit that modified data. Therefore, the higher the workload's write throughput, the longer the transition time.
Cross-examining FIGS. 10 and 11, the snapshot method is preferable when the workload exhibits higher read throughput and has a smaller working set size, while the migration method is more efficient when the workload exhibits lower write throughput. Modern data centers, which host multi-tenant public clouds, see workloads with variable working set sizes and different I/O behaviors. Therefore, a hybrid method that can dynamically choose the faster method (snapshot or migration) online, as discussed herein, can be beneficial.
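The trade-off above can be illustrated with simple first-order models of both transition times, from which a hybrid policy picks the faster method. The estimators below are illustrative simplifications under stated assumptions, not LightSRM's actual models.

```python
# Sketch of the hybrid relocation policy: estimate both transition times
# from first-order models and pick the faster method. All quantities are
# in consistent units (e.g., GB and GB/s); the models are illustrative.

def snapshot_transition_time(working_set, read_tput, merge_bw):
    # Transition ends when both merging and fetching finish: merging copies
    # the working set to the base disk, fetching is driven by workload reads.
    merge_time = working_set / merge_bw
    fetch_time = working_set / read_tput
    return max(merge_time, fetch_time)

def migration_transition_time(disk_size, write_tput, copy_bw):
    # Data written during the copy must be retransmitted; model a single
    # retransmission round of the dirtied data.
    copy_time = disk_size / copy_bw
    dirtied = write_tput * copy_time
    return (disk_size + dirtied) / copy_bw

def choose_method(working_set, disk_size, read_tput, write_tput, copy_bw):
    snap = snapshot_transition_time(working_set, read_tput, copy_bw)
    migr = migration_transition_time(disk_size, write_tput, copy_bw)
    return "snapshot" if snap < migr else "migration"
```

Consistent with the figures, a large disk with a small, read-heavy working set favors the snapshot method, while a small disk favors migration.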
Performance Impact on Workloads
FIGS. 12A-B show bar charts of the performance impact on workloads, where FIG. 12A shows the impact when using migration and FIG. 12B shows the impact when using snapshots. Both snapshot and migration technology have an impact on workload performance: both generate additional disk traffic, which competes for storage resources with the currently running workloads. The observed latency changes of the workloads were measured while the snapshot process and the migration were performed. Three different types of workloads (OLTP, Workstation, and Exchange) were run. At the beginning, they run on a slow storage pool and are scheduled to move to a faster storage pool.
FIG. 12A shows the latency changes when the workload was moved using storage migration. As can be seen, the workload latency jumps by as much as 4× during the migration, although the latency eventually drops as the migration finishes. The copy process of the migration occupies disk resources on both the source and destination storage, which inevitably degrades the performance of the running application.
FIG. 12B shows the latency changes when the workload was moved using the snapshot process. The latency observed by the workload during the transition is between the latencies of the slow and fast storage pools. The moment the snapshot is taken, all write requests of the workload are served by the new storage pool, and after the first batch of read requests has been fetched, the new storage pool serves subsequent reads as well. In addition, the merging time is much shorter than the migration time, so the impact on workload latency is smaller (on average, 90% less overhead in the experiments).
Accuracy of Latency Prediction
FIG. 13 illustrates a graphical representation of the distribution of residuals with respect to count, while FIG. 14 illustrates the distribution of residuals with respect to count both with and without snapshot modeling. There are two predictions in LightSRM: (1) latency prediction for the next period and (2) latency prediction when a certain virtual disk is moved. The traditional method uses the 95th percentile of the latency samples as the latency value for the next period. LightSRM uses an exponentially weighted moving average (EWMA) with an adjustable window size to predict the storage latency. In time series analysis, exponential smoothing (ExpoS), moving averages with different lags (MA_xL), and auto regression (AR) can also be used for prediction. FIG. 13 shows the distribution of residuals when the different prediction methods are used; an accurate predictor yields a residual distribution concentrated near 0. In FIG. 13, MA_xL means a weighted moving average with a window length of x. As can be seen, the traditional prediction method (95th percentile) yields the most inaccurate results, with residuals mostly far from 0. MA_xL, AR, and ExpoS yield similar results. EWMA with an adjustable window shows the most accurate results, as most of its residuals are around 0.
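An EWMA predictor with an adjustable window can be sketched as follows. The adjustment rule used here (shrink the window after a large residual so the predictor reacts faster, grow it when predictions are accurate) is an assumed heuristic for illustration; the exact adjustment policy is not specified in this section.

```python
# Sketch of EWMA-based latency prediction with an adjustable window.
# The window-adjustment heuristic is illustrative, not LightSRM's policy.

class EwmaPredictor:
    def __init__(self, window=8, min_window=2, max_window=32, tolerance=0.25):
        self.window = window
        self.min_window = min_window
        self.max_window = max_window
        self.tolerance = tolerance  # relative residual that triggers shrinking
        self.prediction = None

    def update(self, sample):
        """Feed one latency sample; return the prediction for the next period."""
        if self.prediction is None:
            self.prediction = float(sample)
            return self.prediction
        residual = abs(sample - self.prediction) / max(sample, 1e-9)
        if residual > self.tolerance:
            # Large miss: halve the window so recent samples dominate.
            self.window = max(self.min_window, self.window // 2)
        else:
            # Accurate prediction: widen the window for smoother output.
            self.window = min(self.max_window, self.window + 1)
        alpha = 2.0 / (self.window + 1)  # standard EWMA smoothing factor
        self.prediction = alpha * sample + (1 - alpha) * self.prediction
        return self.prediction
```

Under steady latency the predictor tracks the constant exactly; after a latency spike, the shrunken window pulls the prediction toward the new level more quickly than a fixed-window EWMA would.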
FIG. 14 shows the prediction residuals when a snapshot is created on the new storage pool. As can be seen, without snapshot modeling, most of the residuals are negative, meaning most predicted values are lower than the actual latency. When the snapshot model is used to capture the additional traffic generated by snapshots, the residuals are closer to 0. Therefore, LightSRM can accurately capture system performance by modeling the snapshots.
Storage Policy-Based Management for Big Data Workloads
FIGS. 15A-B display graphical representations of the storage performance load balance, where FIG. 15A shows the balance for LightSRM (snapshot) and FIG. 15B shows the balance for the traditional method (migration). LightSRM is deployed in a big data cloud environment, where two execution scenarios were designed to test how well LightSRM performs: Case #1: Performance Imbalance at the Beginning, and Case #2: Unexpected Scheduled Jobs. With respect to the first case, in a storage system with four storage devices (storage 1˜4 in Table 2), one VM is placed on storage 1, two VMs are placed on storage 2, one VM is placed on storage 3, and no VM is placed on storage 4. The sizes of the virtual disks may range from 400 GB to 1 TB. PageRank from the HiBench suite may be run in the virtual machines. So, at the beginning, the system is imbalanced (storage 2 is overloaded while storage 4 is idle).
As can be seen in FIG. 15A, LightSRM detects the imbalance at the beginning and decides to move one of the VMs from storage 2 to storage 4. Since the virtual disk size is large, taking a snapshot is faster. LightSRM makes a snapshot on storage 4, so the performance is quickly balanced, as shown in FIG. 15A. Notably, most big data workloads tend to read their data at the beginning of execution; therefore, the transition time of the snapshot is very short, because the workload quickly fetches the data it needs to the new location. If the migration method is used, as in the traditional method in FIG. 15B, the migration time is very long due to the large virtual disk size. As can be seen, the migration cannot even finish within the workload execution, and the performance of the storage pools remains imbalanced until the end of the workload execution.
FIG. 16 displays a graphical representation of the I/O size of PageRank with respect to time during execution. Investigating the application behavior, it is notable that big data workloads tend to have large I/O sizes and different I/O patterns among computing nodes (virtual machines). Assuming the network bandwidth is not saturated, a large I/O size usually leads to high disk throughput, which helps copy the data to the new location when using snapshots. Therefore, in general, using snapshots to tune performance is preferable in light of big data workloads. However, as can be seen in the PageRank example (shown in FIG. 16), the I/O behavior of a big data application varies across different virtual machines during execution. Therefore, it can be desirable to introduce the hybrid method, as described herein and implemented in a specific embodiment of LightSRM, which can dynamically choose the faster tuning method based on the workload behavior.
FIG. 17 shows a block diagram of the initial virtual disk placement. As shown in FIG. 17, two Hadoop platforms (VMh11˜VMh15, VMh21˜VMh25) are deployed on six storage pools. There are also six VMs (VMinj1˜VMinj6) installed with IOmeter. Initially, they are all idle and balanced. In the experiments, the HiBench suite is run on both Hadoop platforms. At a certain time during the execution, three VMs from VMinj1˜VMinj6 are randomly selected to run IOmeter with the traditional server application configurations shown in Table 3. The virtual disk sizes for VMh1x and VMh2x range from 200 GB to 1 TB, while the size for VMinjx is 16 GB. Therefore, a scenario is intentionally created in which some VMs are potentially suitable for migration while others are not.
FIG. 18A displays a graphical representation of the average latency of the storage pools with respect to time, and FIG. 18B displays the standard deviation of the latency across all the storage pools with respect to time. The traditional management scheme uses migration for all data movement, even if the virtual disk is very large, as with VMhx. An embodiment of LightSRM implementing the hybrid method described herein can dynamically choose a suitable way to relocate virtual disks. FIG. 18A shows the average latency of the six storage pools, which exhibits latency spikes if a suddenly scheduled job is not handled in time. FIG. 18B shows the standard deviation of the latency across the six storage pools, which reflects how well the storage pools are balanced. As can be seen, LightSRM has 91% lower average storage latency (28 ms) than the traditional method (312 ms). In addition, LightSRM shows a 93% lower standard deviation (49 ms) across storage pools. Looking into the reasons behind those numbers, LightSRM takes two snapshots for the large VMs and uses three migrations for the small VMs, as shown in Table 4. As can be seen in Table 4, the transition times of those moves are relatively short. In contrast, the traditional method takes around 1 hour to migrate the large virtual disks (VMh21, VMh22). These lengthy migrations generate high I/O traffic in the system, which leads to high latency (as shown in FIG. 18A). Worse, the randomly scheduled VMs are not handled in a timely manner, which results in a highly imbalanced storage system, as shown in FIG. 18B. Therefore, LightSRM is more capable of handling situations with both large and small virtual disks.
TABLE 4
Load balance suggestions (rounds with no suggestions are excluded)

Round   Balancing Decisions                    Time
2       Snapshot VMh22 from Storage 1 to 2     12 s
4       Migrate VMinj3 from Storage 2 to 5     85 s
4       Migrate VMinj4 from Storage 2 to 6     70 s
9       Snapshot VMh21 from Storage 1 to 3     39 s
15      Migrate VMinj1 from Storage 6 to 1     71 s
FIG. 19 shows a bar chart of the additional data traffic generated by the load balancing management. When migration is used, the additional data traffic is roughly equal to the sum of the sizes of the migrated virtual disks. When a snapshot is used, the additional data traffic is double the workload's working set size (one copy for the merging process, one for the fetching process). If the virtual disk is smaller than double the working set size, migration actually yields less additional traffic. Therefore, using the hybrid method, as shown in FIG. 19, can achieve much less data traffic than either method alone (44% less than the snapshot method and 91% less than the migration method in the experiment).
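The traffic accounting described above can be expressed directly. The functions below are a minimal sketch of that arithmetic; the function names are illustrative.

```python
# Sketch of the relocation-traffic comparison: migration moves roughly the
# whole virtual disk, while a copy-on-write/read snapshot moves about twice
# the working set (once for merging, once for fetching).

def migration_traffic(disk_size):
    return disk_size

def snapshot_traffic(working_set):
    return 2 * working_set

def hybrid_traffic(disk_size, working_set):
    # The hybrid method can pick whichever move generates less traffic;
    # migration wins when the disk is smaller than double the working set.
    return min(migration_traffic(disk_size), snapshot_traffic(working_set))
```

For example, a 16 GB injected VM disk is cheaper to migrate than to snapshot with a 40 GB working set, while a 1 TB Hadoop VM disk is far cheaper to relocate via snapshot.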
Aspects of the invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention may be practiced with a variety of computer-system configurations, including multiprocessor systems, microprocessor-based or programmable-consumer electronics, minicomputers, mainframe computers, and the like. Any number of computer-systems and computer networks are acceptable for use with the present invention.
Specific hardware devices, programming languages, components, processes, protocols, and numerous details including operating environments and the like are set forth to provide a thorough understanding of the present invention. In other instances, structures, devices, and processes are shown in block-diagram form, rather than in detail, to avoid obscuring the present invention. But an ordinary-skilled artisan would understand that the present invention may be practiced without these specific details. Computer systems, servers, work stations, and other machines may be connected to one another across a communication medium including, for example, a network or networks.
As one skilled in the art will appreciate, embodiments of the present invention may be embodied as, among other things: a method, system, or computer-program product. Accordingly, the embodiments may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. In an embodiment, the present invention takes the form of a computer-program product that includes computer-useable instructions embodied on one or more computer-readable media.
Computer-readable media include both volatile and nonvolatile media, transitory and non-transitory, transient and non-transient media, removable and nonremovable media, and contemplate media readable by a database, a switch, and various other network devices. By way of example, and not limitation, computer-readable media comprise media implemented in any method or technology for storing information. Examples of stored information include computer-useable instructions, data structures, program modules, and other data representations. Media examples include, but are not limited to, information-delivery media, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD), holographic media or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage, and other magnetic storage devices. These technologies can store data momentarily, temporarily, or permanently.
The invention may be practiced in distributed-computing environments where tasks are performed by remote-processing devices that are linked through a communications network. In a distributed-computing environment, program modules may be located in both local and remote computer-storage media including memory storage devices. The computer-useable instructions form an interface to allow a computer to react according to a source of input. The instructions cooperate with other code segments to initiate a variety of tasks in response to data received in conjunction with the source of the received data.
The present invention may be practiced in a network environment such as a communications network. Such networks are widely used to connect various types of network elements, such as routers, servers, gateways, and so forth. Further, the invention may be practiced in a multi-network environment having various, connected public and/or private networks.
Communication between network elements may be wireless or wireline (wired). As will be appreciated by those skilled in the art, communication networks may take several different forms and may use several different communication protocols. And the present invention is not limited by the forms and communication protocols described herein.
The methods and processes described herein can be embodied as code and/or data. The software code and data described herein can be stored on one or more computer-readable media or machine-readable media, which may include any device or medium that can store code and/or data for use by a computer system. When a computer system reads and executes the code and/or data stored on a computer-readable medium, the computer system performs the methods and processes embodied as data structures and code stored within the computer-readable storage medium.
It should be appreciated by those skilled in the art that computer-readable media include removable and non-removable structures/devices that can be used for storage of information, such as computer-readable instructions, data structures, program modules, and other data used by a computing system/environment. A computer-readable medium includes, but is not limited to, volatile memory such as random access memories (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only-memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM), and magnetic and optical storage devices (hard drives, magnetic tape, CDs, DVDs); network devices; or other media now known or later developed that is capable of storing computer-readable information/data. Computer-readable media should not be construed or interpreted to include any propagating signals. A computer-readable medium of the subject invention can be, for example, a compact disc (CD), digital video disc (DVD), flash memory device, volatile memory, or a hard disk drive (HDD), such as an external HDD or the HDD of a computing device, though embodiments are not limited thereto. A computing device can be, for example, a laptop computer, desktop computer, server, cell phone, or tablet, though embodiments are not limited thereto.
It should be understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application.
All patents, patent applications, provisional applications, and publications referred to or cited herein (including those in the “References” section) are incorporated by reference in their entirety, including all figures and tables, to the extent they are not inconsistent with the explicit teachings of this specification.
REFERENCES
  • [1] VMware Storage Distributed Resource Scheduler, http://www.vmware.com/products/vsphere/features-storagedrs.
  • [2] VMware Distributed Resource Scheduler, http://www.vmware.com/products/vsphere/features-drs-dpm.
  • [3] Ajay Gulati, Chethan Kumar, Irfan Ahmad, and Karan Kumar, BASIL: Automated I/O load balancing across storage devices. USENIX, FAST 2010.
  • [4] Ajay Gulati, Ganesha Shanmuganathan, Irfan Ahmad, Carl Waldspurger, and Mustafa Uysal, Pesto: Online storage performance management in virtualized datacenters. SOCC 2011.
  • [5] Nohhyun Park, Irfan Ahmad, and David Lilja, Romano: Autonomous Storage Management using Performance Prediction in Multi-Tenant Datacenters, SOCC 2012.
  • [6] Ali Mashtizadeh, Emré Celebi, Tal Garfinkel, and Min Cai, The Design and Evolution of Live Storage Migration in VMware ESX, USENIX ATC, 2011.
  • [7] Jayanta Basak, Kushal Wadhwani, Kaladhar Voruganti, Srinivasan Narayanamurthy, Vipul Mathur, Siddhartha Nandi, Model Building for Dynamic Multi-tenant Provider Environments, ACM SIGOPS Operating Systems Review, 2012.
  • [8] Mengzhi Wang, Kinman Au, Anastassia Ailamaki, Anthony Brockwell, Christos Faloutsos, and Gregory R. Ganger, Storage Device Performance Prediction with CART Models, MASCOTS 2004.
  • [9] Aameek Singh, Madhukar Korupolu, Dushmanta Mohapatra, Server-Storage Virtualization: Integration and Load Balancing in Data Centers, High Performance Computing, Networking, Storage and Analysis, 2008.
  • [10] Hadoop, http://hadoop.apache.org.
  • [11] Hibench, https://github.com/intel-hadoop/HiBench.
  • [12] QEMU, http://wiki.qemu.org.
  • [13] KVM, http://www.linux-kvm.org/page/Main_Page.
  • [14] pt-diskstats. http://www.percona.com/doc/percona-toolkit/2.1/pt-diskstats.html.
  • [15] Libvirt 1.2.1, http://libvirt.org.
  • [16] Amazon AWS, http://aws.amazon.com.
  • [17] Azure, http://www.azure.microsoft.com/en-us/.
  • [18] Oracle Information Architecture: An Architect's Guide to Big Data, An Oracle White Paper in Enterprise Architecture, August 2012.
  • [19] Swaroop Kavalanekar, Bruce Worthington, Qi Zhang, Vishal Sharda, Characterization of storage workload traces from production Windows Servers, IISWC 2008.
  • [20] Ajay Gulati, Chethan Kumar, Irfan Ahmad, Storage Workload Characterization and Consolidation in Virtualized Environments, VPACT, 2009.
  • [21] Eric Anderson, and et al., Hippodrome: running circles around storage administration, FAST 2002.
  • [22] George Alvarez, et al., MINERVA: An Automated Resource Provisioning Tool for Large-Scale Storage Systems, ACM transactions on computer systems, 2001.
  • [23] Chenyang Lu, Guillermo Alvarez, John Wilkes, Aqueduct: online data migration with performance guarantees, USENIX, FAST 2002.
  • [24] Bhaskar Prasad Rimal, Eunmi Choi, Ian Lumb, A Taxonomy and Survey of Cloud Computing Systems, Fifth International Joint Conference on INC, IMS and IDC, 2009.
  • [25] Divyakant Agrawal, Sudipto Das, Amr El Abbadi, Big data and cloud computing: current state and future opportunities, EDBT/ICDT, 2011.
  • [26] Xuelian Lin, Zide Meng, Chuan Xu, Meng Wang, A Practical Performance Model for Hadoop MapReduce, Cluster Computing Workshops, 2012.
  • [27] Herodotos Herodotou, Hadoop Performance Models, Technical Report, CS-2011-05, Duke University, http://www.cs.duke.edu/starfish/files/hadoop-models.pdf.
  • [28] Wang, L. Workload configurations for typical enterprise workloads. Technical Report, Microsoft, 2009.
  • [29] Ruijin Zhou, Fang Liu, Chao Li, Tao Li, Optimizing virtual machine live storage migration in heterogeneous storage environment, VEE, 2013.
  • [30] Arkaitz Ruiz-Alvarez, Marty Humphrey, An Automated Approach to Cloud Storage Service Selection, ScienceCloud 2011.
  • [31] IOmeter, http://www.iometer.org.
  • [32] Carl A. Waldspurger. Memory Resource Management in VMware ESX Server, OSDI 2002.
  • [33] Minkyong Kim and Brian Noble, “Mobile Network Estimation,” Proc. Seventh Annual International Conference on Mobile Computing and Networking, July 2001.

Claims (20)

What is claimed is:
1. A system for storage resource management, comprising:
a plurality of storage pools;
at least one server, wherein each server of the at least one server comprises a corresponding processor of a corresponding at least one processor and a corresponding memory of a corresponding at least one memory, wherein each server of the at least one server is connected to the plurality of storage pools,
wherein a virtualized server is implemented on a server of the at least one server, wherein the virtualized server has a plurality of base virtual disks, wherein the virtualized server has a plurality of snapshots on top of the plurality of base virtual disks such that each snapshot of the plurality of snapshots is on top of a corresponding base virtual disk of the plurality of base virtual disks, wherein the plurality of snapshots is a plurality of copy-on-write/read snapshots,
wherein the plurality of base virtual disks and the plurality of copy-on-write/read snapshots are on the plurality of storage pools,
wherein the virtualized server implements a copy-on-write/read snapshot creation process with respect to a specific base virtual disk stored on an initial storage pool of the plurality of storage pools and an existing snapshot stored in an old storage pool of the plurality of storage pools, such that the copy-on-write/read snapshot creation process creates a new snapshot on a new storage pool of the plurality of storage pools, wherein the existing snapshot corresponds to the specific base virtual disk stored on an initial storage pool, wherein the new storage pool is different from the old storage pool,
wherein after creation of the new snapshot,
(i) the existing snapshot becomes an old snapshot,
(ii) the copy-on-write/read snapshot creation process initiates a merge, and
(iii) the copy-on-write/read snapshot creation process initiates a fetch,
such that until either the merge or the fetch is completed,
the merge comprises merging data from the old snapshot stored on the old storage pool with the specific base virtual disk stored on the initial storage pool, wherein received write requests with respect to the specific base virtual disk are directed to the new snapshot and writes with respect to the specific base virtual disk are written to the new snapshot, and
the fetch comprises directing received read requests with respect to the specific base virtual disk to the new snapshot such that
(a) before a data block of the specific base virtual disk is fetched to the new snapshot, a read request with respect to the data block triggers the new snapshot to fetch the latest data for the data block from either the old snapshot or the specific base virtual disk,
(b) after the data block is fetched to the new snapshot, further read requests with respect to the data block are read from the data fetched to the new snapshot,
wherein after either the merge or the fetch is completed, and until both the merge and the fetch are completed,
(i) if the merge is completed and the fetch is not completed, the merge terminates and the fetch continues,
(ii) if the fetch is completed and the merge is not completed, the fetch terminates and the merge continues,
wherein after both the merge and the fetch are completed, write requests and read requests with respect to the specific base virtual disk are sent to the new snapshot and are completely run on the new storage pool.
2. The system according to claim 1, wherein each base virtual disk of the plurality of base virtual disks corresponds to a virtual machine of a plurality of virtual machines, wherein the at least one server comprises:
a centralized data center management server, wherein a storage manager is implemented on the centralized data center management server, wherein the storage manager monitors input/output (IO) characteristics of the plurality of virtual machines, wherein the centralized data center management server is connected to each other server of the at least one server and is coupled to the virtualized server, wherein the storage manager provides instructions to the virtualized server regarding initiating the copy-on-write/read snapshot creation process, including identification of the new storage pool.
3. The system according to claim 2, wherein the storage manager implements performance tracking and performance load balancing based on the monitored IO characteristics of the plurality of virtual machines.
4. The system of claim 3, wherein the storage manager implements capacity prediction and capacity planning based on the monitored IO characteristics of the plurality of virtual machines.
5. The system according to claim 3, wherein performance tracking tracks storage latency using dynamic sliding window tracking that samples latency of each storage pool for q times and uses a latency prediction method to predict the storage latency.
6. The system of claim 5, wherein the latency prediction method is an exponentially weighted moving average (EWMA) method.
7. The system of claim 5, wherein the latency prediction method is an exponential smoothing method.
8. The system of claim 5, wherein the latency prediction method is an auto regression method.
9. The system according to claim 1, wherein each snapshot of the plurality of snapshots holds a corresponding working set of running applications with respect to the corresponding base virtual disk.
10. A method of storage resource management, comprising:
providing a plurality of storage pools,
providing at least one server, wherein each server of the at least one server comprises a corresponding processor of a corresponding at least one processor and a corresponding memory of a corresponding at least one memory, wherein each server of the at least one server is connected to the plurality of storage pools,
implementing a virtualized server on a first server of the at least one server, wherein the virtualized server has a plurality of base virtual disks, wherein the virtualized server has a plurality of snapshots on top of the plurality of base virtual disks such that each snapshot of the plurality of snapshots is on top of a corresponding base virtual disk of the plurality of base virtual disks, wherein the plurality of snapshots is a plurality of copy-on-write/read snapshots,
storing the plurality of base virtual disks and the plurality of snapshots on the plurality of storage pools,
implementing a copy-on-write/read snapshot creation process with respect to a specific base virtual disk stored on an initial storage pool of the plurality of storage pools and an existing snapshot stored in an old storage pool of the plurality of storage pools, such that the copy-on-write/read snapshot creation process creates a new snapshot on a new storage pool of the plurality of storage pools, wherein the existing snapshot corresponds to the specific base virtual disk stored on an initial storage pool, wherein the new storage pool is different from the old storage pool,
wherein after creation of the new snapshot,
(i) the existing snapshot becomes an old snapshot,
(ii) the copy-on-write/read snapshot creation process initiates a merge, wherein the merge comprises merging data from the old snapshot stored on the old storage pool with the specific base virtual disk stored on the initial storage pool, wherein received write requests with respect to the specific base virtual disk are directed to the new snapshot,
(iii) the copy-on-write/read snapshot creation process initiates a fetch, wherein the fetch comprises sending read requests to the new snapshot such that
(i) before a data block of the specific base virtual disk is fetched to the new snapshot, a read request with respect to the data block triggers the new snapshot to fetch the latest data for the data block from either the old snapshot or the specific base virtual disk,
(ii) after the data block is fetched to the new snapshot, further read requests with respect to the data block are read from the data in the new snapshot,
until either the merge or the fetch is completed,
wherein after either the merge or the fetch is completed,
(i) if the merge is completed and the fetch is not completed, the merge terminates and the fetch continues,
(ii) if the fetch is completed and the merge is not completed, the fetch terminates and the merge continues,
until both the merge and the fetch are completed,
wherein after both the merge and the fetch are completed, write requests and read requests with respect to the specific base virtual disk are sent to the new snapshot and are serviced entirely on the new storage pool.
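The transition described in claim 10 can be illustrated with a toy block-level simulation: writes go straight to the new snapshot, reads fetch a block on demand from the old snapshot or the base disk, and a background merge folds the old snapshot into the base disk. This is a sketch under assumptions, not the patented implementation — block stores are plain dicts, the merge runs one block per call rather than concurrently, and `CowrTransition` and its method names are invented for illustration.

```python
class CowrTransition:
    """Toy model of the copy-on-write/read (COW/R) snapshot transition."""

    def __init__(self, base, old_snapshot):
        self.base = dict(base)          # base virtual disk (initial storage pool)
        self.old = dict(old_snapshot)   # old snapshot (old storage pool)
        self.new = {}                   # new snapshot (new storage pool)
        self.fetched = set()            # blocks already present in the new snapshot

    def write(self, block, data):
        # All writes are directed to the new snapshot.
        self.new[block] = data
        self.fetched.add(block)

    def read(self, block):
        # Before a block is fetched, a read triggers a fetch of the latest
        # data from the old snapshot (if it holds the block) or the base disk.
        if block not in self.fetched:
            src = self.old if block in self.old else self.base
            self.new[block] = src[block]
            self.fetched.add(block)
        return self.new[block]

    def merge_step(self):
        # Background merge: fold one old-snapshot block into the base disk.
        # Returns False once the merge is completed.
        if self.old:
            block, data = self.old.popitem()
            self.base[block] = data
            return True
        return False
```

Once `merge_step` has drained the old snapshot and every live block has been fetched or written, all I/O lands on the new snapshot, matching the end state of claim 10.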
11. The method according to claim 10, wherein each base virtual disk of the plurality of base virtual disks corresponds to a virtual machine of a plurality of virtual machines, wherein the at least one server comprises:
a centralized data center management server, wherein a storage manager is implemented on the centralized data center management server, wherein the storage manager monitors IO characteristics of the plurality of virtual machines, wherein the centralized data center management server is connected to each other server of the at least one server and is coupled to the virtualized server, wherein the storage manager provides instructions to the virtualized server regarding initiating the copy-on-write/read snapshot creation process, including identification of the new storage pool.
12. The method according to claim 11, wherein the storage manager implements performance tracking and performance load balancing based on the monitored IO characteristics of the plurality of virtual machines.
13. The method of claim 12, wherein performance tracking tracks storage latency using dynamic sliding window tracking that samples the latency of each storage pool q times and uses a latency prediction method to predict the storage latency.
14. The method of claim 13, wherein the latency prediction method is an exponentially weighted moving average (EWMA) method.
15. The method of claim 13, wherein the latency prediction method is an exponential smoothing method.
16. The method of claim 13, wherein the latency prediction method is an auto regression method.
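The EWMA predictor of claims 13–14 can be sketched directly from the formula in claim 19 (L_t = α(L_(t−1) + (1−α)L_(t−2) + …)): the newest of the q window samples gets weight α, and each older sample is discounted by a further factor of (1−α). The function name and the oldest-first sample ordering are assumptions for illustration.

```python
def ewma_latency(samples, alpha=0.5):
    """Predict storage latency from a sliding window of q samples using
    the exponentially weighted moving average form of claims 13-14.

    samples: latency samples, oldest first (the q sliding-window samples).
    """
    # L_t = alpha * (L_{t-1} + (1-alpha) L_{t-2} + ... + (1-alpha)^(q-1) L_{t-q})
    return alpha * sum((1 - alpha) ** k * s
                       for k, s in enumerate(reversed(samples)))
```

With α = 0.5 and samples [2.0, 4.0] (oldest first), the prediction is 0.5·(4.0 + 0.5·2.0) = 2.5; reversing the samples to [4.0, 2.0] gives 0.5·(2.0 + 0.5·4.0) = 2.0, showing the heavier weight on the most recent sample.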
17. The method of claim 10, wherein the method further comprises a policy-based storage management algorithm comprising:
determining a storage pool with the maximum latency “max_storage”;
determining a storage pool with the minimum latency “min_storage”; and
generating a performance estimate using a performance model to check whether moving one of the virtual disks from max_storage to min_storage will lower both the average storage latency and the latency difference across storage pools; wherein, when affirmative, adding the move to a migration list and updating an internal database to record the move; and wherein, when negative, quitting the iteration and outputting the migration list;
wherein the performance model includes a workload modeling portion and a storage device modeling portion.
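The greedy loop of claim 17 can be sketched as follows. The performance model is abstracted into a caller-supplied `predict` function (the claim leaves its internals to claims 18–19), and the function, argument names, and pick-the-first-disk candidate choice are all illustrative assumptions.

```python
def plan_migrations(pool_latency, pool_disks, predict):
    """Policy-based storage management sketch (claim 17): repeatedly try
    moving one virtual disk from the max-latency pool to the min-latency
    pool, keeping the move only if the performance model says both the
    average latency and the cross-pool latency spread improve."""
    migrations = []
    latency = dict(pool_latency)
    while True:
        max_p = max(latency, key=latency.get)   # pool with maximum latency
        min_p = min(latency, key=latency.get)   # pool with minimum latency
        if max_p == min_p or not pool_disks[max_p]:
            break
        disk = pool_disks[max_p][0]             # candidate virtual disk to move
        predicted = predict(latency, disk, max_p, min_p)
        avg_improves = sum(predicted.values()) < sum(latency.values())
        spread_improves = (max(predicted.values()) - min(predicted.values())
                           < latency[max_p] - latency[min_p])
        if avg_improves and spread_improves:
            migrations.append((disk, max_p, min_p))  # add move to migration list
            pool_disks[max_p].remove(disk)
            pool_disks[min_p].append(disk)
            latency = predicted                  # update internal record
        else:
            break                                # quit iteration, output list
    return migrations
```

A trivial `predict` that shifts a fixed latency amount between pools is enough to exercise the loop; the real model would come from the workload and device modeling portions of claims 18–19.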
18. The method of claim 17, wherein the workload modeling portion comprises:
generating an I/O workload for a first, second, and third storage pool during a first phase of merging and fetching, where the respective workload (WLA) equations are:

WLA_sp1 = WLA_Merge + WLA_fetch = f(1, 512 KB, 0%, 0%) + f(OIO, IOsize, 100%, randomness),
WLA_sp2 = WLA_Merge + WLA_fetch = f(1, 512 KB, 100%, 0%) + f(OIO, IOsize, 100%, randomness),
WLA_sp3 = WLA_workload + WLA_fetch = f(OIO, IOsize, Read Ratio, randomness) + f(OIO, IOsize, 0%, randomness),

where sp1 refers to the first storage pool, sp2 refers to the second storage pool, sp3 refers to the third storage pool, Merge refers to the merging of the first phase, and fetch refers to the fetching of the first phase;
generating the I/O workload for the first, second, and third storage pools during a second phase of merging or fetching using the respective workload equations, where if the merging step of the second phase is finished first, WLA_Merge is set to 0, or if the fetching step of the second phase is finished first, WLA_fetch is set to 0;
generating the I/O workload after the second phase using a workload model equation:

WL_i = f(OIO, IOsize, Read Ratio, randomness); and

summing all I/O workloads together (ΣWL_i).
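The workload model of claim 18 characterizes each I/O stream by the tuple (OIO, IOsize, read ratio, randomness) and sums the per-stream characterizations f(…) on each pool. The claims leave f abstract, so the sketch below takes it as a parameter; the stream tuples and the example f used in the usage note (outstanding I/O × I/O size) are purely illustrative assumptions.

```python
def pool_workload(streams, f):
    """Aggregate WLA on one storage pool (claim 18): the sum of the
    per-stream characterizations f(OIO, IOsize, read_ratio, randomness).
    The characterization function f is left abstract in the claims."""
    return sum(f(*s) for s in streams)


# Illustrative phase-1 streams on the old storage pool (sp2 in claim 18):
merge_stream = (1, 512 * 1024, 1.0, 0.0)   # merge reader: 1 OIO, 512 KB, 100% read, sequential
fetch_stream = (8, 64 * 1024, 1.0, 0.7)    # assumed application-driven fetch stream
```

For example, with a toy characterization `f = lambda oio, size, read_ratio, randomness: oio * size`, the two streams above contribute 512 KB and 8 × 64 KB respectively, for an aggregate WLA of 1 MiB-equivalent on that pool.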
19. The method of claim 17, wherein the storage device modeling portion uses the following equation:
ρ = ΣWL_i / L

where ρ represents the device characteristic and L represents the latest storage latency, generated by the following equation:

L_t = α(L_(t−1) + (1−α)L_(t−2) + … + (1−α)^(i−1) L_(t−i))

where L_(t−i) is the ith latency sample and 0 < α < 1; α equals 0.5; and L_t is calculated for all storage pools; and
wherein the device characteristic ρ is recalibrated online at any time by a system.
20. The method of claim 10, wherein the method further comprises a hybrid method for selecting a snapshot process or a migration process comprising:
estimating a transition time for the snapshot process using the following equation:

TransitionTime_snapshot = max{Time_Fetching, Time_Merging} = max{SnapshotSize / Throughput_appRead, SnapshotSize / Throughput_merge}

where SnapshotSize is the size of the snapshot file, Throughput_merge is the throughput that the merge process can achieve, and Throughput_appRead is the read throughput associated with the workload;
estimating a transition time for the migration process using the following equation:

TransitionTime_migration = (VirtualDiskSize + WorkingSetSize) / Throughput_migration

where VirtualDiskSize is the size of the entire virtual disk, Throughput_migration is the throughput associated with the migration process, and WorkingSetSize is the working set size; and
selecting a process for operation based upon the shorter transition time, where, when the transition time for the snapshot process is shorter than the transition time for the migration process, the snapshot process is selected; and when the transition time for the migration process is shorter than the transition time for the snapshot process, the migration process is selected.
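The hybrid selection of claim 20 is a direct comparison of the two transition-time estimates above. A minimal sketch, assuming all sizes and throughputs are externally measured and share consistent units (the function and parameter names are illustrative):

```python
def choose_transition(snapshot_size, vdisk_size, working_set,
                      tp_app_read, tp_merge, tp_migration):
    """Hybrid method of claim 20: estimate both transition times and
    select the process with the shorter one."""
    # Snapshot path: bounded by the slower of fetching and merging.
    t_snapshot = max(snapshot_size / tp_app_read,
                     snapshot_size / tp_merge)
    # Migration path: the entire virtual disk plus the working set must move.
    t_migration = (vdisk_size + working_set) / tp_migration
    if t_snapshot < t_migration:
        return "snapshot", t_snapshot
    return "migration", t_migration
```

For instance, a 100-unit snapshot with merge throughput 5 gives a 20-unit snapshot transition, while migrating a 1000-unit disk plus a 100-unit working set at throughput 50 takes 22 units, so the snapshot process would be selected.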
US15/174,657 2015-06-05 2016-06-06 Method and apparatus for big data cloud storage resource management Expired - Fee Related US9772794B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/174,657 US9772794B2 (en) 2015-06-05 2016-06-06 Method and apparatus for big data cloud storage resource management

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562171447P 2015-06-05 2015-06-05
US15/174,657 US9772794B2 (en) 2015-06-05 2016-06-06 Method and apparatus for big data cloud storage resource management

Publications (2)

Publication Number Publication Date
US20160357443A1 US20160357443A1 (en) 2016-12-08
US9772794B2 true US9772794B2 (en) 2017-09-26

Family

ID=85279536

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/174,657 Expired - Fee Related US9772794B2 (en) 2015-06-05 2016-06-06 Method and apparatus for big data cloud storage resource management

Country Status (1)

Country Link
US (1) US9772794B2 (en)


Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9971397B2 (en) 2014-10-08 2018-05-15 Apple Inc. Methods and apparatus for managing power with an inter-processor communication link between independently operable processors
US10659532B2 (en) * 2015-09-26 2020-05-19 Intel Corporation Technologies for reducing latency variation of stored data object requests
CN107104992B (en) * 2016-02-19 2019-11-22 杭州海康威视数字技术股份有限公司 A kind of the storage resource distribution method and device of video cloud storage
US10095416B2 (en) * 2016-09-27 2018-10-09 International Business Machines Corporation Storage optimization based on application feedback
US10372572B1 (en) * 2016-11-30 2019-08-06 Amazon Technologies, Inc. Prediction model testing framework
US11074220B2 (en) 2017-01-06 2021-07-27 Oracle International Corporation Consistent file system semantics with cloud object storage
US11803453B1 (en) 2017-03-10 2023-10-31 Pure Storage, Inc. Using host connectivity states to avoid queuing I/O requests
US10521344B1 (en) * 2017-03-10 2019-12-31 Pure Storage, Inc. Servicing input/output (‘I/O’) operations directed to a dataset that is synchronized across a plurality of storage systems
US10564854B2 (en) 2017-04-11 2020-02-18 Microsoft Technology Licensing, Llc Tracking internal latencies for load balancing of file server
US10761743B1 (en) 2017-07-17 2020-09-01 EMC IP Holding Company LLC Establishing data reliability groups within a geographically distributed data storage environment
US10620843B2 (en) * 2017-07-26 2020-04-14 Netapp, Inc. Methods for managing distributed snapshot for low latency storage and devices thereof
CN107609194B (en) * 2017-10-17 2020-08-07 吉林省电力科学研究院有限公司 Cloud computing-oriented storage method for time redundancy power load data
US10437619B2 (en) * 2017-12-12 2019-10-08 Arch Systems Inc. System and method for physical machine monitoring and analysis
US11321249B2 (en) 2018-03-26 2022-05-03 Samsung Electronics Co., Ltd. Mechanism to autonomously manage SSDS in an array
US11792307B2 (en) 2018-03-28 2023-10-17 Apple Inc. Methods and apparatus for single entity buffer pool management
US10579297B2 (en) 2018-04-27 2020-03-03 EMC IP Holding Company LLC Scaling-in for geographically diverse storage
US10592156B2 (en) 2018-05-05 2020-03-17 International Business Machines Corporation I/O load balancing between virtual storage drives making up raid arrays
US10846224B2 (en) 2018-08-24 2020-11-24 Apple Inc. Methods and apparatus for control of a jointly shared memory-mapped region
CN109446020B (en) * 2018-09-21 2022-05-13 曙光云计算集团有限公司 Dynamic evaluation method and device of cloud storage system
US11436203B2 (en) 2018-11-02 2022-09-06 EMC IP Holding Company LLC Scaling out geographically diverse storage
CN109408353B (en) * 2018-11-13 2022-02-18 郑州云海信息技术有限公司 Storage partition utilization rate and IOPS analysis method, system and terminal equipment
US11119683B2 (en) 2018-12-20 2021-09-14 EMC IP Holding Company LLC Logical compaction of a degraded chunk in a geographically diverse data storage system
WO2020185752A1 (en) 2019-03-12 2020-09-17 Arch Systems Inc. System and method for network communication monitoring
US11221782B1 (en) * 2019-03-27 2022-01-11 Amazon Technologies, Inc. Customizable progressive data-tiering service
CN110071965B (en) * 2019-03-27 2022-05-31 上海德衡数据科技有限公司 Data center management system based on cloud platform
US11113146B2 (en) 2019-04-30 2021-09-07 EMC IP Holding Company LLC Chunk segment recovery via hierarchical erasure coding in a geographically diverse data storage system
US11119686B2 (en) 2019-04-30 2021-09-14 EMC IP Holding Company LLC Preservation of data during scaling of a geographically diverse data storage system
US11121727B2 (en) 2019-04-30 2021-09-14 EMC IP Holding Company LLC Adaptive data storing for data storage systems employing erasure coding
US11748004B2 (en) 2019-05-03 2023-09-05 EMC IP Holding Company LLC Data replication using active and passive data storage modes
US11209996B2 (en) 2019-07-15 2021-12-28 EMC IP Holding Company LLC Mapped cluster stretching for increasing workload in a data storage system
US11449399B2 (en) 2019-07-30 2022-09-20 EMC IP Holding Company LLC Mitigating real node failure of a doubly mapped redundant array of independent nodes
US11228322B2 (en) 2019-09-13 2022-01-18 EMC IP Holding Company LLC Rebalancing in a geographically diverse storage system employing erasure coding
US11449248B2 (en) 2019-09-26 2022-09-20 EMC IP Holding Company LLC Mapped redundant array of independent data storage regions
US11477123B2 (en) 2019-09-26 2022-10-18 Apple Inc. Methods and apparatus for low latency operation in user space networking
US11558348B2 (en) 2019-09-26 2023-01-17 Apple Inc. Methods and apparatus for emerging use case support in user space networking
US11829303B2 (en) 2019-09-26 2023-11-28 Apple Inc. Methods and apparatus for device driver operation in non-kernel space
US11288139B2 (en) 2019-10-31 2022-03-29 EMC IP Holding Company LLC Two-step recovery employing erasure coding in a geographically diverse data storage system
US11119690B2 (en) 2019-10-31 2021-09-14 EMC IP Holding Company LLC Consolidation of protection sets in a geographically diverse data storage environment
US11435910B2 (en) 2019-10-31 2022-09-06 EMC IP Holding Company LLC Heterogeneous mapped redundant array of independent nodes for data storage
US11435957B2 (en) 2019-11-27 2022-09-06 EMC IP Holding Company LLC Selective instantiation of a storage service for a doubly mapped redundant array of independent nodes
US11144220B2 (en) 2019-12-24 2021-10-12 EMC IP Holding Company LLC Affinity sensitive storage of data corresponding to a doubly mapped redundant array of independent nodes
US11231860B2 (en) 2020-01-17 2022-01-25 EMC IP Holding Company LLC Doubly mapped redundant array of independent nodes for data storage with high performance
US11507308B2 (en) * 2020-03-30 2022-11-22 EMC IP Holding Company LLC Disk access event control for mapped nodes supported by a real cluster storage system
US11288229B2 (en) 2020-05-29 2022-03-29 EMC IP Holding Company LLC Verifiable intra-cluster migration for a chunk storage system
US11606302B2 (en) 2020-06-12 2023-03-14 Apple Inc. Methods and apparatus for flow-based batching and processing
US11775359B2 (en) 2020-09-11 2023-10-03 Apple Inc. Methods and apparatuses for cross-layer processing
US11954540B2 (en) 2020-09-14 2024-04-09 Apple Inc. Methods and apparatus for thread-level execution in non-kernel space
US11799986B2 (en) 2020-09-22 2023-10-24 Apple Inc. Methods and apparatus for thread level execution in non-kernel space
US11693983B2 (en) 2020-10-28 2023-07-04 EMC IP Holding Company LLC Data protection via commutative erasure coding in a geographically diverse data storage system
US11847141B2 (en) 2021-01-19 2023-12-19 EMC IP Holding Company LLC Mapped redundant array of independent nodes employing mapped reliability groups for data storage
US11625174B2 (en) 2021-01-20 2023-04-11 EMC IP Holding Company LLC Parity allocation for a virtual redundant array of independent disks
US11550491B2 (en) * 2021-04-30 2023-01-10 Nutanix, Inc. Container-based application processing
US11354191B1 (en) 2021-05-28 2022-06-07 EMC IP Holding Company LLC Erasure coding in a large geographically diverse data storage system
US11449234B1 (en) 2021-05-28 2022-09-20 EMC IP Holding Company LLC Efficient data access operations via a mapping layer instance for a doubly mapped redundant array of independent nodes
US11876719B2 (en) 2021-07-26 2024-01-16 Apple Inc. Systems and methods for managing transmission control protocol (TCP) acknowledgements
US11882051B2 (en) 2021-07-26 2024-01-23 Apple Inc. Systems and methods for managing transmission control protocol (TCP) acknowledgements


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080120459A1 (en) * 2006-11-22 2008-05-22 Yasunori Kaneda Method and apparatus for backup and restore in a dynamic chunk allocation storage system

Non-Patent Citations (36)

* Cited by examiner, † Cited by third party
Title
"Oracle Information Architecture: An Enterprise Architect's Guide to Big Data," Oracle Enterprise Architecture White Paper, Mar. 2016, pp. 1-49.
Agrawal, Divyakant et al., "Big Data and Cloud Computing: Current State and Future Opportunities," EDBT 2011, Uppsala, Sweden, Mar. 22-24, 2011, pp. 1-4.
Alvarez, Guillermo A. et al., "MINERVA: An Automated Resource Provisioning Tool for Large-Scale Storage Systems," ACM Transactions on Computer Systems, Jun. 11, 2001, pp. 1-28.
Amazon AWS, https://aws.amazon.com/, 2016.
Anderson, Eric et al., "Hippodrome: running circles around storage administration," FAST '02, Monterey, California, U.S.A., Jan. 28-30, 2002, pp. 175-188.
Azure, https://azure.microsoft.com/en-us/, 2016.
Basak, Jayanta et al., "Model Building for Dynamic Multi-tenant Provider Environments," ACM SIGOPS Operating Systems Review, 2012, pp. 1-12.
Gulati, Ajay et al., "BASIL: Automated IO Load Balancing Across Storage Devices," USENIX, FAST 2010, pp. 1-14.
Gulati, Ajay et al., "Pesto: Online Storage Performance Management in Virtualized Datacenters," SOCC'11, Cascais, Portugal, Oct. 27-28, 2011, pp. 1-14.
Gulati, Ajay et al., "Storage Workload Characterization and Consolidation in Virtualized Environments," VPACT, 2009, pp. 1-10.
Hadoop, http://hadoop.apache.org, last published Jun. 16, 2016.
Herodotou, Herodotos. "Hadoop Performance Models," Technical Report, Duke University, CS-May 2011, pp. 1-19.
Hibench, https://github.com/intel-hadoop/HiBench, 2016.
IOmeter, http://www.iometer.org/, 2016.
Kavalanekar, Swaroop et al., "Characterization of Storage Workload Traces from Production Windows Servers," IISWC, Seattle, Washington, U.S.A., Sep. 14-16, 2008, pp. 119-128.
Kim, Minkyoung et al., "Mobile Network Estimation," Proceedings of the ACM Conference on Mobile Computing and Networking, Rome, Italy, Jun. 2001, pp. 1-13.
KVM, http://www.linux-kvm.org/page/Main-Page, accessed on Jun. 17, 2016.
Libvirt 1.2.1, https://libvirt.org/, 2016.
Lin, Xuelian et al., "A Practical Performance Model for Hadoop MapReduce," IEEE International Conference on Cluster Computing Workshops, Beijing, China, Sep. 24-28, 2012, pp. 231-239.
Lu, Chenyang et al., "Aqueduct: online data migration with performance guarantees," FAST '02, Monterey, California, U.S.A., Jan. 28-30, 2002, pp. 219-230.
Mashtizadeh, Ali et al., "The Design and Evolution of Live Storage Migration in VMware ESX," USENIX ATC, 2011, pp. 1-14.
Park, Nohhyun et al., "Romano: Autonomous Storage Management using Performance Prediction in Multi-Tenant Datacenters," SOCC'12, Oct. 14-17, 2012, San Jose, California, U.S.A., pp. 1-14.
pt-diskstats, https://www.percona.com/doc/percona-toolkit/2.1/pt-diskstats.html, 2016.
QEMU, http://wiki.qemu.org/Main-Page, last updated May 11, 2016.
Rimal, Bhaskar Prasad et al., "A Taxonomy and Survey of Cloud Computing Systems," Fifth International Joint Conference on INC, IMS and IDC, 2009, pp. 44-51.
Ruiz-Alvarez, Arkaitz et al., "An Automated Approach to Cloud Storage Service Selection," ScienceCloud'11, San Jose, California, U.S.A., Jun. 8, 2011, pp. 1-10.
Singh, Aameek et al., "Server-Storage Virtualization: Integration and Load Balancing in Data Centers," High Performance Computing, Networking, Storage, and Analysis, 2008, pp. 1-12.
VMware Distributed Resource Scheduler, accessed from: https://www.vmware.com/products/vsphere/features/drs-dpm, 2016.
VMware Storage Distributed Resource Scheduler, accessed from: http://www.vmware.com/products/vsphere/features/storage-drs.html, 2016.
Voellm, Tony. "Useful IO profiles for simulating various workloads," Microsoft Developer, May 7, 2009, accessed from: https://blogs.msdn.microsoft.com/tvoellm/2009/05/07/useful-io-profiles-for-simulating-various-workloads/.
Waldspurger, Carl A. "Memory Resource Management in VMware ESX Server," Proceedings on the 5th Symposium on Operating Systems Design and Implementation, Boston, Massachusetts, U.S.A., Dec. 9-11, 2002, pp. 1-15.
Wang, Mengzhi et al., "Storage Device Performance Prediction with CART Models," 12th Annual Meeting of the IEEE/ACM International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), Volendam, the Netherlands, Oct. 5-7, 2004, pp. 1-8.
Zhou, Ruijin et al., "Optimizing Virtual Machine Live Storage Migration in Heterogeneous Storage Environment," VEE'13, Houston, Texas, U.S.A., Mar. 16-17, 2013, pp. 73-84.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170344393A1 (en) * 2016-05-31 2017-11-30 Huawei Technologies Co., Ltd. Virtual machine resource utilization in a data center
US10102025B2 (en) * 2016-05-31 2018-10-16 Huawei Technologies Co., Ltd. Virtual machine resource utilization in a data center
US10778552B2 (en) 2018-04-30 2020-09-15 Hewlett Packard Enterprise Development Lp Storage system latency evaluation based on I/O patterns
US11070455B2 (en) 2018-04-30 2021-07-20 Hewlett Packard Enterprise Development Lp Storage system latency outlier detection
US10895985B1 (en) * 2018-05-29 2021-01-19 Amazon Technologies, Inc. Real-time estimation of working sets
US11481117B2 (en) 2019-06-17 2022-10-25 Hewlett Packard Enterprise Development Lp Storage volume clustering based on workload fingerprints

Also Published As

Publication number Publication date
US20160357443A1 (en) 2016-12-08

Similar Documents

Publication Publication Date Title
US9772794B2 (en) Method and apparatus for big data cloud storage resource management
US11073999B2 (en) Extent migration in multi-tier storage systems
US12008406B1 (en) Predictive workload placement amongst storage systems
US11399064B2 (en) Peak cyclical workload-based storage management in a multi-tier storage environment
JP4699837B2 (en) Storage system, management computer and data migration method
JP2019021315A (en) System for performing auto-tiering, method of the same, and record medium recording instruction word for execution
US20150234617A1 (en) Method and apparatus for virtual machine live storage migration in heterogeneous storage environment
US10069753B2 (en) Relationship-based resource-contention analysis system and method
US20180018379A1 (en) Application performance using multidimensional predictive algorithm for automated tiering mechanisms
US10699798B2 (en) Testing storage device power circuitry
JP2014502395A (en) Methods, systems, and computer programs for eliminating run-time dynamic performance skew in computing storage environments (run-time dynamic performance skew elimination)
US20160164755A1 (en) Reducing the Impact of Noisy Neighbors via Pro-Active Log Offloading in Shared Storage Environment
US20150286419A1 (en) Estimation of performance utilization of a storage device
US9576061B2 (en) Information processing system and data update control method
US20160321008A1 (en) Smart Load Balancing Replication When Adding or Removing Storage Disks in a Distributed Storage System
US10691357B2 (en) Consideration of configuration-based input/output predictions in multi-tiered data storage system management
US10296436B2 (en) Adjusting trace points based on overhead analysis
US9952770B2 (en) Responding to recall operations based on file migration time
US20180217875A1 (en) Data processing system and data processing method
US10956084B2 (en) Drive utilization in multi-tiered systems with read-intensive flash
US20160092364A1 (en) Management of storage system
US12056385B2 (en) Storage media scrubber
US10452273B2 (en) Preemptive event-based data migration
Zhou et al. Towards lightweight and swift storage resource management in big data cloud era
JP6497233B2 (en) Storage control device, storage control program, and storage control method

Legal Events

Date Code Title Description
AS Assignment

Owner name: UNIVERSITY OF FLORIDA RESEARCH FOUNDATION, INCORPORATED

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LI, TAO;ZHOU, RUIJIN, DR.;SIGNING DATES FROM 20160621 TO 20160628;REEL/FRAME:039035/0353

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: NATIONAL SCIENCE FOUNDATION, VIRGINIA

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:UNIVERSITY OF FLORIDA;REEL/FRAME:047282/0957

Effective date: 20180710

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20210926