US20080172526A1 - Method and System for Placement of Logical Data Stores to Minimize Request Response Time - Google Patents

Method and System for Placement of Logical Data Stores to Minimize Request Response Time

Info

Publication number
US20080172526A1
US20080172526A1 (application US 11/622,008)
Authority
US
United States
Prior art keywords
logical data
data store
data stores
storage device
storage devices
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/622,008
Inventor
Akshat Verma
Ashok Anand
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US 11/622,008
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ANAND, ASHOK, VERMA, AKSHAT
Priority to US 12/056,591 (patent US 9,223,504 B2)
Publication of US 2008/0172526 A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061: Improving I/O performance
    • G06F 3/0611: Improving I/O performance in relation to response time
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629: Configuration or reconfiguration of storage systems
    • G06F 3/0631: Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0683: Plurality of storage devices
    • G06F 3/0689: Disk arrays, e.g. RAID, JBOD
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2206/00: Indexing scheme related to dedicated interfaces for computers
    • G06F 2206/10: Indexing scheme related to storage interfaces for computers, indexing schema related to group G06F3/06
    • G06F 2206/1012: Load balancing

Definitions

  • FIG. 1 is a diagram of a system for placing logical data stores on an array of storage devices, according to an embodiment of the invention.
  • FIG. 2 is a diagram of a portion of the system of FIG. 1 in more detail, according to an embodiment of the invention.
  • FIG. 3 is a flowchart of a method for placing logical data stores on an array of storage devices, such that store request time is minimized, according to an embodiment of the invention.
  • FIG. 1 shows a data-processing system 100 , according to an embodiment of the invention.
  • the system 100 includes N logical data stores 102 A, 102 B, . . . , 102 N, collectively referred to as the logical data stores 102 .
  • the system 100 further includes an array 104 of M storage devices 106 A, 106 B, . . . , 106 M, collectively referred to as the storage devices 106 .
  • the number N of logical data stores 102 may be equal to, greater than, or less than the number M of storage devices 106 .
  • the system 100 also includes a mechanism 108 , the functionality of which will be described later in the detailed description, and which may be implemented in software, hardware, or a combination of software and hardware.
  • Each of the logical data stores 102 is a logically aggregated set of data.
  • a logical data store is a table or a set of associated tables.
  • all files belonging to a given user may constitute a store.
  • all source files or all email files may constitute a logical data store.
  • Access to each of the logical data stores 102 is represented as a number of streams, where each stream can be considered as an individual access from an application or a user. All such streams on an aggregated basis may therefore be considered synonymous with a logical data store. That is, as used herein, the notions of logical data stores and logical data streams are combined, such that either a logical data store or a (logical data) stream may be used to denote a set of logically grouped requests.
  • the storage devices 106 of the storage device array 104 may be hard disk drives in one embodiment.
  • the storage devices 106 may each be an individual, single hard disk drive, or may each be a (sub-)array within the storage device array 104 itself.
  • each of the storage devices 106 may be considered a RAID array in one embodiment of the invention.
  • E(ω_j) denotes the response time for storage device D_j for a given allocation.
  • The request (arrival) rate, the expected service time, and the second moment of the service time, respectively, for a disk D_k or a logical data store G_i are denoted by λ_k, E(S_k), and E(S_k²).
  • The request arrival rate specifies the rate at which requests to the logical data store or storage device arrive at the logical data store or storage device.
  • The expected service time specifies the expected length of time needed to serve a request. It is noted that in cases of ambiguity, λ_k^D, E(S_k^D), and E((S_k^D)²) are used for storage device parameters to distinguish such storage device parameters from stream or logical data store parameters.
  • The logical data stores 102 may each be represented as a set of requests with associated statistical parameters estimated a priori.
  • Each data store may be identified by G_i(λ_i, E(S_i), E(S_i²), V_i), where λ_i is the arrival rate of the requests, E(S_i) is the expected service time of each request, E(S_i²) is the expected second moment of the service time of each request, and V_i is the size of the data store.
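The per-store parameterization G_i(λ_i, E(S_i), E(S_i²), V_i) can be sketched as a small container; the class name, field names, and the numeric values below are illustrative assumptions, not part of the patent text.

```python
from dataclasses import dataclass

@dataclass
class LogicalDataStore:
    """Hypothetical container for the per-store parameters G_i(λ_i, E(S_i), E(S_i²), V_i)."""
    arrival_rate: float       # λ_i: request arrival rate (requests per second)
    mean_service_time: float  # E(S_i): expected service time per request (seconds)
    second_moment: float      # E(S_i²): second moment of the service time
    size: float               # V_i: size of the data store (bytes)

store = LogicalDataStore(arrival_rate=50.0, mean_service_time=0.004,
                         second_moment=2.5e-5, size=10e9)

# The per-store load contribution used later in the placement method is λ_i · E(S_i):
load = store.arrival_rate * store.mean_service_time
```
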
  • The request arrival process may be modeled as a Markov Modulated Poisson Process (MMPP).
  • An MMPP is essentially modeled as a Poisson process with multiple states, where a given state determines the mean Poisson parameter λ.
  • A two-state MMPP may be employed, where one state represents the on period and the other state represents the off period of the store placed on a given storage device.
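The two-state (on/off) MMPP described above can be simulated with a short sketch. The rates, switching probability, and step count below are illustrative assumptions; per step, arrivals are drawn from a Poisson distribution at the current state's rate, and the state may then flip.

```python
import math
import random

def simulate_two_state_mmpp(lam_on, lam_off, p_switch, steps, seed=0):
    """Minimal sketch of a two-state MMPP: each unit-time step draws a
    Poisson-distributed request count at the current state's rate; after
    each step the chain switches state with probability p_switch."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method: multiply uniforms until the product drops below e^-lam.
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    state_on = True
    counts = []
    for _ in range(steps):
        counts.append(poisson(lam_on if state_on else lam_off))
        if rng.random() < p_switch:
            state_on = not state_on
    return counts

counts = simulate_two_state_mmpp(lam_on=5.0, lam_off=0.1, p_switch=0.2, steps=1000)
```
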
  • a storage device server may be considered as including a pending queue where incoming requests are queued and a storage device, such as one of the storage devices 106 , on which data is read or written.
  • Data on a hard disk drive in particular is placed on concentric circular tracks on platters that rotate at constant speed.
  • When a request in the queue is selected to be served, the disk head is moved to the appropriate track, where it waits until the appropriate sector is positioned under it, and then transfers (reads or writes) the data under consideration from and/or to the desired hard disk location.
  • The access time for a hard disk drive includes seek time (the time to travel to the right track), rotational latency (the time to access the correct sector), and transfer time (of the data). In modern hard disk drives, seek time and rotational latency dominate the transfer time.
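The claim that seek and rotation dominate transfer can be checked with back-of-the-envelope arithmetic. The drive figures below (average seek, 7200 RPM, 150 MB/s sustained transfer, 4 KiB request) are assumed representative values, not numbers from the patent.

```python
# Representative (assumed) figures for a modern 7200 RPM hard disk drive:
avg_seek_ms = 8.5                          # average seek time
rotational_latency_ms = 0.5 * 60e3 / 7200  # half a revolution ≈ 4.17 ms
transfer_ms = 4096 / 150e6 * 1e3           # 4 KiB at 150 MB/s ≈ 0.027 ms

access_ms = avg_seek_ms + rotational_latency_ms + transfer_ms
# Seek + rotation contribute ~12.7 ms; transfer is two orders of magnitude smaller.
```
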
  • The mechanism 108 in one embodiment is the component that performs a methodology for placing the logical data stores 102 on the storage devices 106 such that store request time is minimized. That is, the mechanism 108 determines on which of the storage devices 106 each of the logical data stores 102 can reside. Thus, clients access the logical data stores 102, which are placed, or stored, on the storage devices 106 as determined by the mechanism 108 in a way that store request time by these clients is minimized. When a client accesses a logical data store 102, the mechanism 108 can be considered to map such a request to the corresponding storage device 106 on which the logical data store 102 has been placed. A detailed presentation of one such methodology is described in the next section of the detailed description.
  • the mechanism 108 in one embodiment resides in, or is situated within, one or more of a number of different components commonly found within computing systems.
  • the mechanism 108 may be implemented within a logical volume manager (LVM), which more generally is a logical space-to-physical space mapping mechanism that maps the logical data stores 102 to the storage devices 106 .
  • the mechanism 108 may be implemented within the file system of the storage devices 106 .
  • the mechanism 108 may be implemented within a database that directly employs raw partitions of the storage devices 106 without using a filesystem.
  • the mechanism 108 may further be implemented within a controller for the array 104 of the storage devices 106 .
  • FIG. 2 shows one implementation of the mechanism 108 , according to an embodiment of the invention.
  • the mechanism 108 includes a mapper 202 , a predictor 204 , and a manager 206 .
  • the mapper 202 stores the mappings of the logical data stores 102 to the storage devices 106 .
  • the mapper 202 interacts directly with client accesses to the logical data stores 102 , and with the storage devices 106 themselves.
  • the predictor 204 receives and/or monitors information regarding the logical data stores 102 and the storage devices 106 through the mapper 202 .
  • the predictor 204 estimates various stream parameters by probing the data path of the logical data stores 102 to the storage devices 106 .
  • These stream parameters may include the request arrival rate, expected service time, and the second moment of the service time, as have been described previously.
  • the predictor 204 can in one embodiment employ time-series analysis-based prediction, as known within the art, to estimate the request arrival rate.
  • Other parameters, such as the expected service time and the second moment of this expected service time may be estimated by employing a history-based sliding window model with the weight of a measurement falling exponentially with the age of the measurement, as can be appreciated by those of ordinary skill within the art.
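The history-based model above, in which a measurement's weight decays exponentially with its age, can be sketched as an exponentially weighted running estimate of E(S) and E(S²). The function name and the smoothing factor alpha are assumptions of this sketch.

```python
def ewma_moments(samples, alpha=0.3):
    """Exponentially age-weighted estimates of E(S) and E(S²): each new
    sample gets weight alpha, and older history decays by (1 - alpha) per
    step, so a measurement's weight falls exponentially with its age."""
    m1 = m2 = None
    for s in samples:
        m1 = s if m1 is None else alpha * s + (1 - alpha) * m1
        m2 = s * s if m2 is None else alpha * s * s + (1 - alpha) * m2
    return m1, m2

# Constant service-time samples: the estimates converge to the value and its square.
m1, m2 = ewma_moments([4.0, 4.0, 4.0])
```
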
  • the manager 206 receives the stream, or logical data store, parameters from the predictor 204 , and determines the placement of the logical data stores 102 on the storage devices 106 on that basis. Once this determination has been made, the manager 206 notifies the mapper 202 , which stores the logical data store-to-storage device mappings. That is, the mapper 202 actually places the logical data stores 102 on the storage devices 106 , as instructed by the manager 206 .
  • FIG. 3 shows a method 300 for placing logical data stores on storage devices such that store request time can be minimized, according to an embodiment of the invention.
  • the method 300 may be performed in one embodiment by the mechanism 108 .
  • the mechanism 108 may determine which of the storage devices 106 to place each of the logical data stores 102 .
  • the method 300 can be considered as leveraging the notion that the average waiting time for a request on a storage device can be divided into the time the disk was seeking, the time the disk was rotating, and the time that the disk was transferring data, which have been described above.
  • E(ω_j) is the average waiting time for storage device D_j.
  • E(ω_{j,s}) is the average waiting time due to seeks.
  • E(ω_{j,r}) is the average waiting time due to rotation.
  • E(ω_{j,t}) is the average waiting time due to data transfer.
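The three components listed above combine, using the symbols already defined, into a single decomposition of the average waiting time for storage device D_j:

```latex
E(\omega_j) = E(\omega_{j,s}) + E(\omega_{j,r}) + E(\omega_{j,t})
```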
  • Minimizing the average seek waiting time is referred to herein as solving the seek time issue.
  • Minimizing the average waiting due to rotation is referred to herein as solving the rotational delay issue.
  • Minimizing the average waiting due to transfer is referred to herein as solving the transfer time issue.
  • The method 300 minimizes store request time by minimizing the average waiting time E(ω_j), which in turn can be achieved by minimizing one or more of the average waiting time due to seeks E(ω_{j,s}), the average waiting time due to rotational latency E(ω_{j,r}), and the average waiting time due to transfer E(ω_{j,t}).
  • the seek time issue relates to the fact that the seek time for a request depends directly on the scheduling methodology employed by the controller of the storage device in question.
  • Many hard disk drive controllers in particular use a C-SCAN scheduling methodology, as known within the art. For simplicity, it is assumed that seek time is proportional to the number of tracks covered.
  • the disk head moves from the outermost track to the innermost track and serves requests in the order in which it encounters them.
  • A request, therefore, sees no delay due to other requests being served. Instead, the disk head moves in a fixed manner and serves requests as they come in its path, without spending any time in serving the requests. This is a direct implication of the linearity assumption and the fact that no time is spent serving a request. Mathematical analysis has shown that the average delay in seeking is half of the time required to seek the complete disk (T_S), or E(ω_{j,s}) = T_S / 2.
  • Here, ω_j is the access time for storage device D_j.
  • The rotational delay issue relates to the notion that even though the rotational delay of a request may not depend on the location of the previously accessed request, the requests are not served in first come, first served (FCFS) fashion, but rather are reordered by a parameter other than arrival time. However, the rotational delay issue can nevertheless still be formulated using queuing-theoretic results for FCFS. This is because, first, it can be proven that any work-conserving permutation of R_s, which is an ordered request set where all requests r_i ∈ R_s have the same service time s, has a total waiting time equal to the waiting time of R_s.
  • The disk run length L_i^d of a logical data store G_i is defined, for a given schedule σ_j of requests on a storage device, as the expected number of requests of the logical data store that are served in a consecutive fashion in σ_j where access locations are proximate to one another.
  • Disk run length is in some sense the run length of a logical data store as perceived by the controller for a storage device.
  • Even if a logical data store is completely sequential in its stream, as far as the storage device is concerned it can serve only a number of such consecutive requests together, and this number is denoted as the disk run length of the logical data store in question.
  • The rotational delay issue can then be represented, for the logical data stores placed on storage device D_j, by the Pollaczek–Khinchine-style expression (15): E(ω_{j,r}) = Σ_i λ_i E(S_{i,r}²) / (2 (1 − Σ_i λ_i E(S_{i,r}))).
  • The transfer time issue can be formulated in the same manner in which the rotational delay issue has been formulated, by replacing E(S_{i,r}) with E(S_{i,t}) and E(S_{i,r}²) with E(S_{i,t}²) in expression (15).
  • The only difference is that there may be no relationship between E(S_{i,t}) and E(S_{i,t}²), since transfer times can be arbitrarily variable.
  • the method 300 is applied to N logical data stores in relation to M storage devices.
  • the average load over all the storage devices is determined ( 302 ).
  • The average load ρ can be determined as ρ = (1/M) Σ_{i=1}^{N} A[i].λ · A[i].E(S_r), where:
  • A[i].λ is the request arrival rate of logical data store i, and
  • A[i].E(S_r) is the service time for requests made to logical data store i.
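The average-load computation, the per-device load target of part 302, can be sketched directly from the per-store arrival rates and service times. The function name and the (λ, E(S)) pairs below are illustrative assumptions.

```python
def average_load(stores, num_devices):
    """Average load ρ over M storage devices: (1/M) · Σ_i λ_i · E(S_i),
    where each store is given as an (arrival_rate, mean_service_time) pair."""
    return sum(lam * es for lam, es in stores) / num_devices

# Three stores as (λ_i, E(S_i)) pairs, placed over two devices:
rho = average_load([(50.0, 0.004), (100.0, 0.002), (25.0, 0.008)], num_devices=2)
# Each store contributes a load of 0.2, so ρ = (0.2 + 0.2 + 0.2) / 2 = 0.3.
```
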
  • the logical data stores are then sorted ( 304 ).
  • the logical data stores are sorted by run length.
  • The run length of a logical data store corresponds to the expected number of requests of the logical data store that are served consecutively where the access locations on the storage device are proximate to one another. More formally, the run length L_i of a logical data store G_i is defined as the expected number of consecutive requests of G_i that immediately follow r_k and access a location that is close (within the track boundary) to loc_k, where r_k is a request of the store G_i accessing a location loc_k.
  • Logical data stores with higher run length come earlier in the order than logical data stores with lower run length.
  • Sorting the logical data stores by run length allows the rest of the method 300 to minimize request time by solving the seek time issue, which refers to the time to travel to the right track, as well as the rotational latency issue, which refers to the time to access the correct sector. Sorting the logical data stores by run length does not allow the rest of the method 300 to minimize request time by solving the transfer time (of the data to the storage device) issue. However, this is acceptable because transfer times are an order of magnitude smaller than rotational times, for instance. Sorting the logical data stores by run length is especially appropriate for homogeneous traffic, such as multimedia constant-bit-rate applications, where transfer time has low variance.
  • the logical data stores may instead be sorted by their expected second moments of service time, which corresponds to the second moment of the expected service time of each request of a logical data store, where the expected service time corresponds to the expected delay time after a request has been made until it has been serviced.
  • Such sorting may be advantageous where it cannot be assumed that transfer times are small as compared to rotational latency and seek times.
  • The service time of a request r_k, excluding the seek time component, can be represented by a single equation. This is because once the schedule is fixed, the variation in waiting time from FCFS is captured by the seek time problem, and the rotational delay and transfer time issues for a stream G_k can be considered as a combined problem with service time S_{k,rt} = S_{k,r} + S_{k,t}.
  • the rotational delay issue and the transfer time issue can be combined into an issue that is referred to as the rotational transfer issue herein, as follows.
  • The logical data stores are sorted by E(S_{i,rt}²).
  • the method sets a logical data store counter i to a numerical value one ( 306 ), as well as a storage device counter j to a numerical value one ( 308 ).
  • the method 300 then repeats parts 312 , 314 , and 322 until the storage device counter j exceeds the total number of storage devices M within the array.
  • The load ρ_j for storage device j is initially set to a numerical value of zero (312). While this load is less than the average load ρ (314), parts 316, 318, and 320 are performed.
  • the logical data store i is allocated to, or placed on, storage device j ( 316 ).
  • The load for the storage device j is then incremented as ρ_j ← ρ_j + A[i].λ · A[i].E(S_r) (318), where:
  • A[i].λ is the request arrival rate of logical data store i, and
  • A[i].E(S_r) is the service time for the requests of logical data store i.
  • The logical data store counter i is incremented by one (320).
  • the method 300 increments the storage device counter j ( 322 ), and the method 300 is repeated in part 310 until all the storage devices within the array have been processed.
  • the algorithm of method 300 returns a logical data store allocation over the storage devices such that on average the waiting time is minimized, while at the same time the storage devices have balanced loads.
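The allocation loop of the method 300 (parts 302 through 322) can be sketched as follows. The dict keys ('lam', 'es', 'run_length'), the sample values, and the guard against running out of stores are assumptions of this sketch, not part of the patent's own pseudocode.

```python
def place_stores(stores, num_devices):
    """Sketch of method 300: sort stores by run length (descending), then
    fill each storage device in turn until its load reaches the average
    load ρ over all devices.  Each store is a dict with assumed keys
    'lam' (arrival rate), 'es' (E(S)), and 'run_length'."""
    # Average load over all devices (302): ρ = (1/M) Σ λ_i · E(S_i)
    rho = sum(s['lam'] * s['es'] for s in stores) / num_devices
    # Sort stores, higher run length first (304)
    order = sorted(stores, key=lambda s: s['run_length'], reverse=True)
    allocation = {j: [] for j in range(num_devices)}
    i = 0                                            # store counter (306)
    for j in range(num_devices):                     # device counter loop (308/322)
        load_j = 0.0                                 # (312)
        while load_j < rho and i < len(order):       # (314), with an added bounds guard
            allocation[j].append(order[i])           # allocate store i to device j (316)
            load_j += order[i]['lam'] * order[i]['es']  # increment device load (318)
            i += 1                                   # next store (320)
    return allocation

stores = [
    {'lam': 50.0,  'es': 0.004, 'run_length': 8},
    {'lam': 100.0, 'es': 0.002, 'run_length': 4},
    {'lam': 25.0,  'es': 0.008, 'run_length': 2},
]
alloc = place_stores(stores, num_devices=2)
```

With these numbers ρ = 0.3 and each store carries a load of 0.2, so the first device takes the two highest-run-length stores before crossing ρ, and the remaining store goes to the second device, yielding roughly balanced loads.
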
  • It has been assumed that seek times are linear in the number of storage device tracks covered.
  • In practice, disk heads can take some time to start moving. They then accelerate for some time before settling at a constant speed.
  • In the constant-speed phase, seek times are represented by a constant component and a linear component.
  • The acceleration phase is represented by a constant component and a square-root component. If the number of logical data stores on a storage device is small, the equations for the constant-speed phase can be used throughout. Otherwise, they are nevertheless a reasonable approximation.
  • An advantage of the model described within this invention is that the non-linear model also leads to optimal results.
  • The methodology of the method 300 of FIG. 3 depends, however, on the specific storage devices employed. As a result, the logical data store assignment may potentially vary depending on the storage devices used. Estimating the run length of a stream (or store) G_i has been shown, but no specific methodology has been provided to estimate the disk run length of a logical data store G_i on a disk server D_i. However, it can be observed that this actual value is not needed. Rather, an order of the values is sufficient for the methodology of FIG. 3 to perform properly. That is, for all streams G_i, G_j with i, j ∈ {1, …, N}, L_i ≥ L_j if and only if DRL_i^y ≥ DRL_j^y.
  • DRL_x^y is the disk run length for stream, or logical data store, x in relation to storage device y. It can be shown that an ordering based on run length is the same as an ordering based on disk run length. Hence, the method of this invention advantageously sorts streams based on run length, which can be easily estimated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Logical data stores are placed on storages to minimize store request time. The stores are sorted. A store counter and a storage counter are each set to one. (A), (B), and (C) are repeated until the storage counter exceeds the number of storages within the array. (A) is setting a load for the storage specified by the storage counter to zero. (B) is performing (i), (ii), and (iii) while the load for the storage specified by the storage counter is less than the determined average load over all the storages. (i) is allocating the store specified by the store counter to the storage specified by the storage counter; (ii) is incrementing the load for this storage by this store's request arrival rate multiplied by an expected service time for the requests of this store; and (iii) is incrementing the store counter by one. (C) is incrementing the storage counter by one.

Description

    FIELD OF THE INVENTION
  • This invention relates generally to placing (i.e., allocating) logical data stores on an array of storage devices, and more particularly to placement such that store request time is minimized.
  • BACKGROUND OF THE INVENTION
  • Parallel input/output (I/O) systems have been employed due to their ability to provide fast and reliable access, while supporting high transfer rates for dedicated supercomputing applications as well as diverse enterprise applications. Disk arrays are typically arranged to partition data across multiple hard disk drives within a storage pool, and provide concurrent access to multiple applications at the same time. A single application having large data requirements may further partition its data into stores and place them across multiple disks, such that the resulting parallelism alleviates the I/O bottleneck to a certain degree.
  • However, in a modern web-services scenario where performance guarantees are in place, throughput is no longer the only performance requirement for applications. Many applications require that the average response time of their requests is maintained within certain thresholds, such that the average response time does not exceed a predetermined maximum time. Since storage latencies continue to dominate request response times, reducing the response time of a request effectively means minimizing storage latency. The high variance within service times due to the heterogeneous applications serviced from a disk array, combined with the non-work-conserving nature of disk drives, implies that the response time of the requests of a logical data store is influenced primarily by the characteristics of other logical data stores placed on the same disk.
  • A logical data store can be a database table, files owned by a particular user, or data used by an application, among other types of logical data stores. A number of logical data stores may be placed over an array of parallel hard disk drives, which can be referred to as disks, or more generally as storage devices. A sequence of disk requests generated by an application or user can be denoted as a stream, and the logical data group accessed by the stream can be synonymously considered a logical data store as well.
  • Where there are a number of logical data stores to be placed on an array of storage devices, they are desirably placed on the storage devices such that the average response time for all store requests is minimized, and such that their workload is balanced across all the storage devices. This issue also finds applications in web services, where user streams—i.e., logical data stores—are allocated to different web servers, and each server may manage its own storage. Current strategies for placing logical data stores on storage devices, however, do not minimize response time.
  • SUMMARY OF THE INVENTION
  • This invention relates to placing logical data stores on an array of storage devices such that store request time is minimized. A method of one embodiment of this invention determines the average load over all the storage devices within the array. The logical data stores are sorted by some metric of the stores, and both a logical data store counter and a storage device counter are set equal to one. The following steps, parts, acts, or actions are repeated until the storage device counter exceeds the number of the storage devices within the array. First, a load for the storage device specified by the storage device counter is set equal to zero. Second, while the load for the storage device specified by the storage device counter is less than the average load over all the storage devices within the array, the following steps, parts, acts, or actions are performed:
  • allocating the logical data store specified by the logical data store counter to the storage device specified by the storage device counter;
  • incrementing the load for the storage device specified by the storage device counter by the product of a request arrival rate of the logical data store specified by the logical data store counter and an average service time for the requests of the logical data store specified by the logical data store counter; and,
  • incrementing the logical data store counter by one.
  • Third, the storage device counter is incremented by one. The result of the method is that the logical data stores are stored on the storage devices to which the logical data stores have been allocated, for user access of the logical data stores.
  • A data-processing system of an embodiment of the invention includes an array of storage devices over which a plurality of logical data stores is placed. The system further includes a mechanism coupled to the array of storage devices to determine on which storage device of the array of storage devices each logical data store is to be placed such that request times of the logical data stores are minimized. In a further embodiment, the system instead includes means for allocating each data store to one of the storage devices of the array of storage devices, such that request times of the logical data stores are minimized.
  • An advantage of the foregoing is that the average response time for logical data store requests is significantly reduced by how the logical data stores are placed on the storage devices of an array. Enterprises and other organizations using embodiments of the invention are therefore better able to efficiently meet performance guarantees that require the average response time to remain under a specified threshold. Further advantages, aspects, and embodiments of the invention will become apparent by reading the detailed description that follows, and by referring to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The drawings referenced herein form a part of the specification. Features shown in the drawing are meant as illustrative of only some embodiments of the invention, and not of all embodiments of the invention, unless otherwise explicitly indicated, and implications to the contrary are otherwise not to be made.
  • FIG. 1 is a diagram of a system for placing logical data stores on an array of storage devices, according to an embodiment of the invention.
  • FIG. 2 is a diagram of a portion of the system of FIG. 1 in more detail, according to an embodiment of the invention.
  • FIG. 3 is a flowchart of a method for placing logical data stores on an array of storage devices, such that store request time is minimized, according to an embodiment of the invention.
  • DETAILED DESCRIPTION OF THE DRAWINGS
  • In the following detailed description of exemplary embodiments of the invention, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized, and logical, mechanical, and other changes may be made without departing from the spirit or scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims.
  • System and Overview
  • FIG. 1 shows a data-processing system 100, according to an embodiment of the invention. The system 100 includes N logical data stores 102A, 102B, . . . , 102N, collectively referred to as the logical data stores 102. The system 100 further includes an array 104 of M storage devices 106A, 106B, . . . , 106M, collectively referred to as the storage devices 106. The number N of logical data stores 102 may be equal to, greater than, or less than the number M of storage devices 106. The system 100 also includes a mechanism 108, the functionality of which will be described later in the detailed description, and which may be implemented in software, hardware, or a combination of software and hardware.
  • The logical data stores 102 are each a logically aggregated set of data. For instance, within a database scenario, a logical data store is a table or a set of associated tables. In a shared filesystem, all files belonging to a given user may constitute a store. In an information technology (IT) production scenario, all source files or all email files may constitute a logical data store. Access to each of the logical data stores 102 is represented as a number of streams, where each stream can be considered an individual access from an application or a user. All such streams on an aggregated basis may therefore be considered synonymous with a logical data store. That is, as used herein, the notions of logical data stores and logical data streams are combined, such that either a logical data store or a (logical data) stream may be used to denote a set of logically grouped requests.
  • The storage devices 106 of the storage device array 104 may be hard disk drives in one embodiment. The storage devices 106 may each be an individual, single hard disk drive, or may each be a (sub-)array within the storage device array 104 itself. For instance, each of the storage devices 106 may be considered a RAID array in one embodiment of the invention.
  • The mechanism 108 locates, or maps, the logical data stores 102 over the storage devices 106 of the storage device array 104 such that request time as to the logical data stores 102, on average, is minimized. More specifically, given $N$ logical data streams or stores $G_i$, and a set of $M$ data storage devices $D_j$ in which to place the data stores, response time minimization locates an allocation of data stores to storage devices (denoted by a set of mappings $x_{i,j}$, where $x_{i,j}=1$ if store $G_i$ is placed on storage device $D_j$) such that the response time averaged over the requests on all the storage devices is minimized, subject to the additional constraint that the load is balanced evenly across all the storage devices.
  • More formally, the foregoing can be expressed as follows:
  • $$\min\ \frac{1}{\lambda_{\text{tot}}}\sum_{j=1}^{M}\lambda_j\,E(\delta_j) \quad (1)$$
  • $$\text{s.t.}\ \forall\ \text{streams } G_i:\ \sum_{j=1}^{M} x_{i,j}=1,\qquad x_{i,j}\in\{0,1\} \quad (2)$$
  • $$\forall\ \text{storage devices } D_j, D_k:\ \lambda_j\,E(S_j)=\lambda_k\,E(S_k) \quad \text{(balanced load condition)} \quad (3)$$
  • $$\forall\ \text{storage devices } D_j:\ \lambda_{D_j}=\sum_{i=1}^{N} x_{i,j}\,\lambda_i \quad (4)$$
  • $$\lambda_{\text{tot}}=\sum_{i=1}^{N}\lambda_i \quad (5)$$
  • $E(\delta_j)$ denotes the response time for storage device $D_j$ for a given allocation. The request (arrival) rate, the expected service time, and the second moment of the service time for a disk $D_k$ or a logical data store $G_i$ are denoted by $\lambda_k$, $E(S_k)$, and $E(S_k^2)$, respectively. The request arrival rate specifies the rate at which requests to the logical data store or storage device arrive. The expected service time specifies the expected length of time needed to serve a request. It is noted that in cases of ambiguity, $\lambda_{D_k}$, $E(S_{D_k})$, and $E(S_{D_k}^2)$ are used for storage device parameters, to distinguish them from stream or logical data store parameters.
  • The logical data stores 102 may each be represented as a set of requests with associated statistical parameters estimated a priori. Each data store may be identified by $G_i(\lambda_i, E(S_i), E(S_i^2), V_i)$, where $\lambda_i$ is the arrival rate of the requests, $E(S_i)$ is the expected service time of each request, $E(S_i^2)$ is the expected second moment of the service time of each request, and $V_i$ is the size of the data store.
  • Request arrivals can be modeled by a Markov Modulated Poisson Process (MMPP), as known within the art. An MMPP is essentially modeled as a Poisson process with multiple states, where a given state determines the mean Poisson parameter λ. Where the storage devices 106 are hard disk drives, a two-state MMPP may be employed, where one state represents the on period and the other state represents the off period of the store placed on a given storage device.
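  • The two-state modulation can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation; the function name, parameter names, and values are all assumptions made for the example.

```python
import random

def mmpp2_interarrivals(lam_on, lam_off, p_switch, n, seed=0):
    """Sketch of a two-state Markov Modulated Poisson Process.

    The 'on' state generates arrivals at rate lam_on and the 'off'
    state at rate lam_off; after each arrival the modulating chain
    switches state with probability p_switch. Returns n interarrival
    times. All names and parameters are illustrative.
    """
    rng = random.Random(seed)
    state, gaps = 0, []
    for _ in range(n):
        rate = lam_on if state == 0 else lam_off
        gaps.append(rng.expovariate(rate))  # exponential gap at the current rate
        if rng.random() < p_switch:
            state = 1 - state  # modulating Markov chain transition
    return gaps
```

A stream that is bursty while "on" and nearly idle while "off" would use a large lam_on and a small lam_off.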
  • A storage device server may be considered as including a pending queue where incoming requests are queued and a storage device, such as one of the storage devices 106, on which data is read or written. Data on a hard disk drive in particular is placed on concentric circular tracks that rotate at constant speed. When a request in the queue is selected to be served, the disk head is moved to the appropriate track, where it waits until the appropriate sector is positioned under the disk head, and then transfers (reads or writes) the data under consideration from and/or to the desired hard disk location. Hence, the access time for a hard disk drive includes seek time (the time to travel to the right track), rotational latency (time to access the correct sector), and transfer time (of the data). In modern hard disk drives, the seek and rotational latency dominate the transfer times.
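  • The access-time decomposition can be illustrated numerically. The figures below are assumed, typical values for a 7200 RPM drive, not values from the patent; they show how seek time and rotational latency dominate the transfer time.

```python
def disk_access_time_ms(seek_ms, rotational_ms, transfer_ms):
    """Access time as the sum of seek, rotational latency, and transfer."""
    return seek_ms + rotational_ms + transfer_ms

# Assumed values: ~8.5 ms average seek; half a revolution of rotational
# latency at 7200 RPM, i.e. 0.5 * (60000 / 7200) ~= 4.17 ms; ~0.5 ms transfer.
total_ms = disk_access_time_ms(seek_ms=8.5, rotational_ms=4.17, transfer_ms=0.5)
```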
  • The mechanism 108 in one embodiment is the component that performs a methodology for placing the logical data stores 102 on the storage devices 106 such that store request time is minimized. That is, the mechanism 108 determines on which of the storage devices 106 each of the logical data stores 102 is to reside. Thus, clients access the logical data stores 102, which are placed, or stored, on the storage devices 106 as determined by the mechanism 108, in a way that store request time by these clients is minimized. When a client accesses a logical data store 102, the mechanism 108 can be considered to map such a request to the corresponding storage device 106 on which the logical data store 102 has been placed. A detailed presentation of one such methodology is described in the next section of the detailed description.
  • The mechanism 108 in one embodiment resides in, or is situated within, one or more of a number of different components commonly found within computing systems. For instance, the mechanism 108 may be implemented within a logical volume manager (LVM), which more generally is a logical space-to-physical space mapping mechanism that maps the logical data stores 102 to the storage devices 106. The mechanism 108 may be implemented within the file system of the storage devices 106. The mechanism 108 may be implemented within a database that directly employs raw partitions of the storage devices 106 without using a filesystem. The mechanism 108 may further be implemented within a controller for the array 104 of the storage devices 106.
  • FIG. 2 shows one implementation of the mechanism 108, according to an embodiment of the invention. Particularly, the mechanism 108 includes a mapper 202, a predictor 204, and a manager 206. The mapper 202 stores the mappings of the logical data stores 102 to the storage devices 106. The mapper 202 interacts directly with client accesses to the logical data stores 102, and with the storage devices 106 themselves.
  • The predictor 204 receives and/or monitors information regarding the logical data stores 102 and the storage devices 106 through the mapper 202. In particular, the predictor 204 estimates various stream parameters by probing the data path of the logical data stores 102 to the storage devices 106. These stream parameters may include the request arrival rate, expected service time, and the second moment of the service time, as have been described previously. The predictor 204 can in one embodiment employ time-series analysis-based prediction, as known within the art, to estimate the request arrival rate. Other parameters, such as the expected service time and the second moment of this expected service time, may be estimated by employing a history-based sliding window model with the weight of a measurement falling exponentially with the age of the measurement, as can be appreciated by those of ordinary skill within the art.
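  • A history-based estimator whose weights fall exponentially with measurement age can be sketched as below. The smoothing factor alpha is an assumed parameter, not one specified by the patent, and the function name is illustrative.

```python
def ewma_moments(samples, alpha=0.3):
    """Estimate E(S) and E(S^2) from observed service times.

    Each new measurement is blended into the running estimates so that
    the weight of an old measurement decays exponentially with its age,
    mimicking the history-based sliding window described in the text.
    alpha is an assumed smoothing factor.
    """
    m1 = m2 = None
    for s in samples:
        if m1 is None:
            m1, m2 = s, s * s  # initialize from the first sample
        else:
            m1 = alpha * s + (1 - alpha) * m1      # running E(S)
            m2 = alpha * s * s + (1 - alpha) * m2  # running E(S^2)
    return m1, m2
```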
  • The manager 206 receives the stream, or logical data store, parameters from the predictor 204, and determines the placement of the logical data stores 102 on the storage devices 106 on that basis. Once this determination has been made, the manager 206 notifies the mapper 202, which stores the logical data store-to-storage device mappings. That is, the mapper 202 actually places the logical data stores 102 on the storage devices 106, as instructed by the manager 206.
  • Method and Conclusion
  • FIG. 3 shows a method 300 for placing logical data stores on storage devices such that store request time can be minimized, according to an embodiment of the invention. The method 300 may be performed in one embodiment by the mechanism 108. For instance, the mechanism 108 may determine on which of the storage devices 106 to place each of the logical data stores 102.
  • It is noted first that the method 300 can be considered as leveraging the notion that the average waiting time for a request on a storage device can be divided into the time the disk was seeking, the time the disk was rotating, and the time that the disk was transferring data, which have been described above. Mathematically,

  • $$E(\delta_j)=E(\delta_{j,s})+E(\delta_{j,r})+E(\delta_{j,t}) \quad (6)$$
  • In equation (6), $E(\delta_j)$ is the average waiting time for storage device $D_j$, $E(\delta_{j,s})$ is the average waiting time due to seeks, $E(\delta_{j,r})$ is the average waiting time due to rotation, and $E(\delta_{j,t})$ is the average waiting time due to data transfer.
  • Minimizing the average seek waiting time is referred to herein as solving the seek time issue. Minimizing the average waiting due to rotation is referred to herein as solving the rotational delay issue. Likewise, minimizing the average waiting due to transfer is referred to herein as solving the transfer time issue. Thus, the method 300 minimizes store request time by minimizing the average waiting time E(δj), which in turn can be considered by minimizing one or more of the average waiting time due to seeks E(δj,s), the average waiting time due to rotational latency E(δj,r), and the average waiting time due to transfer E(δj,t).
  • The seek time issue relates to the fact that the seek time for a request depends directly on the scheduling methodology employed by the controller of the storage device in question. Many hard disk drive controllers in particular use a C-SCAN scheduling methodology, as known within the art. For simplicity, it is assumed that seek time is proportional to the number of tracks covered. Within the C-SCAN scheduling methodology, the disk head moves from the outermost track to the innermost track and serves requests in the order in which it encounters them.
  • A request, therefore, sees no delay due to other requests being served. Instead, the disk head moves in a fixed manner and serves requests as they come in its path, without spending any time serving the requests. This is a direct implication of the linearity assumption and the fact that no time is spent serving a request. Mathematical analysis has shown that the average seek delay is half of the time required to seek across the complete disk ($T_S$), or,
  • $$E(\delta_{j,s})=\frac{T_S}{2} \quad (7)$$
  • Therefore, the objective in solving the seek time issue is given by
  • $$\min\ \frac{1}{\lambda_{\text{tot}}}\sum_{j=1}^{M}\lambda_j\,\frac{T_S}{2} \;=\; \min\ \frac{1}{\lambda_{\text{tot}}}\cdot\frac{T_S}{2}\sum_{j=1}^{M}\lambda_j \;=\; \frac{T_S}{2} \quad (8)$$
  • Here, $\lambda_j$ is the request arrival rate for storage device $D_j$. Thus, solving the seek time problem is independent of the allocation of logical data stores to storage devices: any allocation of logical data stores to storage devices is optimal for the seek time issue. Consequently, only the rotational delay and transfer time issues need be optimized, and any solution that is optimal for both of them is optimal for the overall placement of logical data stores on the storage devices.
  • The rotational delay issue relates to the notion that, even though the rotational delay of a request may not depend on the location of the previously accessed request, the requests are not served in first come, first served (FCFS) fashion, but rather are reordered by a parameter other than arrival time. The rotational delay issue can nevertheless still be formulated using queuing-theoretic results for FCFS. This is because, first, it can be proven that any work-conserving permutation of $R_s$, which is an ordered request set where all requests $r_i \in R_s$ have the same service time $s$, has a total waiting time equal to the waiting time of $R_s$. Second, for a randomly ordered request set $R$ with general service times, it can be proven that any random permutation of $R$ has the same expected total waiting time as the expected total waiting time of the ordered set $R$. Therefore, the rotational delay $E(\delta_{j,r})$ for a storage device $D_j$ is estimated on this basis.
  • It is noted that a notion called the disk (i.e., storage device) run length $L_i^d$ of a logical data store $G_i$ is defined, for a given schedule $\Psi_j$ of requests on a storage device, as the expected number of requests of the logical data store that are served in a consecutive fashion in $\Psi_j$ where access locations are proximate to one another. Disk run length is in some sense the run length of a logical data store as perceived by the controller for a storage device. Thus, even though a logical data store may be completely sequential in its stream, as far as the storage device is concerned, it can serve just a number of such consecutive requests together, and this number is denoted the disk run length of the logical data store in question.
  • It is noted that since arrivals are Markovian, the FCFS order is a random permutation of the requests. Therefore, where the scheduling methodology is not FCFS and is uncorrelated with the rotational delay $S_{k,r}$ of request $r_k$, the waiting time equals the waiting time in the FCFS order, and the standard results for FCFS can nevertheless be employed, as described in the previous paragraph. As such, the rotational delay issue can be represented as follows:
  • $$\min\ \sum_{j=1}^{M}\lambda_j\,E(\delta_{j,r}) \quad (9)$$
  • $$\forall\ \text{storage devices } D_j:\ \lambda_j=\sum_{i=1}^{N} x_{i,j}\,\lambda_i \quad (10)$$
  • $$\forall\ \text{logical data stores } G_i:\ E(S_{i,r})=\frac{S_{\text{rot}}/2}{L_i^d} \quad (11)$$
  • $$\forall\ \text{storage devices } D_j:\ E(S_{D_j,r})=\frac{\sum_{i=1}^{N} x_{i,j}\,\lambda_i\,E(S_{i,r})}{\lambda_j} \quad (12)$$
  • $$\forall\ \text{storage devices } D_j:\ E(S_{D_j,r}^2)=\frac{\sum_{i=1}^{N} x_{i,j}\,\lambda_i\,E(S_{i,r}^2)}{\lambda_j} \quad (13)$$
  • $$E(\delta_{j,r})=\frac{\lambda_j\,E(S_{j,r}^2)}{2\,(1-\lambda_j\,E(S_{j,r}))} \quad (14)$$
  • Here, $S_{\text{rot}}/2$ is the time taken by the storage device to complete a half rotation. Mathematical analysis can show that, under the assumption that all rotation times are equally likely and disk run length has low variance, $E(S_{i,r}^2)=c\,(E(S_{i,r}))^2$, where $c=4/3$. Even if this is not the case, $c$ is simply some other constant. Therefore, the optimization problem can be expressed as
  • $$\min\ c\sum_{j=1}^{M}\lambda_j\,\frac{\lambda_j\,(E(S_{j,r}))^2}{2\,(1-\lambda_j\,E(S_{j,r}))} \quad (15)$$
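  • The per-device waiting time in equation (14) is the Pollaczek-Khinchine mean waiting time of an M/G/1 queue, and it can be evaluated directly. The helper below is an illustrative sketch; as a sanity check, for an M/M/1 queue with arrival rate 0.5 and E(S) = 1 (so E(S²) = 2E(S)² = 2), the mean wait should come out to 1.

```python
def mg1_mean_wait(lam, es, es2):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
    E(delta) = lam * E(S^2) / (2 * (1 - lam * E(S))).
    Valid only while the utilization lam * E(S) is below 1.
    """
    util = lam * es
    if util >= 1.0:
        raise ValueError("unstable queue: utilization >= 1")
    return lam * es2 / (2.0 * (1.0 - util))
```

Note how the second moment of the service time enters linearly: a store with highly variable service times inflates the waiting time of every other store on the same device, which is the intuition behind sorting by the second moment.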
  • It is noted that the transfer time issue can be formulated in the same manner as the rotational delay issue, by replacing $E(S_{i,r})$ with $E(S_{i,t})$ and $E(S_{i,r}^2)$ with $E(S_{i,t}^2)$ in expression (15). The only difference is that there may be no relationship between $E(S_{i,t})$ and $E(S_{i,t}^2)$, since transfer times can be arbitrarily variable.
  • Now, the method 300 is applied to N logical data stores in relation to M storage devices. First, the average load over all the storage devices is determined (302). The average load can be determined as follows:
  • $$\rho=\frac{1}{M}\sum_{i=1}^{N}(A[i].\lambda)\cdot(A[i].E(S_r)) \quad (16)$$
  • In equation (16), $A[i].\lambda$ is the request arrival rate of logical data store $i$, and $A[i].E(S_r)$ is the expected service time for requests made to logical data store $i$.
  • The logical data stores are then sorted (304). In one embodiment, the logical data stores are sorted by run length. The run length of a logical data store corresponds to the expected number of requests of the logical data store that are served consecutively, where the access locations on the storage device are proximate to one another. More formally, the run length $L_i$ of a logical data store $G_i$ is defined as the expected number of consecutive requests of $G_i$ that immediately follow a request $r_k$ and access a location that is close (within the track boundary) to $loc_k$, where $r_k$ is a request of the store $G_i$ accessing a location $loc_k$. Thus, logical data stores with higher run length come earlier in the order than logical data stores with lower run length.
  • Sorting the logical data stores by run length allows the rest of the method 300 to minimize request time by solving the seek time issue, which refers to the time to travel to the right track, as well as the rotational latency issue, which refers to the time to access the correct sector. Sorting the logical data stores by run length does not allow the rest of the method 300 to minimize request time by solving the transfer time (of the data to the storage device) issue. This is acceptable, however, because transfer times are typically an order of magnitude smaller than rotational times. Sorting the logical data stores by run length is especially appropriate for homogeneous traffic, such as multimedia constant bit rate applications, where transfer time has low variance.
  • However, in a further embodiment, the logical data stores may instead be sorted by their expected second moments of service time, which corresponds to the second moment of the expected service time of each request of a logical data store, where the expected service time corresponds to the expected delay from when a request has been made until it has been serviced. Such sorting may be advantageous where it cannot be assumed that transfer times are small compared to rotational latency and seek times. What is leveraged here is the observation that, for a given scheduling methodology, the service time of a request $r_k$, excluding the seek time component, can be represented by a single equation. This is because once the schedule is fixed, the variation in waiting time from FCFS is captured by the seek time problem, and the rotational delay and transfer time issues for a stream $G_k$ can be considered as a combined problem with service time $S_{k,rt}$:

  • $$S_{k,rt}=S_{k,r}+S_{k,t} \quad (17)$$
  • Therefore, the rotational delay issue and the transfer time issue can be combined into an issue that is referred to as the rotational transfer issue herein, as follows.
  • $$\min\ \sum_{j=1}^{M}\lambda_j\,E(\delta_{j,rt}) \quad (18)$$
  • $$\forall\ \text{storage devices } D_j:\ \lambda_j=\sum_{i=1}^{N} x_{i,j}\,\lambda_i \quad (19)$$
  • $$\forall\ \text{logical data stores } G_i:\ E(S_{i,r})=\frac{S_{\text{rot}}/2}{L_i^d} \quad (20)$$
  • $$\forall\ \text{logical data stores } G_i:\ E(S_{i,rt})=E(S_{i,r})+E(S_{i,t}) \quad (21)$$
  • $$\forall\ \text{storage devices } D_j:\ E(S_{j,rt})=\frac{\sum_{i=1}^{N} x_{i,j}\,\lambda_i\,E(S_{i,rt})}{\lambda_j} \quad (22)$$
  • $$\forall\ \text{storage devices } D_j:\ E(S_{j,rt}^2)=\frac{\sum_{i=1}^{N} x_{i,j}\,\lambda_i\,E(S_{i,rt}^2)}{\lambda_j} \quad (23)$$
  • $$E(\delta_{j,rt})=\frac{\lambda_j\,E(S_{j,rt}^2)}{2\,(1-\lambda_j\,E(S_{j,rt}))} \quad (24)$$
  • Therefore, rather than sorting the logical data stores by run length, in this embodiment the logical data stores are sorted by $E(S_{i,rt}^2)$.
  • Next, the method sets a logical data store counter i to a numerical value of one (306), as well as a storage device counter j to a numerical value of one (308). The method 300 then repeats parts 312, 314, and 322 until the storage device counter j exceeds the total number of storage devices M within the array. The load ρj for storage device j is initially set to a numerical value of zero (312). While this load is less than the average load ρ (314), parts 316, 318, and 320 are performed.
  • The logical data store i is allocated to, or placed on, storage device j (316). The load for the storage device j is then incremented as follows (318):

  • $$\rho_j=\rho_j+(A[i].\lambda)\cdot(A[i].E(S_r)) \quad (25)$$
  • In equation (25), $A[i].\lambda$ is the request arrival rate of logical data store $i$, and $A[i].E(S_r)$ is the expected service time for the requests of logical data store $i$. Finally, the logical data store counter i is incremented by one (320).
  • Once the while condition is no longer satisfied in part 314, the method 300 increments the storage device counter j (322), and the method 300 is repeated in part 310 until all the storage devices within the array have been processed. The algorithm of method 300 returns a logical data store allocation over the storage devices such that on average the waiting time is minimized, while at the same time the storage devices have balanced loads.
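  • The flow of the method 300 above can be sketched in code. This is an illustrative sketch, not the patent's implementation: the store representation (field names "lam", "es", and "key") is assumed, and a bound on the store counter is added so the sketch terminates even when rounding leaves no stores remaining before the last device is reached.

```python
def place_stores(stores, num_devices):
    """Greedy placement sketch of method 300.

    stores: list of dicts with 'lam' (request arrival rate), 'es'
    (expected service time), and 'key' (sort metric such as run length
    or the second moment of the rotational-transfer service time).
    Returns, per device index, the list of stores placed on it.
    """
    # Part 302: average load over all devices (equation 16).
    avg_load = sum(s["lam"] * s["es"] for s in stores) / num_devices
    # Part 304: sort the stores by the chosen metric, highest first.
    ordered = sorted(stores, key=lambda s: s["key"], reverse=True)
    allocation = [[] for _ in range(num_devices)]
    i = 0  # part 306: logical data store counter
    for j in range(num_devices):  # parts 308-322: per-device loop
        load = 0.0  # part 312
        while load < avg_load and i < len(ordered):  # part 314
            allocation[j].append(ordered[i])              # part 316
            load += ordered[i]["lam"] * ordered[i]["es"]  # part 318, eq. (25)
            i += 1                                        # part 320
    return allocation
```

Because each device is filled until it reaches the average load, the resulting loads are balanced to within one store's load of each other.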
  • The foregoing discussion has assumed that seek times are linear in the number of storage device tracks covered. In practice, however, after serving a request, disk heads can take some time to start moving. They then accelerate for some time before settling at a constant speed. During the constant speed phase, seek times are represented by a constant component and a linear component. The acceleration phase is represented by a constant component and a square root component. If the number of logical data stores on a storage device is small, the equations for constant speed phase can be used throughout. Otherwise, they are nevertheless a reasonable approximation. An advantage with the model described within this invention is that the non-linear model also leads to optimal results.
  • The methodology of the method 300 of FIG. 3 depends, however, on the specific storage devices employed. As a result, the logical data store assignment may potentially vary depending on the storage devices used. Estimating the run length of a stream (or store) $G_i$ has been shown, but no specific methodology has been provided to estimate the disk run length of a logical data store $G_i$ on a disk server $D_i$. However, it can be observed that this actual value is not needed; rather, an ordering of the values is sufficient for the methodology of FIG. 3 to perform properly. That is, for all streams $G_i, G_j$, with $i, j \in \{1, \ldots, N\}$:

  • $$DRL_i^k \geq DRL_j^k \iff DRL_i^l \geq DRL_j^l \quad (26)$$
  • Here, $DRL_x^y$ is the disk run length for stream, or logical data store, $x$ in relation to storage device $y$. It can be shown that an ordering based on run length is the same as an ordering based on disk run length. Hence, the method of this invention advantageously sorts streams based on run length, which can be easily estimated.
  • It is noted that, although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This application is thus intended to cover any adaptations or variations of embodiments of the present invention. Therefore, it is manifestly intended that this invention be limited only by the claims and equivalents thereof.

Claims (2)

1. A method for placing a plurality of logical data stores on an array of storage devices such that store request time is minimized, comprising:
allocating the logical data stores to the storage devices of the array such that request times of the logical data stores are minimized;
storing the logical data stores on the storage devices of the array as has been allocated,
wherein allocating the logical data stores to the storage devices of the array such that request times of the logical data stores are minimized comprises:
determining an average load over all the storage devices within the array;
sorting the plurality of logical data stores;
setting a logical data store counter equal to one;
setting a storage device counter equal to one; and
repeating setting a load for the storage device specified by the storage device counter equal to zero;
while the load for the storage device specified by the storage device counter is less than the average load over all the storage devices within the array:
allocating the logical data store specified by the logical data store counter to the storage device specified by the storage device counter;
incrementing the load for the storage device specified by the storage device counter by a request arrival rate of the logical data store specified by the logical data store counter multiplied by an expected service time for the requests of the logical data store specified by the logical data store counter;
incrementing the logical data store counter by one; and
incrementing the storage device counter by one, until the storage device counter exceeds a number of the storage devices within the array,
wherein determining the average load over all the storage devices within the array comprises:
for each logical data store, determining a product of the request arrival rate of the logical data store multiplied by the expected service time for the requests of the logical data store;
determining a summation of the products determined for the logical data stores; and
dividing the summation by a number of the storage devices within the array to yield the average load over all the storage devices within the array,
wherein the request arrival rate of a logical data store specifies a rate at which requests to the logical data store arrive at the logical data store,
wherein the expected service time for the requests of a logical data store corresponds to an expected time of delay between a request being submitted to the disk from the queue and the request being served by the disk to which the logical store is assigned,
wherein sorting the plurality of logical data stores comprises sorting the plurality of logical data stores by run length,
wherein sorting the plurality of logical data stores comprises sorting the plurality of logical data stores by disk run length,
wherein the run length of a logical data store corresponds to an expected number of consecutive requests of the logical data store that are served and that access locations on the storage devices that are close to one another,
wherein the expected service time of each request corresponds to an expected time of delay between the request being submitted to the disk from the queue and the request being served by the disk to which the logical store is assigned,
wherein sorting the plurality of logical data stores comprises sorting the plurality of logical data stores by expected second moment of service time,
wherein the expected second moment of service time of a logical data store corresponds to a second moment of an expected service time of each request of the logical data store, and
wherein each logical data store corresponds to an aggregated plurality of streams of requests to the logical data store.
2-20. (canceled)
US11/622,008 2007-01-11 2007-01-11 Method and System for Placement of Logical Data Stores to Minimize Request Response Time Abandoned US20080172526A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/622,008 US20080172526A1 (en) 2007-01-11 2007-01-11 Method and System for Placement of Logical Data Stores to Minimize Request Response Time
US12/056,591 US9223504B2 (en) 2007-01-11 2008-03-27 Method and system for placement of logical data stores to minimize request response time

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/622,008 US20080172526A1 (en) 2007-01-11 2007-01-11 Method and System for Placement of Logical Data Stores to Minimize Request Response Time

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/056,591 Continuation US9223504B2 (en) 2007-01-11 2008-03-27 Method and system for placement of logical data stores to minimize request response time

Publications (1)

Publication Number Publication Date
US20080172526A1 true US20080172526A1 (en) 2008-07-17

Family

ID=39618650

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/622,008 Abandoned US20080172526A1 (en) 2007-01-11 2007-01-11 Method and System for Placement of Logical Data Stores to Minimize Request Response Time
US12/056,591 Expired - Fee Related US9223504B2 (en) 2007-01-11 2008-03-27 Method and system for placement of logical data stores to minimize request response time

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/056,591 Expired - Fee Related US9223504B2 (en) 2007-01-11 2008-03-27 Method and system for placement of logical data stores to minimize request response time

Country Status (1)

Country Link
US (2) US20080172526A1 (en)

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0718706D0 (en) 2007-09-25 2007-11-07 Creative Physics Ltd Method and apparatus for reducing laser speckle
US9335604B2 (en) 2013-12-11 2016-05-10 Milan Momcilo Popovich Holographic waveguide display
US11726332B2 (en) 2009-04-27 2023-08-15 Digilens Inc. Diffractive projection apparatus
US11320571B2 (en) 2012-11-16 2022-05-03 Rockwell Collins, Inc. Transparent waveguide display providing upper and lower fields of view with uniform light extraction
US10795160B1 (en) 2014-09-25 2020-10-06 Rockwell Collins, Inc. Systems for and methods of using fold gratings for dual axis expansion
US11300795B1 (en) 2009-09-30 2022-04-12 Digilens Inc. Systems for and methods of using fold gratings coordinated with output couplers for dual axis expansion
US8233204B1 (en) 2009-09-30 2012-07-31 Rockwell Collins, Inc. Optical displays
US8659826B1 (en) 2010-02-04 2014-02-25 Rockwell Collins, Inc. Worn display system and method without requiring real time tracking for boresight precision
US9021175B2 (en) 2010-08-24 2015-04-28 International Business Machines Corporation Method for reordering access to reduce total seek time on tape media
WO2012136970A1 (en) 2011-04-07 2012-10-11 Milan Momcilo Popovich Laser despeckler based on angular diversity
US10670876B2 (en) 2011-08-24 2020-06-02 Digilens Inc. Waveguide laser illuminator incorporating a despeckler
WO2016020630A2 (en) 2014-08-08 2016-02-11 Milan Momcilo Popovich Waveguide laser illuminator incorporating a despeckler
EP2748670B1 (en) 2011-08-24 2015-11-18 Rockwell Collins, Inc. Wearable data display
US8634139B1 (en) 2011-09-30 2014-01-21 Rockwell Collins, Inc. System for and method of catadioptric collimation in a compact head up display (HUD)
US8937772B1 (en) 2011-09-30 2015-01-20 Rockwell Collins, Inc. System for and method of stowing HUD combiners
US9599813B1 (en) 2011-09-30 2017-03-21 Rockwell Collins, Inc. Waveguide combiner system and method with less susceptibility to glare
US9366864B1 (en) 2011-09-30 2016-06-14 Rockwell Collins, Inc. System for and method of displaying information without need for a combiner alignment detector
US8749890B1 (en) 2011-09-30 2014-06-10 Rockwell Collins, Inc. Compact head up display (HUD) for cockpits with constrained space envelopes
US9715067B1 (en) 2011-09-30 2017-07-25 Rockwell Collins, Inc. Ultra-compact HUD utilizing waveguide pupil expander with surface relief gratings in high refractive index materials
US8903207B1 (en) 2011-09-30 2014-12-02 Rockwell Collins, Inc. System for and method of extending vertical field of view in head up display utilizing a waveguide combiner
US20150010265A1 (en) 2012-01-06 2015-01-08 Milan, Momcilo POPOVICH Contact image sensor using switchable bragg gratings
US8830588B1 (en) 2012-03-28 2014-09-09 Rockwell Collins, Inc. Reflector and cover glass for substrate guided HUD
US9523852B1 (en) 2012-03-28 2016-12-20 Rockwell Collins, Inc. Micro collimator system and method for a head up display (HUD)
JP6238965B2 (en) 2012-04-25 2017-11-29 ロックウェル・コリンズ・インコーポレーテッド Holographic wide-angle display
US9078091B2 (en) * 2012-05-02 2015-07-07 Nokia Technologies Oy Method and apparatus for generating media based on media elements from multiple locations
US9104462B2 (en) * 2012-08-14 2015-08-11 Alcatel Lucent Method and apparatus for providing traffic re-aware slot placement
US9933684B2 (en) * 2012-11-16 2018-04-03 Rockwell Collins, Inc. Transparent waveguide display providing upper and lower fields of view having a specific light output aperture configuration
US9674413B1 (en) 2013-04-17 2017-06-06 Rockwell Collins, Inc. Vision system and method having improved performance and solar mitigation
WO2015015138A1 (en) 2013-07-31 2015-02-05 Milan Momcilo Popovich Method and apparatus for contact image sensing
US9244281B1 (en) 2013-09-26 2016-01-26 Rockwell Collins, Inc. Display system and method using a detached combiner
US10732407B1 (en) 2014-01-10 2020-08-04 Rockwell Collins, Inc. Near eye head up display system and method with fixed combiner
US9519089B1 (en) 2014-01-30 2016-12-13 Rockwell Collins, Inc. High performance volume phase gratings
US9244280B1 (en) 2014-03-25 2016-01-26 Rockwell Collins, Inc. Near eye display system and method for display enhancement or redundancy
US10359736B2 (en) 2014-08-08 2019-07-23 Digilens Inc. Method for holographic mastering and replication
US10241330B2 (en) 2014-09-19 2019-03-26 Digilens, Inc. Method and apparatus for generating input images for holographic waveguide displays
US10088675B1 (en) 2015-05-18 2018-10-02 Rockwell Collins, Inc. Turning light pipe for a pupil expansion system and method
US9715110B1 (en) 2014-09-25 2017-07-25 Rockwell Collins, Inc. Automotive head up display (HUD)
WO2016113534A1 (en) 2015-01-12 2016-07-21 Milan Momcilo Popovich Environmentally isolated waveguide display
US9632226B2 (en) 2015-02-12 2017-04-25 Digilens Inc. Waveguide grating device
US10247943B1 (en) 2015-05-18 2019-04-02 Rockwell Collins, Inc. Head up display (HUD) using a light pipe
US10126552B2 (en) 2015-05-18 2018-11-13 Rockwell Collins, Inc. Micro collimator system and method for a head up display (HUD)
US11366316B2 (en) 2015-05-18 2022-06-21 Rockwell Collins, Inc. Head up display (HUD) using a light pipe
US10108010B2 (en) 2015-06-29 2018-10-23 Rockwell Collins, Inc. System for and method of integrating head up displays and head down displays
EP3359999A1 (en) 2015-10-05 2018-08-15 Popovich, Milan Momcilo Waveguide display
US10598932B1 (en) 2016-01-06 2020-03-24 Rockwell Collins, Inc. Head up display for integrating views of conformally mapped symbols and a fixed image source
EP3433659A1 (en) 2016-03-24 2019-01-30 DigiLens, Inc. Method and apparatus for providing a polarization selective holographic waveguide device
EP3433658B1 (en) 2016-04-11 2023-08-09 DigiLens, Inc. Holographic waveguide apparatus for structured light projection
US11513350B2 (en) 2016-12-02 2022-11-29 Digilens Inc. Waveguide device with uniform output illumination
WO2018129398A1 (en) 2017-01-05 2018-07-12 Digilens, Inc. Wearable heads up displays
US10295824B2 (en) 2017-01-26 2019-05-21 Rockwell Collins, Inc. Head up display with an angled light pipe
CN116149058A (en) 2017-10-16 2023-05-23 迪吉伦斯公司 System and method for multiplying image resolution of pixellated display
US10914950B2 (en) 2018-01-08 2021-02-09 Digilens Inc. Waveguide architectures and related methods of manufacturing
KR20200108030A (en) 2018-01-08 2020-09-16 디지렌즈 인코포레이티드. System and method for high throughput recording of holographic gratings in waveguide cells
WO2020023779A1 (en) 2018-07-25 2020-01-30 Digilens Inc. Systems and methods for fabricating a multilayer optical structure
CN113692544A (en) 2019-02-15 2021-11-23 迪吉伦斯公司 Method and apparatus for providing holographic waveguide display using integrated grating
KR20210134763A (en) 2019-03-12 2021-11-10 디지렌즈 인코포레이티드. Holographic waveguide backlights and related manufacturing methods
CN114207492A (en) 2019-06-07 2022-03-18 迪吉伦斯公司 Waveguide with transmission grating and reflection grating and method for producing the same
EP4004646A4 (en) 2019-07-29 2023-09-06 Digilens Inc. Methods and apparatus for multiplying the image resolution and field-of-view of a pixelated display
US11442222B2 (en) 2019-08-29 2022-09-13 Digilens Inc. Evacuated gratings and methods of manufacturing

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030115410A1 (en) * 1999-06-03 2003-06-19 Lucent Technologies Inc. Method and apparatus for improving file system response time

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7277984B2 (en) * 2004-06-23 2007-10-02 International Business Machines Corporation Methods, apparatus and computer programs for scheduling storage requests

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080222640A1 (en) * 2007-03-07 2008-09-11 International Business Machines Corporation Prediction Based Priority Scheduling
US8185899B2 (en) * 2007-03-07 2012-05-22 International Business Machines Corporation Prediction based priority scheduling
US8448178B2 (en) 2007-03-07 2013-05-21 International Business Machines Corporation Prediction based priority scheduling
US8135924B2 (en) 2009-01-14 2012-03-13 International Business Machines Corporation Data storage device driver
US9298636B1 (en) * 2011-09-29 2016-03-29 Emc Corporation Managing data storage
US20190034452A1 (en) * 2017-07-28 2019-01-31 Chicago Mercantile Exchange Inc. Concurrent write operations for use with multi-threaded file logging
US10642797B2 (en) * 2017-07-28 2020-05-05 Chicago Mercantile Exchange Inc. Concurrent write operations for use with multi-threaded file logging
US11269814B2 (en) * 2017-07-28 2022-03-08 Chicago Mercantile Exchange Inc. Concurrent write operations for use with multi-threaded file logging
US11726963B2 (en) 2017-07-28 2023-08-15 Chicago Mercantile Exchange Inc. Concurrent write operations for use with multi-threaded file logging
US20230350851A1 (en) * 2017-07-28 2023-11-02 Chicago Mercantile Exchange Inc. Concurrent write operations for use with multi-threaded file logging
WO2020135737A1 (en) * 2018-12-28 2020-07-02 杭州海康威视数字技术股份有限公司 Methods, apparatuses, devices and mediums for partition management and data storage and querying

Also Published As

Publication number Publication date
US9223504B2 (en) 2015-12-29
US20090019222A1 (en) 2009-01-15

Similar Documents

Publication Publication Date Title
US9223504B2 (en) Method and system for placement of logical data stores to minimize request response time
US9575664B2 (en) Workload-aware I/O scheduler in software-defined hybrid storage system
US8380947B2 (en) Storage application performance matching
US11048411B2 (en) Method of consolidating data streams for multi-stream enabled SSDs
Balakrishnan et al. Pelican: A building block for exascale cold data storage
Lee et al. File assignment in parallel I/O systems with minimal variance of service time
US6948042B2 (en) Hierarchical storage apparatus and control apparatus thereof
US8327103B1 (en) Scheduling data relocation activities using configurable fairness criteria
US20080222311A1 (en) Management of shared storage I/O resources
US20110161964A1 (en) Utility-Optimized Scheduling of Time-Sensitive Tasks in a Resource-Constrained Environment
US20070300035A1 (en) Systems and methods of allocating a zone bit recorded disk drive
US20060288184A1 (en) Admission control in data storage devices
US20020065833A1 (en) System and method for evaluating changes in performance arising from reallocation of files among disk storage units
JP2015517147A (en) System, method and computer program product for scheduling processing to achieve space savings
US9858959B2 (en) Adaptively mounting and unmounting removable storage media based on monitoring requests and states of storage drives and the storage media
US10346094B2 (en) Storage system, storage device, and hard disk drive scheduling method
JP2003005920A (en) Storage system and data rearranging method and data rearranging program
JPH09258907A (en) Highly available external storage device having plural storage disk parts
US10686721B2 (en) Storage device access mediation
US6772285B2 (en) System and method for identifying busy disk storage units
Skourtis et al. QBox: guaranteeing I/O performance on black box storage systems
Verma et al. General store placement for response time minimization in parallel disks
JP5415338B2 (en) Storage system, load balancing management method and program thereof
Verma et al. On store placement for response time minimization in parallel disks
Tarasov et al. Efficient I/O scheduling with accurately estimated disk drive latencies

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VERMA, AKSHAT;ANAND, ASHOK;REEL/FRAME:018742/0816

Effective date: 20061229

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION