US8458399B2 - Methods and structure for determining cache size in a storage system - Google Patents
- Publication number: US8458399B2
- Application number: US12/948,321
- Authority
- US
- United States
- Prior art keywords
- cache memory
- cache
- determining
- memory size
- storage system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G06F12/0646 — Memory configuration or reconfiguration (within G06F12/06, addressing a physical block of locations)
- G06F11/3409 — Recording or statistical evaluation of computer activity for performance assessment
- G06F11/3485 — Performance evaluation by tracing or monitoring for I/O devices
- G06F12/0802 — Addressing of a memory level in which access to the desired data or data block requires associative addressing means, e.g. caches
- G06F2201/81 — Threshold
- G06F2201/88 — Monitoring involving counting
- G06F2201/885 — Monitoring specific for caches
Definitions
- The invention relates generally to storage systems utilizing cache memory, and more specifically to methods and structures within the storage system for analyzing cache memory utilization to determine a desired cache memory size for improved performance.
- Cache memory is used in storage systems to improve the speed of processing I/O requests. Data written to the storage system may be stored in the cache memory so that subsequent read requests for the same data may be completed using the data in cache memory rather than through the slower access typical of data stored on the storage devices (e.g., disk drives) of the storage system.
- Cache memory for these storage subsystems is typically based on semiconductor random access memory (RAM) technology.
- Some storage systems use other tiers of data storage in order to optimize systems for performance or cost.
- Solid state devices (SSDs) based on flash memory technology can be used as a medium to store information that can be accessed much faster than information stored on typical rotating magnetic or optical hard disk drives (HDDs).
- The cache management routines typically contain mechanisms to track use of data. These mechanisms typically include a list of accessed data blocks, kept in least recently used (LRU) order, for determining which blocks of data should be stored in the higher speed cache memory.
- The cache memory size can be adjusted by operation of the storage system by re-allocating a memory subsystem to use more or less of the available memory for the caching functions.
- The size of the cache may be determined at the time of manufacture of the storage system but may be upgraded by field personnel or by end users.
- The collection of data that is frequently accessed, and thus may benefit from being stored in cache memory, may be referred to as the "working set". If the working set size exceeds the present size of the cache memory, undesirable thrashing may take place wherein the storage system is frequently swapping data in and out of the cache memory. In such a case, an increased cache size may be desired to reduce the thrashing. However, as noted above, determining this size a priori for a particular storage system application is a difficult problem.
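The working-set notion above can be made concrete with a small sketch. The patent does not define a specific estimator, so the helper names and the windowed-distinct-blocks definition below are assumptions for illustration only: the working set over a window of recent accesses is simply the number of distinct blocks referenced in that window.

```python
def working_set_size(accesses, window):
    """Estimate the working-set size as the number of distinct block
    addresses referenced within the last `window` accesses.

    Illustrative estimator only; the patent text does not prescribe a
    particular working-set computation.
    """
    recent = accesses[-window:] if window > 0 else []
    return len(set(recent))

def thrashing_likely(accesses, window, cache_lines):
    # If the working set exceeds the number of cache lines, the cache
    # cannot hold all hot blocks at once and thrashing is likely.
    return working_set_size(accesses, window) > cache_lines
```

If the estimate repeatedly exceeds the present number of cache lines, a larger cache is a candidate remedy, matching the reasoning above.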
- the present invention solves the above and other problems, thereby advancing the state of the useful arts, by providing methods and structure for analyzing I/O operations and associated use of cache memory to determine a preferred size for the cache memory of the storage system. Based on the analysis, features and aspects hereof may advise an administrator of the analysis results to permit an administrator to reconfigure the storage system appropriately. Other features and aspects hereof utilize the analysis results within the storage system to automatically reconfigure the size of the cache memory to improve performance of the storage system.
- A method, and a computer readable medium embodying the method, are provided.
- The method is operable in a storage system having a cache memory.
- The method determines a desired cache memory size and reconfigures the storage system to utilize the desired cache memory size.
- The method comprises tracking usage of cache memory in the storage system. The tracking gathers usage information for more data than can fit in the cache memory.
- The method determines the desired cache memory size for the storage system based on the gathered usage information and uses the desired cache memory size to reconfigure the size of the cache memory of the storage system.
- Another aspect hereof provides a method operable in a storage system having a cache memory.
- The cache memory has a number of cache lines, the number of which defines the present cache memory size.
- The method determines a desired cache memory size and reconfigures the storage system to utilize the desired cache memory size.
- The method comprises detecting a request to access a cache line in the cache memory and determining whether the requested cache line is a new cache line in the cache memory. Responsive to a determination that the request is to a new cache line, the method performs the additional steps of: incrementing a counter; creating a new cache history entry, wherein the new cache history entry comprises information associating it with the new cache line; and storing the present count value of the counter in the new cache history entry.
- Responsive to a determination that the request is to an existing cache line, the method performs the additional steps of: locating the cache history entry associated with the requested cache line; subtracting the count value stored in the located cache history entry from the present count value of the counter to generate a delta value; and storing the delta value in the located cache history entry. The method then determines the desired cache memory size for the storage system based on the delta values in the cache history entries and uses the desired cache memory size to reconfigure the size of the cache memory of the storage system.
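The two branches just described can be sketched in Python. The class and field names are my own, and the claims do not mandate any particular data layout; updating the stored count on a hit (so a later hit measures the most recent interval) is also an assumption, not something the claim language specifies.

```python
class CacheUsageTracker:
    """Sketch of the counter/delta tracking described above (hypothetical names)."""

    def __init__(self):
        self.new_line_counter = 0  # incremented once for each new cache line
        self.history = {}          # cache line id -> {"count": ..., "delta": ...}

    def on_access(self, line_id):
        if line_id not in self.history:
            # New cache line: increment the counter and record its value
            # in a new cache history entry.
            self.new_line_counter += 1
            self.history[line_id] = {"count": self.new_line_counter, "delta": None}
        else:
            # Existing cache line: delta = number of new cache lines added
            # since the last hit on this line.
            entry = self.history[line_id]
            entry["delta"] = self.new_line_counter - entry["count"]
            # Assumed: refresh the stored count so the next delta measures
            # the most recent re-reference interval.
            entry["count"] = self.new_line_counter
```

A line that is re-hit soon after insertion gets a small delta; a line re-hit only after many insertions gets a large delta, which is the raw material for the sizing analysis below.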
- FIG. 1 is a block diagram of a system enhanced in accordance with features and aspects hereof to provide substantially automated determination of a desired cache memory size in a storage controller based on cache memory usage information and to provide reconfiguration of the storage controller to utilize the desired cache memory size.
- FIGS. 2 through 5 are flowcharts describing exemplary methods in accordance with features and aspects hereof to provide substantially automated determination of a desired cache memory size in a storage controller based on cache memory usage information and to provide reconfiguration of the storage controller to utilize the desired cache memory size.
- FIG. 6 is a block diagram of a computer system that uses a computer readable medium to load programmed instructions for performing methods in accordance with features and aspects hereof to provide substantially automated determination of a desired cache memory size in a storage controller based on cache memory usage information and to provide reconfiguration of the storage controller to utilize the desired cache memory size.
- FIG. 1 is a block diagram of an exemplary system 100 enhanced in accordance with features and aspects hereof to provide substantially automated determination and reconfiguration of a desired cache memory size in storage controller 102 .
- System 100 comprises storage controller 102 adapted for coupling with one or more host systems 104 via path 150 and adapted for coupling with one or more storage devices 106 via path 156 .
- Host systems 104 may be any suitable computing systems adapted to generate I/O write requests to store data on storage devices 106 of storage system 100 .
- Host systems 104 may be personal computers, servers, workstations, etc.
- Storage devices 106 may be any device suitable for storing data received in host supplied I/O write requests including, for example, rotating magnetic and/or optical disk drives as well as semiconductor storage devices (e.g., flash drives, RAM disks, etc.).
- I/O processor 108 within storage controller 102 receives I/O requests from host systems 104 via front end interface 110 and communication paths 150 and 152 .
- Front end interface 110 may be any suitable circuitry and/or logic adapted for coupling storage controller 102 with host systems 104 .
- Front end interface 110 may provide network interface capabilities, such as Ethernet using TCP/IP protocols, to couple storage system 100, through storage controller 102, to a network communication path 150.
- I/O processor 108 may communicate with storage devices 106 through back end interface 112 and communication paths 154 and 156 .
- Back end interface 112 comprises any suitable circuits and/or logic for coupling storage controller 102 with storage devices 106 .
- Back end interface 112 may provide suitable circuits and logic for coupling to storage devices 106 utilizing parallel or serial attached SCSI, Serial Advanced Technology Attachment (SATA), Fibre Channel, or any of several other well-known communication media and protocols suitable for use by storage devices 106.
- I/O processor 108 may comprise any suitable general or special purpose processor (CPU) and associated program memory adapted to store programmed instructions to be executed by the CPU. In general, I/O processor 108 may utilize cache memory 114 via communication path 158 to improve performance in processing of I/O requests. Write data supplied by host systems 104 in association with I/O write requests may be stored in cache memory 114 as well as persistently stored in storage devices 106 . Subsequent I/O read requests may then be completed with reference to data stored in cache memory 114 rather than requiring slower access to storage devices 106 . Cache size analysis and configuration processing element 118 monitors, via path 162 , utilization of cache memory 114 by I/O processor 108 .
- As data is initially added into cache memory 114, and as previously entered data is subsequently accessed in cache memory 114 (by I/O processor 108), cache size analysis and configuration processing element 118 generates and updates information in cache usage memory 116. Specifically, cache history entries are created and updated in cache usage memory 116 responsive to utilization of cache memory 114 by I/O processor 108. A plurality of such cache history entries is collectively referred to herein as "usage data" or "usage information". In one exemplary embodiment, cache size analysis and configuration processing element 118 may be implemented as a function programmed in the programmable instructions executed by I/O processor 108.
- Cache size analysis and configuration processing element 118 may be implemented as suitably designed custom logic circuits adapted to monitor utilization of cache memory 114 by I/O processor 108 and to generate, update, and analyze cache history entries stored in cache usage memory 116.
- Cache size analysis and configuration processing element 118 may analyze the cache history entries stored in cache usage memory 116 to determine a desired cache memory size for ongoing use of cache memory 114 by storage controller 102. Based on this analysis, the size of cache memory 114 may be increased or decreased accordingly. In one exemplary embodiment, the desired cache memory size may be determined by automated processing of storage controller 102, and cache memory 114 may be automatically reconfigured in accordance with the desired cache memory size. In other exemplary embodiments, analysis results generated by element 118 may be output to an administrative user to permit that user to determine the desired cache memory size. A cache memory size so determined by an administrative user may then be communicated back to storage controller 102 to permit element 118 to reconfigure the size of cache memory 114.
- Element 118 may be continually operable as a background function of controller 102 while I/O requests are processed. Any of several triggering events may initiate analytical processing by element 118 to determine and configure a new desired cache memory size. For example, element 118 may be operable on a periodic basis to evaluate historical utilization of cache memory 114 based on usage data (e.g., cache history entries) stored in cache usage memory 116. In addition, element 118 may be operable responsive to a user request to analyze the utilization of cache memory 114 to determine a desired cache memory size. Still further, element 118 may be operable responsive to detection of thrashing in operation of cache memory 114 by I/O processor 108.
- Thrashing in utilization of cache memory 114 refers to frequent replacement of cache blocks under the least recently used (LRU) cache block replacement algorithms typically performed by I/O processor 108. Where LRU algorithms cause frequent replacement of cache blocks, analysis by element 118 may reveal that a different cache memory size would help reduce such thrashing.
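For context, a minimal LRU line cache of the kind the passage assumes might look like the following. This is an illustrative sketch, not the patent's mechanism; the eviction counter is an added convenience, since a rising eviction count on a fixed-size cache is one crude signal of the thrashing described.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU block cache of the kind the text assumes (illustrative only)."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()  # block id -> data, least recently used first
        self.evictions = 0          # frequent evictions suggest thrashing

    def access(self, block_id, data=None):
        if block_id in self.lines:
            self.lines.move_to_end(block_id)  # mark as most recently used
            return self.lines[block_id]
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)    # evict the least recently used line
            self.evictions += 1
        self.lines[block_id] = data
        return data
```

Note that pure LRU keeps only recency information; the frequency-of-use analysis enabled by the cache history entries, discussed later in the text, can supplement exactly this blind spot.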
- FIG. 2 is a flowchart describing an exemplary method for determining a desired cache memory size and for reconfiguring the size of a cache memory of the storage system responsive to the determination of a desired cache memory size of a storage controller in accordance with features and aspects hereof.
- The method of FIG. 2 may be operable, for example, in a storage controller such as storage controller 102 of FIG. 1. More specifically, the method of FIG. 2 may be operable in a cache size analysis and configuration processing element of a storage controller, such as element 118 of FIG. 1.
- Such an element may be implemented as suitably programmed instructions to be executed by a general or special purpose processor of the storage controller and/or may be implemented as suitably designed custom logic circuits for providing the desired cache memory size analysis and configuration processing.
- Usage data is gathered by tracking use of the cache memory of the storage system.
- The usage data may be gathered for a predetermined period of time or may be gathered as a continual background process of the storage system.
- The usage data may comprise cache history entries indicating usage of the cache memory over some period of time during which the usage data was gathered.
- The cache history entries may be similar in structure to other meta-data structures commonly utilized by the storage system.
- Each cache history entry may be similar to the meta-data structure used for meta-data associated with each cache line or cache block of the cache memory. Additional information, as discussed further herein below, may be incorporated with each such cache history entry.
- The number of such cache history entries will be substantially larger than the number of meta-data entries used to describe the cache lines or cache blocks of the cache memory. This allows the cache history meta-data to accumulate a significant volume of historical information regarding utilization of the cache memory regardless of the current size of the cache memory.
- A desired cache memory size is determined based on the usage data gathered by step 200.
- The desired cache memory size may be determined automatically by processing of the storage system and/or may be determined by interaction with an administrative user.
- The administrative user is provided with information derived from the usage data gathered at step 200.
- The user then returns revised configuration information indicating the desired cache memory size.
- Step 204 then utilizes the desired cache memory size so determined to reconfigure the present size of the cache memory.
- The cache memory size may be reconfigured by reconfiguring or designating portions of memory of the storage system to be used in caching.
- The reconfigured cache memory size may increase or decrease the size of the cache memory in accordance with the determined desired cache memory size.
- Step 204 may comprise informing an administrative user that additional cache memory features may be purchased and may be beneficially utilized in the storage system.
- FIG. 3 is a flowchart describing exemplary additional details of the processing of step 200 of FIG. 2 to gather usage data by tracking use of the cache memory.
- A newly detected cache line access (by the I/O processor) is analyzed to determine whether it represents a "hit" on an existing cache line of the cache memory or rather represents the addition of a new cache line to cache memory. If the detected cache access represents the addition of a new cache line, step 302 increments a system counter representing the count of new cache line hits.
- A new cache history entry is generated and added to the usage data. The new cache history entry comprises information identifying the newly added cache line.
- At step 306, the present count value of the system counter indicating the number of new cache line hits is stored in the newly generated cache history entry. Processing of step 200 is then complete with respect to this cache line hit, and the storage system continues monitoring for further cache history information as an ongoing background task.
- Step 310 locates the cache history entry corresponding to the cache line for the existing cache line hit.
- The cache history entries include information identifying the cache line to which each entry corresponds. Such information may include, for example, a starting block address, an extent, etc. to indicate the data associated with the existing cache line.
- Step 312 determines a "delta value" as the difference between the stored count value in the located cache history entry and the present value of the system counter for new cache line hits. This difference indicates the number of new cache lines added to the cache memory since the last "hit" on this cache line. This delta value is then stored in the located cache history entry at step 314.
- The gathered cache history entries may then be used as a basis for determining a desired cache memory size.
- FIG. 4 is a flowchart providing exemplary additional details of processing of step 202 of FIG. 2 to determine a desired cache memory size based on the gathered usage data.
- The usage data, or information derived from the usage data, is transmitted to an administrative user at step 400.
- The administrative user determines (by any suitable means) a desired cache memory size based on the transmitted information.
- The storage system then receives configuration information from the administrative user, where the configuration information comprises the desired cache memory size.
- FIG. 5 is a flowchart describing exemplary additional details of another embodiment of the processing of step 202 to determine a desired cache memory size based on the gathered usage information.
- The storage system automatically analyzes all the cache history entries to determine a desired cache memory size without the need for administrative user intervention or input.
- Steps 500 through 506 are iteratively operable to process each of the plurality of cache history entries that comprise the usage data/information gathered over some period of time.
- A first/next cache history entry is retrieved.
- The currently retrieved cache history entry is processed/analyzed, and at step 506 the method determines whether additional cache history entries remain to be analyzed. If so, processing loops back to step 500 until all cache history entries have been processed by step 502.
- Step 502 counts the number of cache history entries that meet certain predefined threshold criteria relating to cache memory size. For example, the number of cache history entries having a delta value exceeding a predetermined threshold value may be determined as a percentage of the total number of cache history entries. The percentage of cache history entries having delta values exceeding the predetermined threshold delta value may then be compared to a threshold percentage to determine whether an increase in the cache memory size is warranted and would be beneficial or, conversely, whether a reduction in cache memory size would not be harmful to performance of the storage system.
- Step 508 determines a desired cache memory size based on the various counts determined by step 502 and based on one or more threshold ranges of values for such counts.
- The storage system can determine a distribution of the delta values for all of the cache history entries.
- The delta values represent the number of blocks (e.g., cache lines) added to the cache memory between accesses to a particular cache line or cache block. This distribution can then be used to determine the desired cache memory size for the storage system.
- If, for example, most cache history entries have delta values well within the number of cache lines in the cache memory as presently sized, the storage system may reasonably conclude that a larger cache memory size would be of little or no benefit. If, on the other hand, a significant portion (e.g., 90%) of the cache history entries has a delta value of approximately double the number of cache lines in the cache memory as presently sized, the storage system may determine that performance would be significantly benefited by doubling the current cache memory size.
- This exemplary heuristic rule may be easily applied by determining the number of cache history entries having a delta value exceeding a predetermined threshold and then determining whether that number represents a percentage of the total cache history entries that exceeds a predetermined threshold percentage.
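Under stated assumptions — the delta threshold and percentage threshold are tunables the patent leaves unspecified, and the function name is invented — the heuristic reduces to a few lines:

```python
def cache_increase_warranted(deltas, delta_threshold, pct_threshold):
    """Apply the heuristic described above: if the fraction of cache history
    entries whose delta value exceeds delta_threshold is itself above
    pct_threshold, a larger cache memory is likely to help.

    Both thresholds are assumed tunables; the patent does not fix values.
    """
    measured = [d for d in deltas if d is not None]  # skip lines never re-hit
    if not measured:
        return False
    exceeding = sum(1 for d in measured if d > delta_threshold)
    return exceeding / len(measured) > pct_threshold
```

A natural choice for `delta_threshold` would be the present number of cache lines, since a delta larger than that means the line was evicted (under LRU) before it was re-referenced.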
- Cache memory may be installed or configured in a storage system only in discrete quantum units of memory, with a limited number of such configuration options available.
- For example, a system may have eight possible cache sizes that can be configured.
- The cache memory size analysis and configuration logic of the storage system may utilize a table associating each of the possible cache sizes with a corresponding threshold value.
- The analysis of the cache history entries may then determine a comparative value to be applied to the table entries to determine which of the eight possible cache memory sizes would be most beneficial.
- Each table entry may specify a cache memory size and a corresponding range of values for the percentage of cache history entries having a delta value at or below some predetermined threshold value.
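One way to realize such a table is to pick the smallest configurable size whose capacity covers a target fraction of the observed delta values. The sizes and the 90% target below are invented purely for illustration; the patent does not specify values.

```python
# Hypothetical configuration: the discrete cache sizes (in cache lines) a
# system supports, and the fraction of re-reference distances (delta values)
# the chosen size should cover.  Both are illustrative, not from the patent.
POSSIBLE_SIZES = [1024, 2048, 4096, 8192]
TARGET_COVERAGE = 0.90

def choose_cache_size(deltas, sizes=POSSIBLE_SIZES, target=TARGET_COVERAGE):
    """Return the smallest configurable size such that at least `target`
    of the observed delta values fall at or below that size."""
    measured = [d for d in deltas if d is not None]
    if not measured:
        return sizes[0]
    for size in sizes:
        covered = sum(1 for d in measured if d <= size) / len(measured)
        if covered >= target:
            return size
    return sizes[-1]  # even the largest size cannot cover the target fraction
```

The coverage computed for each candidate size plays the role of the "comparative value" applied to the table entries in the text.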
- The cache usage information comprising the plurality of cache history entries may also be utilized in conjunction with, or as an alternative to, LRU cache block replacement algorithms.
- For example, the usage information may be utilized to identify particular cache lines that are infrequently used even if those cache lines were very recently used. Such frequency-of-use analysis may therefore enhance or replace the simple LRU analysis used in most cache management approaches.
- Embodiments of the invention can take the form of an entirely hardware (i.e., circuits) embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements.
- In one embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
- FIG. 6 is a block diagram depicting a storage controller computer 600 adapted to provide features and aspects hereof by executing programmed instructions and accessing data stored on a computer readable storage medium 612 .
- Embodiments of the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium 612 providing program code for use by, or in connection with, a computer or any instruction execution system.
- A computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the computer, instruction execution system, apparatus, or device.
- The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
- Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk.
- Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
- A storage controller computer 600 suitable for storing and/or executing program code will include at least one processor 602 coupled directly or indirectly to memory elements 604 through a system bus 650.
- The memory elements 604 can include local memory employed during actual execution of the program code, bulk storage, and cache memories that provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
- Cache and cache usage memory 610 may be accessed by processor 602 via bus 650 to manage usage data during processing of I/O requests.
- Input/output interface 606 couples the controller to I/O devices to be controlled (e.g., storage devices, etc.). Host system interface 608 may also couple the computer 600 to other data processing systems.
Abstract
Description
Claims (23)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/948,321 US8458399B2 (en) | 2010-11-17 | 2010-11-17 | Methods and structure for determining cache size in a storage system |
Publications (2)
Publication Number | Publication Date |
---|---|
US20120124295A1 US20120124295A1 (en) | 2012-05-17 |
US8458399B2 true US8458399B2 (en) | 2013-06-04 |
Family
ID=46048864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/948,321 Active 2031-08-04 US8458399B2 (en) | 2010-11-17 | 2010-11-17 | Methods and structure for determining cache size in a storage system |
Country Status (1)
Country | Link |
---|---|
US (1) | US8458399B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9697126B2 (en) | 2014-11-25 | 2017-07-04 | Qualcomm Incorporated | Generating approximate usage measurements for shared cache memory systems |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8782219B2 (en) * | 2012-05-18 | 2014-07-15 | Oracle International Corporation | Automated discovery of template patterns based on received server requests |
US9160650B2 (en) * | 2013-06-17 | 2015-10-13 | Futurewei Technologies, Inc. | Enhanced flow entry table cache replacement in a software-defined networking switch |
US9411814B2 (en) * | 2014-01-06 | 2016-08-09 | Dropbox, Inc. | Predictive caching and fetch priority |
WO2017052595A1 (en) * | 2015-09-25 | 2017-03-30 | Hewlett Packard Enterprise Development Lp | Variable cache for non-volatile memory |
US10191849B2 (en) * | 2015-12-15 | 2019-01-29 | Vmware, Inc. | Sizing cache data structures using fractal organization of an ordered sequence |
CN112241390B (en) * | 2020-10-22 | 2022-08-30 | 上海兆芯集成电路有限公司 | Host interconnection apparatus and method thereof |
KR20230075914A (en) * | 2021-11-23 | 2023-05-31 | 삼성전자주식회사 | Processing apparatus and operating method thereof and electronic apparatus |
- 2010-11-17 US US12/948,321 patent/US8458399B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5133060A (en) * | 1989-06-05 | 1992-07-21 | Compuadd Corporation | Disk controller includes cache memory and a local processor which limits data transfers from memory to cache in accordance with a maximum look ahead parameter |
US5257370A (en) * | 1989-08-29 | 1993-10-26 | Microsoft Corporation | Method and system for optimizing data caching in a disk-based computer system |
US5752255A (en) * | 1992-10-01 | 1998-05-12 | Digital Equipment Corporation | Dynamic non-coherent cache memory resizing mechanism |
US20050071599A1 (en) * | 2003-09-30 | 2005-03-31 | Modha Dharmendra Shantilal | Storage system and method for dynamically allocating cache space among different workload classes |
US7107403B2 (en) | 2003-09-30 | 2006-09-12 | International Business Machines Corporation | System and method for dynamically allocating cache space among different workload classes that can have different quality of service (QoS) requirements where the system and method may maintain a history of recently evicted pages for each class and may determine a future cache size for the class based on the history and the QoS requirements |
US20090083558A1 (en) | 2007-09-26 | 2009-03-26 | Hitachi, Ltd. | Storage apparatus and power saving method thereof |
Non-Patent Citations (2)
Title |
---|
Chen et al., "Data Access History Cache and Associated Data Prefetching Mechanisms," Nov. 2007, © 2007 Association for Computing Machinery, SC07, Nov. 10-16, 2007, Reno, Nevada, USA. |
Megiddo et al., "ARC: A Self-Tuning, Low Overhead Replacement Cache," USENIX File & Storage Technologies Conference (FAST), Mar. 31, 2003, San Francisco, CA. |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9697126B2 (en) | 2014-11-25 | 2017-07-04 | Qualcomm Incorporated | Generating approximate usage measurements for shared cache memory systems |
Also Published As
Publication number | Publication date |
---|---|
US20120124295A1 (en) | 2012-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8458399B2 (en) | Methods and structure for determining cache size in a storage system | |
US8972661B2 (en) | Dynamically adjusted threshold for population of secondary cache | |
US9760294B2 (en) | Computer system, storage management computer, and storage management method | |
US10061702B2 (en) | Predictive analytics for storage tiering and caching | |
US8261018B2 (en) | Managing data storage systems | |
US9514039B2 (en) | Determining a metric considering unallocated virtual storage space and remaining physical storage space to use to determine whether to generate a low space alert | |
US9317207B2 (en) | Cache migration | |
US9285999B2 (en) | Cache migration | |
EP2772853B1 (en) | Method and device for building memory access model | |
JP2013509658A (en) | Allocation of storage memory based on future usage estimates | |
JP2013511081A (en) | Method, system, and computer program for destaging data from a cache to each of a plurality of storage devices via a device adapter | |
US9448727B2 (en) | File load times with dynamic storage usage | |
US9971534B2 (en) | Authoritative power management | |
US20180267879A1 (en) | Management computer, performance monitoring method, and computer system | |
US20180121237A1 (en) | Life cycle management of virtualized storage performance | |
US20230129647A1 (en) | Distribution of quantities of an increased workload portion into buckets representing operations | |
US10146783B2 (en) | Using file element accesses to select file elements in a file system to defragment | |
US20160004465A1 (en) | Caching systems and methods with simulated nvdram | |
US20160077741A1 (en) | Capturing demand on storage capacity and performance capability | |
US20160077886A1 (en) | Generating workload windows | |
US11222004B2 (en) | Management of a database with relocation of data units thereof | |
US11989415B2 (en) | Enabling or disabling data reduction based on measure of data overwrites | |
CN117891416B (en) | Unmapping operation optimization method and device based on SCSI protocol and readable medium | |
JP2023044934A (en) | Storage device and cache control method | |
US9176854B2 (en) | Presenting enclosure cache as local cache in an enclosure attached server |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LSI CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUMLICEK, DONALD R.;SNIDER, TIMOTHY R.;MCKEAN, BRIAN D.;SIGNING DATES FROM 20101110 TO 20101115;REEL/FRAME:025369/0150 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT Free format text: PATENT SECURITY AGREEMENT;ASSIGNORS:LSI CORPORATION;AGERE SYSTEMS LLC;REEL/FRAME:032856/0031 Effective date: 20140506 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LSI CORPORATION;REEL/FRAME:035390/0388 Effective date: 20140814 |
|
AS | Assignment |
Owner name: LSI CORPORATION, CALIFORNIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032856-0031);ASSIGNOR:DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT;REEL/FRAME:037684/0039 Effective date: 20160201 Owner name: AGERE SYSTEMS LLC, PENNSYLVANIA Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS (RELEASES RF 032856-0031);ASSIGNOR:DEUTSCHE BANK AG NEW YORK BRANCH, AS COLLATERAL AGENT;REEL/FRAME:037684/0039 Effective date: 20160201 |
|
AS | Assignment |
Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:037808/0001 Effective date: 20160201 |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041710/0001 Effective date: 20170119 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047230/0133 Effective date: 20180509 |
|
AS | Assignment |
Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE OF MERGER TO 09/05/2018 PREVIOUSLY RECORDED AT REEL: 047230 FRAME: 0133. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047630/0456 Effective date: 20180905 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |