US7464246B2 - System and method for dynamic sizing of cache sequential list - Google Patents

System and method for dynamic sizing of cache sequential list

Info

Publication number
US7464246B2
US7464246B2 US10/954,937 US95493704A
Authority
US
United States
Prior art keywords
sequential
data list
cache
list
random
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/954,937
Other languages
English (en)
Other versions
US20060069871A1 (en)
Inventor
Binny Sher Gill
Dharmendra Shantilal Modha
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GILL, BINNY SHER, MODHA, DHARMENDRA SHANTILAL
Priority to US10/954,937 priority Critical patent/US7464246B2/en
Priority to TW094133273A priority patent/TWI393004B/zh
Priority to CNB2005101070534A priority patent/CN100442249C/zh
Publication of US20060069871A1 publication Critical patent/US20060069871A1/en
Priority to US12/032,851 priority patent/US7509470B2/en
Priority to US12/033,105 priority patent/US7533239B2/en
Priority to US12/060,431 priority patent/US7793065B2/en
Priority to US12/060,945 priority patent/US7707382B2/en
Publication of US7464246B2 publication Critical patent/US7464246B2/en
Application granted granted Critical
Expired - Fee Related legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0862 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12 Replacement control
    • G06F12/121 Replacement control using replacement algorithms
    • G06F12/123 Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache

Definitions

  • The present invention relates generally to data caching.
  • Caching is a fundamental technique for hiding delays in writing data to and reading data from storage, such as hard disk drive storage. These delays can be referred to as input/output (I/O) latency. Because caching is effective in hiding I/O latency, it is widely used in storage controllers, databases, file systems, and operating systems.
  • A cache thus may be defined as a high speed memory or storage device that is used to reduce the effective time required to read data from or write data to a lower speed memory or device.
  • A modern storage controller cache typically contains volatile memory used as a read cache and non-volatile memory used as a write cache.
  • The effectiveness of a read cache depends upon its “hit” ratio, that is, the fraction of requests that are served from the cache without necessitating a disk trip (a disk trip represents a “miss,” i.e., a failure to find the data in cache).
  • The present invention is focused on improving the performance of a read cache, i.e., increasing the hit ratio or, equivalently, minimizing the miss ratio.
  • Cache is managed in uniformly sized units called pages.
  • Demand paging requires a page to be copied into cache from the slower memory (e.g., a disk) only in the event of a cache miss of the page, i.e., only if the page was required by the host and could not be found in cache, necessitating a relatively slower disk access.
  • Cache management is then relatively simple, seeking to intelligently select a page from cache for replacement when the cache is full and a new page is to be stored in cache owing to a “miss”.
  • One well-known policy simply replaces the page whose next access is farthest in the future with the new page.
  • Another policy (least recently used, or LRU) replaces the least recently used page with the new page.
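  • As a non-limiting illustration (our sketch in C++, not the patent's pseudocode), demand paging with LRU replacement can be expressed compactly: a hit moves the page to the MRU position, and a miss on a full cache evicts the LRU page.

      #include <cstdint>
      #include <list>
      #include <unordered_map>

      // Minimal LRU cache of fixed capacity: on a miss, the least recently
      // used page is evicted to make room for the newly demanded page.
      class LruCache {
      public:
          explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

          // Returns true on a hit; on a miss the page is staged in and the
          // LRU page is replaced if the cache is full (demand paging).
          bool access(std::uint64_t pageId) {
              auto it = index_.find(pageId);
              if (it != index_.end()) {            // hit: move to the MRU end
                  lru_.splice(lru_.begin(), lru_, it->second);
                  return true;
              }
              if (lru_.size() == capacity_) {      // miss on a full cache:
                  index_.erase(lru_.back());       // evict the LRU page
                  lru_.pop_back();
              }
              lru_.push_front(pageId);             // insert at the MRU position
              index_[pageId] = lru_.begin();
              return false;
          }

      private:
          std::size_t capacity_;
          std::list<std::uint64_t> lru_;           // front = MRU, back = LRU
          std::unordered_map<std::uint64_t,
                             std::list<std::uint64_t>::iterator> index_;
      };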
  • The present invention understands that a simpler approach to speculative prefetching can be employed that uses the principle of sequentiality: demanded data (data to be read) are often required as consecutively numbered pages in ascending order without gaps.
  • Sequential file access arises in many contexts, including video-on-demand, database scans, copy, backup, and recovery.
  • Detecting sequentiality is easy, requires very little history information, and can attain nearly 100% predictive accuracy.
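  • To illustrate how little history such detection requires, the following hedged C++ sketch flags a stream as sequential after a short run of consecutively numbered pages; the threshold, the per-stream keying, and all names are assumptions for illustration, not the patent's pseudocode.

      #include <cstdint>
      #include <unordered_map>

      // Per-stream detector that declares sequentiality after a short run
      // of consecutively numbered pages: one page of history per stream.
      class SequentialDetector {
      public:
          bool accessAndTest(std::uint32_t streamId, std::uint64_t pageId) {
              State& s = streams_[streamId];
              // Extend the run on the next consecutive page, else restart it.
              s.run = (s.seen && pageId == s.last + 1) ? s.run + 1 : 1;
              s.last = pageId;
              s.seen = true;
              return s.run >= kRunThreshold;
          }

      private:
          static constexpr unsigned kRunThreshold = 3;  // illustrative value
          struct State {
              std::uint64_t last = 0;
              unsigned run = 0;
              bool seen = false;
          };
          std::unordered_map<std::uint32_t, State> streams_;
      };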
  • The present invention critically observes that when synchronous and asynchronous prefetching strategies are used along with LRU-based caching, and an asynchronous trigger track is accessed, an asynchronous prefetch of the next group of tracks occurs.
  • In an LRU-based cache, this newly fetched group of tracks, along with the asynchronous trigger track, is placed at the MRU end of the list, with the unaccessed tracks within the current prefetch group remaining where they were in the LRU list, hence potentially near the LRU end of the list.
  • The resulting algorithm can violate the so-called stack property, and as a result, when the amount of cache space given to sequentially prefetched data increases, sequential misses do not necessarily decrease.
  • The stack property can be a crucial ingredient in proper cache management.
  • Both of the above problems can be hidden if (i) only synchronous prefetching is used, or (ii) both synchronous and asynchronous prefetching are used and the asynchronous trigger is set to always be the last track in a prefetched group. But the first approach amounts to foregoing all potential benefits of asynchronous prefetching, while the second can result in a sequential miss if the track being prefetched is accessed before it is in the cache.
  • One purpose of the present invention is to avoid violating the stack property without incurring additional sequential misses. More generally, the present invention represents a significant improvement in cache management when sequential prefetch is used.
  • A general purpose computer is programmed according to the inventive steps herein.
  • The invention can also be embodied as an article of manufacture—a machine component—that is used by a digital processing apparatus and that tangibly embodies a program of instructions executable by the digital processing apparatus to execute the present logic.
  • This invention may be realized in a critical machine component that causes a digital processing apparatus to perform the inventive method steps herein.
  • A method for caching data includes maintaining a random data list and a sequential data list, and dynamically establishing a desired size for the sequential data list.
  • The establishing act can include determining whether a least recently used (LRU) portion of the sequential data list is more valuable, in terms of cache misses, than the LRU portion of the random data list and, if so, increasing the desired size, and otherwise decreasing it.
  • The method includes computing a marginal utility of adding space to the sequential data list in terms of sequential misses, and empirically determining a marginal utility of adding space to the random data list. Based on the computing and determining acts, the desired size of the sequential data list is established.
  • The computing act can determine the marginal utility to be equal to a number between s/L and 2s/L, inclusive, wherein s represents the rate of sequential misses for synchronous and asynchronous prefetching and L represents the length of the sequential data list.
  • The empirically determining act may include determining the number of sequential misses between two successive cache hits in the bottom ΔL portion of the random data list. The desired size is increased if the computed marginal utility of adding space to the sequential data list exceeds the empirically determined marginal utility of adding space to the random data list, and otherwise is decreased.
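  • A hedged C++ sketch of this adaptation rule follows; the function and parameter names are ours, and the size of the adaptation step is left to the implementation.

      // s = rate of sequential misses, L = length of the sequential list,
      // deltaL = size of the bottom portion of the random list; step is an
      // implementation-chosen adaptation increment.
      void adaptDesiredSeqListSize(double s, double L, double deltaL,
                                   double step, double& desiredSeqListSize) {
          const double seqMarginalUtility    = 2.0 * s / L;  // in [s/L, 2s/L]
          const double randomMarginalUtility = 1.0 / deltaL; // empirical
          if (seqMarginalUtility > randomMarginalUtility)
              desiredSeqListSize += step;  // sequential space is more valuable
          else
              desiredSeqListSize -= step;  // random space is more valuable
      }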
  • The method may further include moving at least one newly prefetched group of tracks, along with an asynchronous trigger track in the group of tracks, to a most recently used (MRU) portion of the sequential data list.
  • The method may also include moving at least some unaccessed tracks in the group of tracks to the MRU portion of the sequential data list. Also, if desired, the method may include determining whether the size of at least a portion of the sequential data list exceeds the desired size, and replacing at least one cache element based thereon.
  • A data system uses a least recently used (LRU) caching scheme which in turn uses synchronous and asynchronous prefetch of data from a data storage device.
  • A computer program product includes means for computing a marginal utility of adding space to a sequential list of a cache in terms of sequential misses. Means are also provided for empirically determining a marginal utility of adding space to a random list in cache. The computer program product has means, responsive to the computing and determining means, for establishing a desired size of the sequential list.
  • A processor is associated with a cache for executing logic to move a newly prefetched group of tracks, along with an asynchronous trigger track in the group of tracks, to a most recently used (MRU) portion of a sequential cache list.
  • The logic also moves unaccessed tracks in the group of tracks to the MRU portion of the sequential cache list.
  • A processor associated with a cache executes logic for determining whether the size of a sequential cache list exceeds a desired size and, based thereon, replacing at least one cache element.
  • FIG. 1 is a block diagram of one non-limiting system in which the present cache management policy can be used;
  • FIG. 2 is a schematic illustration of the random page list and sequential page list in cache;
  • FIG. 3 is a flow chart of the cache size management logic; and
  • FIG. 4 is a flow chart of the cache replacement logic.
  • Referring initially to FIG. 1, a system is shown, generally designated 10, that illustrates one non-limiting environment in which the present invention can be used.
  • The present invention is a system for managing a data cache that caches data from a slower memory.
  • The present invention may be implemented in database systems such as DB2 and Oracle, in RAID-based systems such as the present assignee's “Shark” system, and in other systems, such as individual hard disk drives. Accordingly, it is to be understood that while FIG. 1 illustrates one non-limiting implementation that has a “Shark” architecture, it is but representative of the environments in which the present invention finds use.
  • The present invention may also be implemented in a file system, database system, or other system that must allocate space for variable-sized data objects.
  • The processor or processors (computers) of the present invention may be personal computers made by International Business Machines Corporation (IBM) of Armonk, N.Y., or any computers, including computers sold under trademarks such as AS400, with accompanying IBM Network Stations.
  • The processors 12 may communicate with one or more host computers 14 through an array 16 of host adapters with associated connectors 18. Also, the processor or processors 12 may communicate with slower storage, such as a RAID-configured disk storage system 20, through respective device adapters 22. The processors 12 may have respective non-volatile storages (NVS) 24 for receiving communication from the other processor, as well as a respective, preferably solid state implemented, data cache 26. One or both processors are programmed to execute the logic herein.
  • The flow charts and pseudo code herein illustrate the structure of the present logic executed by the processor(s) 12 as embodied in computer program software.
  • The flow charts and pseudo code illustrate the structures of logic elements, such as computer program code elements or electronic logic circuits, that function according to this invention.
  • The invention is practiced in its essential embodiment by a machine component that renders the logic elements in a form that instructs a digital processing apparatus (that is, a computer) to perform a sequence of function steps corresponding to those shown.
  • The flow charts and/or pseudo code may be embodied in a computer program that is executed by a processor as a series of computer-executable instructions. These instructions may reside, for example, in a program storage device of the system 10.
  • The program storage device may be RAM, or a magnetic or optical disk or diskette, a DASD array, magnetic tape, electronic read-only memory, or another appropriate data storage device.
  • The computer-executable instructions may be lines of compiled C/C++ compatible code.
  • In one non-limiting configuration, each cache 26 has a capacity of eight gigabytes (GB) per cluster, each NVS 24 has a capacity of two GB per cluster, and four 600 MHz PowerPC/RS64IV CPUs are provided per cluster.
  • Sixteen RAID-5 (6+parity+spare) arrays with 72 GB, 10K rpm drives may be used in the data storage 20.
  • An AIX computer can implement the host computer 14 with the following configuration: sixteen GB RAM, two-way SMP with one GHz PowerPC/Power4 CPUs.
  • The host computer 14 may be connected to the processors 12 through two fiber channel cards implementing the host adaptor array 16.
  • As shown in FIG. 2, each cache 26 may include two stacked lists, namely, a RANDOM list 28 and a sequential (“SEQ”) list 30.
  • The RANDOM list 28 lists cached pages that may have been randomly accessed pursuant to, e.g., a read demand, while the SEQ list 30 maintains a list of pages that were cached pursuant to speculative sequential caching or sequential read demands, the principles of which are set forth herein.
  • Each list is ordered from its most recently used (MRU) end at the top to its least recently used (LRU) end at the bottom.
  • A respective portion 32, 34 at the bottom of each list, e.g., the bottom 2%, can be thought of as an LRU portion.
  • For the SEQ list 30, a desired size 36 is dynamically determined and adjusted in accordance with the logic set forth below, to optimize cache performance.
  • A newly prefetched group of tracks, along with the asynchronous trigger track in the current group of tracks, is placed at the MRU (top) end of the SEQ list 30.
  • All unaccessed tracks in the current group of tracks are also moved to the MRU end of the list 30, to retain the benefits of asynchronous prefetching while ridding it of the anomalous behavior noted above.
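  • The following hedged C++ sketch illustrates this MRU placement; the track type, the list representation, and the O(n) moves are simplifications for illustration, not the patent's pseudocode.

      #include <cstdint>
      #include <list>
      #include <vector>

      using Track = std::uint64_t;

      // On an asynchronous trigger hit, the newly prefetched group, the
      // trigger track itself, and any still-unaccessed tracks of the
      // current group all move to the MRU end of the SEQ list, which
      // preserves the stack property.
      void onAsyncTriggerHit(std::list<Track>& seqList,           // front = MRU
                             const std::vector<Track>& newGroup,   // prefetched
                             const std::vector<Track>& unaccessed, // current group
                             Track trigger) {
          auto moveToMru = [&seqList](Track t) {
              seqList.remove(t);       // drop any old position (O(n) sketch)
              seqList.push_front(t);   // reinsert at the MRU end
          };
          for (Track t : newGroup) moveToMru(t);
          moveToMru(trigger);
          for (Track t : unaccessed) moveToMru(t);
      }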
  • Now referring to FIG. 3, the adaptive, self-tuning, low-overhead algorithm of the present invention is shown for dynamically partitioning the amount of cache space between the SEQ list 30 and the RANDOM list 28 to minimize the overall miss rate.
  • At block 38, the marginal utility of adding space to the SEQ list 30 is computed.
  • The marginal utility at block 38 is computed to be between s/L and 2s/L, and may be chosen for convenience to be the latter, wherein “s” represents the rate of misses from sequential cache for synchronous and asynchronous prefetching and “L” represents the length of the SEQ list 30 (e.g., in 4 KB pages).
  • Here, “s” is the sum, over potentially multiple streams, of the respective rates of sequential cache misses for synchronous and asynchronous prefetching, which is straightforward to observe.
  • This “marginal utility” may be regarded as a measure of how the rate of sequential cache misses changes as the size of the list changes.
  • Block 40 shows that the marginal utility of adding space to the RANDOM list 28 is also determined, empirically.
  • This marginal utility is determined to be 1/ΔL, wherein ΔL is the length of the bottom-most portion of the RANDOM list 28, measured during a time period defined by two successive cache hits in that bottom-most portion.
  • The time period for sampling for undertaking the empirical determination at block 40 may itself be adapted to actual operating conditions, although some fixed time period may also be used.
  • From block 40, the logic flows to decision diamond 42 to determine which marginal utility is greater. If the marginal utility of increasing the SEQ list exceeds the marginal utility of increasing the RANDOM list, the logic increases the desired size 36 (FIG. 2) of the SEQ list 30 at block 44; otherwise, it decreases the desired size 36 at block 46.
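  • The comparison at decision diamond 42 can equivalently be performed on each bottom hit in the RANDOM list by testing the ratio (2*s*ΔL)/L against 1 (compare line 4 of the pseudocode walkthrough below); the following C++ sketch is our illustration of that bookkeeping, with the surrounding structure assumed.

      // Count sequential misses between two successive hits in the bottom
      // deltaL tracks of RANDOM; on each such bottom hit, the sign of
      // (ratio - 1) gives the direction of adaptation.
      struct SeqSizeAdaptation {
          unsigned long seqmiss = 0;  // sequential misses since last bottom hit

          void onSequentialMiss() { ++seqmiss; }

          // deltaL: size of the bottom portion of RANDOM; seqLen: L.
          // Returns +1 to grow desiredSeqListSize, -1 to shrink it.
          int onRandomBottomHit(double deltaL, double seqLen) {
              const double ratio = (2.0 * seqmiss * deltaL) / seqLen;
              seqmiss = 0;                     // restart the sample window
              return (ratio > 1.0) ? +1 : -1;  // direction of “adapt”
          }
      };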
  • FIG. 4 shows an exemplary non-limiting logic flow for replacing pages in cache.
  • At decision diamond 48, it can be determined whether a boundary condition exists.
  • Boundary conditions include the RANDOM list 28 falling below 2% or exceeding 98% of overall cache size. Another boundary condition may exist if the initialization shown in the pseudocode below has not been executed.
  • If a boundary condition exists, the logic can flow to decision diamond 50 to determine whether the LRU track in the SEQ list 30 is older than the LRU track in the RANDOM list 28, it being understood that this test uses timestamps given to cached data in accordance with principles known in the art. If the test at decision diamond 50 is negative, the logic replaces a page from the RANDOM list 28 at block 52; otherwise, it replaces from the SEQ list 30 at block 54.
  • If no boundary condition exists, the logic flows to decision diamond 56 to determine whether the size of at least a portion of the SEQ list 30, e.g., the LRU portion 34 shown in FIG. 2, exceeds the desired size 36. If it does, the logic flows to block 54, and otherwise flows to block 52.
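  • A hedged C++ sketch of this replacement decision follows; the function signature and the use of plain numeric timestamps are our assumptions.

      #include <cstddef>

      enum class VictimList { Random, Seq };

      VictimList chooseVictimList(bool boundaryCondition,
                                  double seqLruTimestamp,    // LRU track of SEQ
                                  double randomLruTimestamp, // LRU track of RANDOM
                                  std::size_t seqListSize,
                                  std::size_t desiredSeqListSize) {
          if (boundaryCondition) {
              // Decision diamond 50: replace whichever LRU track is older.
              return (seqLruTimestamp < randomLruTimestamp) ? VictimList::Seq
                                                            : VictimList::Random;
          }
          // Decision diamond 56: evict from SEQ only while it exceeds its
          // desired size 36; otherwise evict from RANDOM.
          return (seqListSize > desiredSeqListSize) ? VictimList::Seq
                                                    : VictimList::Random;
      }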
  • The logic shown in FIG. 3 continuously or periodically loops back to block 38, such that the desired size of the SEQ list 30 is dynamically established during operation, changing as cache use dictates.
  • Lines 1-3 of the pseudocode are used during the initialization phase only.
  • The counter “seqmiss” tracks the number of sequential misses between two consecutive bottom hits in RANDOM, and is initialized to zero.
  • The variable “desiredSeqListSize” is the desired size 36 of the SEQ list 30, and is initially set to zero, meaning a boundary condition initially exists. The remaining logic thus starts only after SEQ is populated (see lines 69-73).
  • The variable “adapt” determines the instantaneous magnitude and direction of the adaptation to desiredSeqListSize.
  • Lines 4-50 describe the cache management policy.
  • The quantity “ratio” in line 4 can be set to (2*s*ΔL)/L in some non-limiting embodiments.
  • Line 70 (which carries out the actual adaptation) is executed only when a track is actually evicted from one of the lists.
  • Lines 11-27 deal with the case when a track in SEQ is hit. If the hit is in the bottom portion of the SEQ list (line 12) and “ratio” has become large (line 13), in other words, no hit has been observed in the bottom of the RANDOM list for a while, then “adapt” is set to 1 (line 14), meaning that “desiredSeqListSize” is increased at the fastest rate possible. If the hit track is an asynchronous trigger track (line 17), then line 18 asynchronously reads ahead the next sequential group of tracks. Lines 21-27 describe the implementation of a non-limiting way to detect sequential access patterns.
  • Lines 28-40 deal with a cache miss. In the case of a sequential miss (lines 29-31), a group of tracks is synchronously read ahead at line 32. The remaining lines deal with the detection of sequential access patterns.
  • Lines 41-50 (i) read the missing tracks from a given range of tracks; (ii) place all tracks in the given range at the MRU position; and (iii) set the asynchronous trigger.
  • Lines 51-73 implement the cache replacement policy of FIG. 4 above and carry out the adaptation. As is typical in multithreaded systems, the present invention assumes that these lines may run on a separate thread (line 51). If the size of the free queue drops below some predetermined threshold (line 52), then tracks are evicted from SEQ if it exceeds desiredSeqListSize, and tracks are evicted from RANDOM otherwise. In either case, the evicted tracks are placed on the free queue. Finally, lines 68-73 evict the LRU track from the desired list and effect an adaptation as described above.
  • The present invention combines caching with sequential prefetching, and does not require a history to be kept.
US10/954,937 2004-09-30 2004-09-30 System and method for dynamic sizing of cache sequential list Expired - Fee Related US7464246B2 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
US10/954,937 US7464246B2 (en) 2004-09-30 2004-09-30 System and method for dynamic sizing of cache sequential list
TW094133273A TWI393004B (zh) 2004-09-30 2005-09-26 System and method for dynamically changing the size of a cache sequential list
CNB2005101070534A CN100442249C (zh) 2004-09-30 2005-09-29 System and method for dynamic sizing of a cache sequential list
US12/032,851 US7509470B2 (en) 2004-09-30 2008-02-18 System and method for dynamic sizing of cache sequential list
US12/033,105 US7533239B2 (en) 2004-09-30 2008-02-19 System and method for dynamic sizing of cache sequential list
US12/060,431 US7793065B2 (en) 2004-09-30 2008-04-01 System and method for dynamic sizing of cache sequential list
US12/060,945 US7707382B2 (en) 2004-09-30 2008-04-02 System and method for dynamic sizing of cache sequential list

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/954,937 US7464246B2 (en) 2004-09-30 2004-09-30 System and method for dynamic sizing of cache sequential list

Related Child Applications (3)

Application Number Title Priority Date Filing Date
US12/032,851 Continuation US7509470B2 (en) 2004-09-30 2008-02-18 System and method for dynamic sizing of cache sequential list
US12/033,105 Division US7533239B2 (en) 2004-09-30 2008-02-19 System and method for dynamic sizing of cache sequential list
US12/033,105 Continuation US7533239B2 (en) 2004-09-30 2008-02-19 System and method for dynamic sizing of cache sequential list

Publications (2)

Publication Number Publication Date
US20060069871A1 US20060069871A1 (en) 2006-03-30
US7464246B2 true US7464246B2 (en) 2008-12-09

Family

ID=36100559

Family Applications (5)

Application Number Title Priority Date Filing Date
US10/954,937 Expired - Fee Related US7464246B2 (en) 2004-09-30 2004-09-30 System and method for dynamic sizing of cache sequential list
US12/032,851 Expired - Fee Related US7509470B2 (en) 2004-09-30 2008-02-18 System and method for dynamic sizing of cache sequential list
US12/033,105 Expired - Fee Related US7533239B2 (en) 2004-09-30 2008-02-19 System and method for dynamic sizing of cache sequential list
US12/060,431 Expired - Fee Related US7793065B2 (en) 2004-09-30 2008-04-01 System and method for dynamic sizing of cache sequential list
US12/060,945 Expired - Fee Related US7707382B2 (en) 2004-09-30 2008-04-02 System and method for dynamic sizing of cache sequential list

Family Applications After (4)

Application Number Title Priority Date Filing Date
US12/032,851 Expired - Fee Related US7509470B2 (en) 2004-09-30 2008-02-18 System and method for dynamic sizing of cache sequential list
US12/033,105 Expired - Fee Related US7533239B2 (en) 2004-09-30 2008-02-19 System and method for dynamic sizing of cache sequential list
US12/060,431 Expired - Fee Related US7793065B2 (en) 2004-09-30 2008-04-01 System and method for dynamic sizing of cache sequential list
US12/060,945 Expired - Fee Related US7707382B2 (en) 2004-09-30 2008-04-02 System and method for dynamic sizing of cache sequential list

Country Status (3)

Country Link
US (5) US7464246B2 (en)
CN (1) CN100442249C (zh)
TW (1) TWI393004B (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080195834A1 (en) * 2004-09-30 2008-08-14 International Business Machines Corporation System and method for dynamic sizing of cache sequential list
US20140095802A1 (en) * 2012-09-28 2014-04-03 Oracle International Corporation Caching Large Objects In A Computer System With Mixed Data Warehousing And Online Transaction Processing Workload
US9268692B1 (en) 2012-04-05 2016-02-23 Seagate Technology Llc User selectable caching
US9542324B1 (en) 2012-04-05 2017-01-10 Seagate Technology Llc File associated pinning
US9996476B2 (en) 2015-09-03 2018-06-12 International Business Machines Corporation Management of cache lists via dynamic sizing of the cache lists
US20180239548A1 (en) * 2017-02-23 2018-08-23 SK Hynix Inc. Operating method of memory system
US10282543B2 (en) * 2017-05-03 2019-05-07 International Business Machines Corporation Determining whether to destage write data in cache to storage based on whether the write data has malicious data
US10445497B2 (en) 2017-05-03 2019-10-15 International Business Machines Corporation Offloading processing of writes to determine malicious data from a first storage system to a second storage system
US10613982B1 (en) 2012-01-06 2020-04-07 Seagate Technology Llc File-aware caching driver
US11188641B2 (en) 2017-04-07 2021-11-30 International Business Machines Corporation Using a characteristic of a process input/output (I/O) activity and data subject to the I/O activity to determine whether the process is a suspicious process

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7260679B2 (en) * 2004-10-12 2007-08-21 International Business Machines Corporation Apparatus and method to manage a data cache using a first and second least recently used list
US8250316B2 (en) * 2006-06-06 2012-08-21 Seagate Technology Llc Write caching random data and sequential data simultaneously
CN100426258C (zh) * 2006-09-08 2008-10-15 VIA Technologies, Inc. Embedded system and method for determining its buffer size
US7783839B2 (en) 2007-01-08 2010-08-24 International Business Machines Corporation Using different algorithms to destage different types of data from cache
US7702857B2 (en) * 2007-08-22 2010-04-20 International Business Machines Corporation Adjusting parameters used to prefetch data from storage into cache
US8239640B2 (en) * 2008-10-09 2012-08-07 Dataram, Inc. System for controlling performance aspects of a data storage and access routine
KR101570179B1 (ko) * 2008-12-08 2015-11-18 Samsung Electronics Co., Ltd. Cache synchronization method and system for fast power-off
US8914417B2 (en) 2009-01-07 2014-12-16 International Business Machines Corporation Apparatus, system, and method for maintaining a context stack
US8095738B2 (en) * 2009-06-15 2012-01-10 International Business Machines Corporation Differential caching mechanism based on media I/O speed
US8510785B2 (en) * 2009-10-19 2013-08-13 Motorola Mobility Llc Adaptive media caching for video on demand
WO2011106458A1 (en) * 2010-02-24 2011-09-01 Marvell World Trade Ltd. Caching based on spatial distribution of accesses to data storage devices
CN101894048B (zh) * 2010-05-07 2012-11-14 Institute of Computing Technology, Chinese Academy of Sciences Method and system for dynamic cache partitioning based on phase analysis
CN101853218B (zh) * 2010-05-12 2015-05-20 ZTE Corporation Reading method and system for a disk array
US9043533B1 (en) * 2010-06-29 2015-05-26 Emc Corporation Sizing volatile memory cache based on flash-based cache usage
US8590001B2 (en) * 2010-08-20 2013-11-19 Promise Technology, Inc. Network storage system with data prefetch and method of operation thereof
US8812788B2 (en) 2010-11-09 2014-08-19 Lsi Corporation Virtual cache window headers for long term access history
US8533393B1 (en) * 2010-12-14 2013-09-10 Expedia, Inc. Dynamic cache eviction
US8650354B2 (en) * 2011-07-22 2014-02-11 International Business Machines Corporation Prefetching tracks using multiple caches
US8631190B2 (en) * 2011-07-22 2014-01-14 International Business Machines Corporation Prefetching data tracks and parity data to use for destaging updated tracks
US8566530B2 (en) 2011-07-22 2013-10-22 International Business Machines Corporation Prefetching source tracks for destaging updated tracks in a copy relationship
US9069678B2 (en) 2011-07-26 2015-06-30 International Business Machines Corporation Adaptive record caching for solid state disks
CN102298508B (zh) * 2011-09-07 2014-08-06 Ramaxel Technology (Shenzhen) Co., Ltd. Stream-based read-ahead method and apparatus for solid state disks
US9110810B2 (en) * 2011-12-06 2015-08-18 Nvidia Corporation Multi-level instruction cache prefetching
CN103778069B (zh) * 2012-10-18 2017-09-08 Shenzhen ZTE Microelectronics Technology Co., Ltd. Method and apparatus for adjusting cache block length of a cache memory
US9497489B2 (en) 2013-03-12 2016-11-15 Google Technology Holdings LLC System and method for stream fault tolerance through usage based duplication and shadow sessions
DE102013204469A1 (de) * 2013-03-14 2014-09-18 Robert Bosch Gmbh Microelectrochemical sensor and method for operating a microelectrochemical sensor
US9652406B2 (en) * 2015-04-30 2017-05-16 International Business Machines Corporation MRU batching to reduce lock contention
US20170116127A1 (en) * 2015-10-22 2017-04-27 Vormetric, Inc. File system adaptive read ahead
KR102429903B1 (ko) * 2015-12-03 2022-08-05 Samsung Electronics Co., Ltd. Page fault handling method for a non-volatile main memory system
KR102415875B1 (ko) * 2017-07-17 2022-07-04 SK hynix Inc. Memory system and operating method of memory system
US20190303037A1 (en) * 2018-03-30 2019-10-03 Ca, Inc. Using sequential read intention to increase data buffer reuse
US11169919B2 (en) 2019-05-12 2021-11-09 International Business Machines Corporation Cache preference for selected volumes within a storage system
US11237730B2 (en) 2019-05-12 2022-02-01 International Business Machines Corporation Favored cache status for selected volumes within a storage system
US11176052B2 (en) 2019-05-12 2021-11-16 International Business Machines Corporation Variable cache status for selected volumes within a storage system
US11151035B2 (en) 2019-05-12 2021-10-19 International Business Machines Corporation Cache hit ratios for selected volumes within a storage system
US11163698B2 (en) * 2019-05-12 2021-11-02 International Business Machines Corporation Cache hit ratios for selected volumes using synchronous I/O

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5761715A (en) * 1995-08-09 1998-06-02 Kabushiki Kaisha Toshiba Information processing device and cache memory with adjustable number of ways to reduce power consumption based on cache miss ratio
US6141731A (en) 1998-08-19 2000-10-31 International Business Machines Corporation Method and system for managing data in cache using multiple data structures
US6260115B1 (en) * 1999-05-13 2001-07-10 Storage Technology Corporation Sequential detection and prestaging methods for a disk storage subsystem
US6327644B1 (en) 1998-08-18 2001-12-04 International Business Machines Corporation Method and system for managing data in cache
US20030105928A1 (en) 2001-12-04 2003-06-05 International Business Machines Corporation Method, system, and program for destaging data in cache
US20040098541A1 (en) 2002-11-14 2004-05-20 International Business Machines Corporation System and method for implementing an adaptive replacement cache policy

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5432919A (en) * 1989-07-06 1995-07-11 Digital Equipment Corporation Sequential reference management for cache memories
US5732242A (en) * 1995-03-24 1998-03-24 Silicon Graphics, Inc. Consistently specifying way destinations through prefetching hints
US5940877A (en) * 1997-06-12 1999-08-17 International Business Machines Corporation Cache address generation with and without carry-in
US6684294B1 (en) * 2000-03-31 2004-01-27 Intel Corporation Using an access log for disk drive transactions
US6957305B2 (en) * 2002-08-29 2005-10-18 International Business Machines Corporation Data streaming mechanism in a microprocessor
US6961821B2 (en) * 2002-10-16 2005-11-01 International Business Machines Corporation Reconfigurable cache controller for nonuniform memory access computer systems
CN1849591A (zh) * 2002-11-22 2006-10-18 Koninklijke Philips Electronics N.V. Using cache miss patterns to address a stride prediction table
US7464246B2 (en) 2004-09-30 2008-12-09 International Business Machines Corporation System and method for dynamic sizing of cache sequential list

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5761715A (en) * 1995-08-09 1998-06-02 Kabushiki Kaisha Toshiba Information processing device and cache memory with adjustable number of ways to reduce power consumption based on cache miss ratio
US6327644B1 (en) 1998-08-18 2001-12-04 International Business Machines Corporation Method and system for managing data in cache
US6141731A (en) 1998-08-19 2000-10-31 International Business Machines Corporation Method and system for managing data in cache using multiple data structures
US6260115B1 (en) * 1999-05-13 2001-07-10 Storage Technology Corporation Sequential detection and prestaging methods for a disk storage subsystem
US20030105928A1 (en) 2001-12-04 2003-06-05 International Business Machines Corporation Method, system, and program for destaging data in cache
US20040098541A1 (en) 2002-11-14 2004-05-20 International Business Machines Corporation System and method for implementing an adaptive replacement cache policy

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARC: A Self-Tuning, Low Overhead Replacement Cache by Megiddo & Modha; USENIX File & Storage Tech. Conf.; Mar. 31, 2003, San Francisco, CA.
IBM Dossier ARC920020050; Method and System for Adaptive Replacement Cache; Modha and Megiddo, Jan. 29, 2003.

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080195834A1 (en) * 2004-09-30 2008-08-14 International Business Machines Corporation System and method for dynamic sizing of cache sequential list
US7793065B2 (en) 2004-09-30 2010-09-07 International Business Machines Corporation System and method for dynamic sizing of cache sequential list
US10698826B1 (en) 2012-01-06 2020-06-30 Seagate Technology Llc Smart file location
US10613982B1 (en) 2012-01-06 2020-04-07 Seagate Technology Llc File-aware caching driver
US9542324B1 (en) 2012-04-05 2017-01-10 Seagate Technology Llc File associated pinning
US9268692B1 (en) 2012-04-05 2016-02-23 Seagate Technology Llc User selectable caching
US10339069B2 (en) * 2012-09-28 2019-07-02 Oracle International Corporation Caching large objects in a computer system with mixed data warehousing and online transaction processing workload
US20140095802A1 (en) * 2012-09-28 2014-04-03 Oracle International Corporation Caching Large Objects In A Computer System With Mixed Data Warehousing And Online Transaction Processing Workload
US9996476B2 (en) 2015-09-03 2018-06-12 International Business Machines Corporation Management of cache lists via dynamic sizing of the cache lists
US20180239548A1 (en) * 2017-02-23 2018-08-23 SK Hynix Inc. Operating method of memory system
US10656846B2 (en) * 2017-02-23 2020-05-19 SK Hynix Inc. Operating method of memory system
US11188641B2 (en) 2017-04-07 2021-11-30 International Business Machines Corporation Using a characteristic of a process input/output (I/O) activity and data subject to the I/O activity to determine whether the process is a suspicious process
US11651070B2 (en) 2017-04-07 2023-05-16 International Business Machines Corporation Using a characteristic of a process input/output (I/O) activity and data subject to the I/O activity to determine whether the process is a suspicious process
US10282543B2 (en) * 2017-05-03 2019-05-07 International Business Machines Corporation Determining whether to destage write data in cache to storage based on whether the write data has malicious data
US10445497B2 (en) 2017-05-03 2019-10-15 International Business Machines Corporation Offloading processing of writes to determine malicious data from a first storage system to a second storage system
US11120128B2 (en) 2017-05-03 2021-09-14 International Business Machines Corporation Offloading processing of writes to determine malicious data from a first storage system to a second storage system
US11144639B2 (en) 2017-05-03 2021-10-12 International Business Machines Corporation Determining whether to destage write data in cache to storage based on whether the write data has malicious data

Also Published As

Publication number Publication date
US20080183969A1 (en) 2008-07-31
US20080140940A1 (en) 2008-06-12
CN100442249C (zh) 2008-12-10
US7793065B2 (en) 2010-09-07
TWI393004B (zh) 2013-04-11
US20080195834A1 (en) 2008-08-14
US7509470B2 (en) 2009-03-24
US20060069871A1 (en) 2006-03-30
CN1755652A (zh) 2006-04-05
US7707382B2 (en) 2010-04-27
US20080140939A1 (en) 2008-06-12
TW200627144A (en) 2006-08-01
US7533239B2 (en) 2009-05-12

Similar Documents

Publication Publication Date Title
US7464246B2 (en) System and method for dynamic sizing of cache sequential list
US6141731A (en) Method and system for managing data in cache using multiple data structures
US6823428B2 (en) Preventing cache floods from sequential streams
US8601216B2 (en) Method and system for removing cache blocks
US9971513B2 (en) System and method for implementing SSD-based I/O caches
Gill et al. SARC: Sequential Prefetching in Adaptive Replacement Cache.
US6381677B1 (en) Method and system for staging data into cache
US6327644B1 (en) Method and system for managing data in cache
US20030105926A1 (en) Variable size prefetch cache
US8095738B2 (en) Differential caching mechanism based on media I/O speed
US9158706B2 (en) Selective space reclamation of data storage memory employing heat and relocation metrics
US8874840B2 (en) Adaptive prestaging in a storage controller
US20040123043A1 (en) High performance memory device-state aware chipset prefetcher
US20090204765A1 (en) Data block frequency map dependent caching
US7120759B2 (en) Storage system and method for prestaging data in a cache for improved performance
US9996476B2 (en) Management of cache lists via dynamic sizing of the cache lists
US20220365878A1 (en) Prefetching operations in storage devices
Fedorov et al. Speculative paging for future NVM storage
Ou et al. Clean first or dirty first? a cost-aware self-adaptive buffer replacement policy
CN109086224B (zh) 一种自适应分类重用距离来捕捉热数据的缓存方法

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GILL, BINNY SHER;MODHA, DHARMENDRA SHANTILAL;REEL/FRAME:015863/0929

Effective date: 20040929

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

REMI Maintenance fee reminder mailed
FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20201209