GB2276962A - User-defined priority for cache replacement mechanism. - Google Patents


Info

Publication number
GB2276962A
GB2276962A (application GB9404866A)
Authority
GB
United Kingdom
Prior art keywords
cache
priority
user
replacement
cache line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9404866A
Other versions
GB9404866D0 (en)
GB2276962B (en)
Inventor
Albert Stephen Hilditch
Current Assignee
Fujitsu Services Ltd
Original Assignee
Fujitsu Services Ltd
Priority date
Filing date
Publication date
Priority claimed from GB939307359A external-priority patent/GB9307359D0/en
Application filed by Fujitsu Services Ltd filed Critical Fujitsu Services Ltd
Priority to GB9404866A priority Critical patent/GB2276962B/en
Publication of GB9404866D0 publication Critical patent/GB9404866D0/en
Publication of GB2276962A publication Critical patent/GB2276962A/en
Application granted granted Critical
Publication of GB2276962B publication Critical patent/GB2276962B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/126Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
    • G06F12/127Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning using additional replacement algorithms

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

An n-way set-associative cache (where n is an integer greater than 1), includes a replacement mechanism for selecting a cache line for replacement. Each cache line has an associated priority tag indicating a user-defined priority for that cache line. The replacement mechanism comprises means for selecting a cache line with the lowest user-defined priority in a current set of cache lines, and means (e.g. based on recency of usage) for choosing between cache lines of equal priority if there is more than one cache line with the lowest user-defined priority in the set.

Description

CACHE REPLACEMENT MECHANISM

Background to the Invention

This invention relates to set-associative cache memories.
In computer systems, it is well known to employ one or more cache memories of various sizes. The aim is to keep the most useful data in a small, fast cache in order to avoid having to retrieve the data from the larger, slower RAM. It is common to design levels of caching of different sizes and speeds.
If the required data is in a cache, it is said that a "hit" has occurred, otherwise a "miss" has occurred. The percentage of misses is called the "miss rate".
Apart from the cache size, there are two major design decisions when implementing a cache:

(1) The number of cache elements scanned simultaneously, sometimes called the "set associativity" of the cache. If just one element at a time is scanned, the cache is referred to as direct-mapped. If n elements at a time are scanned (where n is greater than 1), the cache is referred to as an n-way set-associative cache. The usual choice for n is 2 or 4. If the whole cache is scanned simultaneously, it is referred to as fully associative. Miss rates generally decrease as the set associativity increases; however, the cost of implementation also increases.

(2) The method used to decide which of the scanned cache elements to replace with the desired data on a cache miss, called the cache replacement policy. This has no meaning for a direct-mapped cache, since there is only one place to put the desired data. The two standard replacement methods are "random replacement", in which the desired data is placed in one of the scanned cache elements at random, and "least recently used (LRU) replacement", in which the scanned element that has been accessed least recently is replaced by the desired data. LRU replacement usually delivers the smallest miss rate but is more expensive to implement.
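As an illustrative sketch (not part of the patent text), the two standard policies can be written as follows; the names `set_lines` and `last_used` are hypothetical, and `last_used` stands in for whatever recency information the hardware keeps:

```python
import random

def choose_victim_random(set_lines):
    """Random replacement: evict any line in the addressed set."""
    return random.choice(set_lines)

def choose_victim_lru(set_lines, last_used):
    """LRU replacement: evict the line accessed least recently.

    last_used maps each line to a monotonically increasing
    access counter; the smallest value marks the oldest access."""
    return min(set_lines, key=lambda line: last_used[line])
```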
The object of the present invention is to provide a new cache replacement mechanism, which is potentially more efficient than these known replacement mechanisms.
Summary of the Invention

According to the invention there is provided an n-way set-associative cache (where n is an integer greater than 1), including a replacement mechanism for selecting a cache line for replacement, characterised in that each cache line has an associated priority tag indicating a user-defined priority for that cache line, and the replacement mechanism comprises means for selecting a cache line with the lowest user-defined priority in a current set of cache lines, and means for choosing between cache lines of equal priority if there is more than one cache line with said lowest user-defined priority in said set.
The invention thus provides a priority replacement policy (PRP), which replaces data in cache lines primarily according to user-defined priorities and secondarily according to an alternative replacement policy. The alternative replacement policy is used to choose between the data in two cache lines that have the same user-defined priority. The alternative replacement policy is said to resolve the replacement choice between the equal priority data within the currently addressed cache lines. This alternative replacement policy may, for example, be a least-recently-used replacement policy, or a random selection.
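A minimal sketch of the priority replacement policy just described, assuming hypothetical `priority` and `last_used` maps and taking LRU as the alternative replacement policy:

```python
def choose_victim_prp(set_lines, priority, last_used):
    """Priority replacement policy (PRP): primarily evict by lowest
    user-defined priority; if several lines share that lowest
    priority, fall back to the alternative (here LRU) policy."""
    lowest = min(priority[line] for line in set_lines)
    candidates = [line for line in set_lines if priority[line] == lowest]
    if len(candidates) == 1:
        return candidates[0]
    # the alternative policy resolves the tie between equal-priority lines
    return min(candidates, key=lambda line: last_used[line])
```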
This invention enables the data or instructions associated with a given process, application, user or user group to have relative priority within the cache. The highest priority data or instructions stay in the cache for as long as possible.
Brief Description of the Drawings

Figure 1 shows a cache system with priority tags.
Figure 2 is a flow chart indicating the operation of the cache system on a cache hit.
Figure 3 is a flow chart indicating the operation of the cache system on a cache miss.
Description of an Embodiment of the Invention

One cache system in accordance with the invention will now be described by way of example with reference to the accompanying drawings.
Referring to Figure 1, the cache system comprises a 4-way set-associative cache with four cache data RAMs 10 and four priority tag RAMs 12. The tag RAMs contain a user-defined priority tag for each line of data in the cache. The priority may be defined explicitly, or inherited implicitly from the data's process, application, user or user's group.
The cache system also comprises a least-recently-used (LRU) replacement mechanism 14 and a priority replacement policy (PRP) mechanism 16. The LRU mechanism keeps recency of usage information relating to each cache line, and may be conventional. The operation of the PRP mechanism will be described below.
An input memory address is received in an address register 18. This address is hashed by a hashing circuit 19 and then applied in parallel to the four cache data RAMs, so as to address one line from each RAM. The contents of the four addressed cache lines are examined to see if the desired data is resident in the cache.
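The patent does not specify the hashing function. A common illustrative choice (assumed here, with an assumed set count and line size) is to drop the byte-offset bits within a line and take the line address modulo the number of sets:

```python
NUM_SETS = 256   # assumed number of sets; the patent leaves this open
LINE_SIZE = 64   # assumed cache line size in bytes

def set_index(address):
    """Drop the byte-offset bits within a line, then take the line
    address modulo the number of sets. The resulting index selects
    one line from each of the four data RAMs in parallel."""
    return (address // LINE_SIZE) % NUM_SETS
```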
Referring to Figure 2, if one of the addressed cache lines contains the desired data, then there is a hit (20) and the desired data can be immediately accessed (21) from the cache.
The LRU mechanism 14 is informed (22) so that it can update the recency-of-usage information for that cache line.
Referring to Figure 3, if there is a cache miss (30) the desired data is requested (31) from slower memory. The PRP mechanism 16 then compares (32) the priority tags associated with the four addressed cache lines, to determine which of the four addressed cache lines is of lowest priority. If only one of the four addressed cache lines has this lowest priority, that line is chosen to receive the desired data from the slower memory. If on the other hand more than one cache line has this lowest priority, the LRU mechanism 14 is invoked (33) to resolve the replacement choice.
When the required data is received (34) from slower memory it is written into the cache line selected for replacement. The value of the priority tag of the data is then determined (35), for example from the priority of its process, stored in a process block. This priority tag is written into the corresponding location of the priority tag RAM 12. The LRU mechanism is then informed (36) of the identity of the cache line into which the new data has been written, so that the LRU mechanism can update the recency-of-usage information for that line.
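The hit and miss flows of Figures 2 and 3 can be combined into a toy model; every name and the internal representation below are assumptions for illustration, not the patent's implementation:

```python
class PRPCache:
    """Toy n-way set-associative cache with per-line priority tags."""

    def __init__(self, num_sets=4, ways=4):
        self.ways = ways
        # each way holds (tag, data, priority, last_used) or None
        self.sets = [[None] * ways for _ in range(num_sets)]
        self.clock = 0  # stands in for the LRU mechanism's recency info

    def access(self, set_idx, tag, data, priority):
        self.clock += 1
        lines = self.sets[set_idx]
        # Figure 2: on a hit, access the data and update recency of usage
        for i, line in enumerate(lines):
            if line is not None and line[0] == tag:
                lines[i] = (tag, line[1], line[2], self.clock)
                return ("hit", line[1])
        # Figure 3: on a miss, fill an empty way if one exists; otherwise
        # evict a line with the lowest priority tag, with LRU breaking ties
        empty = [i for i, line in enumerate(lines) if line is None]
        if empty:
            victim = empty[0]
        else:
            lowest = min(line[2] for line in lines)
            cands = [i for i, line in enumerate(lines) if line[2] == lowest]
            victim = min(cands, key=lambda i: lines[i][3])
        # write the new data with its priority tag and update recency
        lines[victim] = (tag, data, priority, self.clock)
        return ("miss", data)
```

With two ways, filling a set and then inserting a third line evicts the resident line whose priority tag is lowest, regardless of how recently it was used.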

Claims (3)

1. An n-way set-associative cache (where n is an integer greater than 1), including a replacement mechanism for selecting a cache line for replacement, characterised in that each cache line has an associated priority tag indicating a user-defined priority for that cache line, and the replacement mechanism comprises means for selecting a cache line with the lowest user-defined priority in a current set of cache lines, and means for choosing between cache lines of equal priority if there is more than one cache line with said lowest user-defined priority in said set.
2. A cache according to claim 1 wherein said means for choosing between cache lines of equal priority comprises a least-recently-used replacement mechanism.
3. A cache system substantially as hereinbefore described with reference to the accompanying drawings.
GB9404866A 1993-04-08 1994-03-14 Cache replacement mechanism Expired - Fee Related GB2276962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9404866A GB2276962B (en) 1993-04-08 1994-03-14 Cache replacement mechanism

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB939307359A GB9307359D0 (en) 1993-04-08 1993-04-08 Cache replacement mechanism
GB9404866A GB2276962B (en) 1993-04-08 1994-03-14 Cache replacement mechanism

Publications (3)

Publication Number Publication Date
GB9404866D0 GB9404866D0 (en) 1994-04-27
GB2276962A true GB2276962A (en) 1994-10-12
GB2276962B GB2276962B (en) 1997-05-28

Family

ID=26302726

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9404866A Expired - Fee Related GB2276962B (en) 1993-04-08 1994-03-14 Cache replacement mechanism

Country Status (1)

Country Link
GB (1) GB2276962B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0817118A2 (en) * 1996-06-27 1998-01-07 Cirrus Logic, Inc. Memory management of texture maps
US5897651A (en) * 1995-11-13 1999-04-27 International Business Machines Corporation Information handling system including a direct access set associative cache and method for accessing same
GB2365587A (en) * 2000-02-09 2002-02-20 Nec Corp Removing data from a cache memory using user and device attribute information
WO2006032508A1 (en) * 2004-09-23 2006-03-30 Sap Ag Cache eviction
WO2008043670A1 (en) * 2006-10-10 2008-04-17 International Business Machines Corporation Managing cache data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0391871A2 (en) * 1989-04-03 1990-10-10 International Business Machines Corporation Method for managing a prioritized cache



Also Published As

Publication number Publication date
GB9404866D0 (en) 1994-04-27
GB2276962B (en) 1997-05-28

Similar Documents

Publication Publication Date Title
US5737752A (en) Cache replacement mechanism
US7844778B2 (en) Intelligent cache replacement mechanism with varying and adaptive temporal residency requirements
EP1370946B1 (en) Cache way prediction based on instruction base register
EP1654660B1 (en) A method of data caching
EP1149342B1 (en) Method and apparatus for managing temporal and non-temporal data in a single cache structure
US6282617B1 (en) Multiple variable cache replacement policy
US6826651B2 (en) State-based allocation and replacement for improved hit ratio in directory caches
US7020748B2 (en) Cache replacement policy to mitigate pollution in multicore processors
US20080215816A1 (en) Apparatus and method for filtering unused sub-blocks in cache memories
US20080046736A1 (en) Data Processing System and Method for Reducing Cache Pollution by Write Stream Memory Access Patterns
US10725923B1 (en) Cache access detection and prediction
US10628318B2 (en) Cache sector usage prediction
US6625694B2 (en) System and method for allocating a directory entry for use in multiprocessor-node data processing systems
EP0604015A2 (en) Cache control system
US5809526A (en) Data processing system and method for selective invalidation of outdated lines in a second level memory in response to a memory request initiated by a store operation
US5530834A (en) Set-associative cache memory having an enhanced LRU replacement strategy
GB2546245A (en) Cache memory
US6145057A (en) Precise method and system for selecting an alternative cache entry for replacement in response to a conflict between cache operation requests
US8473686B2 (en) Computer cache system with stratified replacement
US6311253B1 (en) Methods for caching cache tags
KR100395768B1 (en) Multi-level cache system
US6671780B1 (en) Modified least recently allocated cache replacement method and apparatus that allows skipping a least recently allocated cache block
GB2276962A (en) User-defined priority for cache replacement mechanism.
US6397298B1 (en) Cache memory having a programmable cache replacement scheme
US11334488B2 (en) Cache management circuits for predictive adjustment of cache control policies based on persistent, history-based cache control information

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20130314