WO2008043670A1 - Gestion de données de cache - Google Patents

Gestion de données de cache (Cache data management)

Info

Publication number
WO2008043670A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
cache
priority level
implemented method
priority
Prior art date
Application number
PCT/EP2007/060264
Other languages
English (en)
Inventor
William Maron
Greg Mewhinney
Mysore Sathyanarayana Srinivas
David Blair Whitworth
Original Assignee
International Business Machines Corporation
IBM United Kingdom Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/539,889 (US20080086598A1)
Priority claimed from US11/539,894 (US20080086599A1)
Application filed by International Business Machines Corporation and IBM United Kingdom Limited
Publication of WO2008043670A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/12 Replacement control
    • G06F 12/121 Replacement control using replacement algorithms
    • G06F 12/126 Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
    • G06F 12/127 Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning, using additional replacement algorithms

Definitions

  • the present invention relates generally to an improved data processing system and in particular to a computer implemented method and apparatus for processing data. Still more particularly, the present invention relates to a computer implemented method, apparatus, and computer usable program code for managing data in a cache.
  • a cache is a section of memory used to store data that is used more frequently than data in storage locations that take longer to access. Processors typically use caches to reduce the average time required to access memory.
  • the processor first checks to see whether that memory location is present in the cache. If the processor finds that the memory location is present in the cache, a cache hit has occurred, and the processor immediately reads or writes the data in the cache line. Otherwise, a cache miss has occurred.
  • a cache line is a location in the cache that has a tag containing the index of the data in main memory that is stored in the cache. This cache line is also called a cache block.
  • L1: level one (cache)
  • L2: level two (cache)
  • Streaming data is data that is accessed sequentially, perhaps modified, and then never referenced again.
  • Locking applies to data, especially associative data, that may be referenced multiple times or after long periods of idle time. Allocation and replacement are usually handled by random, round robin, or least recently used (LRU) algorithms.
  • Software could detect the type of data pattern it is using and should use a resource management algorithm concept to help hardware minimize memory latencies.
  • Software directed set allocation and replacement methods in a set associative cache will create "virtual" operating spaces for each application.
  • a cache may be divided into multiple ways, so that data for an entry may be stored in one of multiple locations. A way is also referred to as a set.
  • Opportunistic describes random data accesses.
  • Pseudo-LRU is an approximated replacement policy to keep track of the order in which lines within a cache congruence class are accessed, so that only the least recently accessed line is replaced by new data when there is a cache miss.
  • the p-LRU is updated such that the last item accessed is now most recently used, and the second to least recently used now becomes the least recently used data.
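  • The two bullets above can be sketched in software. The following is an illustrative tree-based pseudo-LRU model for one 8-way congruence class, not the patent's hardware: 7 bits form a binary tree whose pointers lead toward the approximately least recently used way.

```python
class TreePLRU:
    """Illustrative pseudo-LRU for one 8-way congruence class,
    using a 7-bit binary tree (an assumption; the patent does not
    specify this exact encoding)."""

    def __init__(self):
        # Node 0 is the root; the children of node i are 2i+1 and 2i+2.
        self.bits = [0] * 7

    def touch(self, way):
        """Mark `way` as most recently used: point every node on
        its path away from it."""
        node = 0
        for half in (4, 2, 1):  # ways covered by each child subtree
            go_right = (way // half) % 2
            self.bits[node] = 1 - go_right  # point to the other subtree
            node = 2 * node + 1 + go_right

    def victim(self):
        """Follow the tree bits to the approximate LRU way, which is
        replaced by new data on a cache miss."""
        node, way = 0, 0
        for half in (4, 2, 1):
            b = self.bits[node]
            way += half * b
            node = 2 * node + 1 + b
        return way
```

Touching ways 0 through 7 in order leaves way 0 as the victim, matching true LRU; the approximation only diverges under interleaved access patterns.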
  • the illustrative embodiments provide a computer implemented method, apparatus, and computer usable program code for managing data in a cache and for establishing a priority level for said data in the cache.
  • a determination is made whether data is designated for slower aging within the cache during execution of instructions for an application.
  • the priority level for the data in the cache is set in response to a determination that the data is designated for slower aging.
  • the priority level indicates that the data is aged more slowly than other data without the priority level.
  • Data which has been so designated by an application can be identified and then aged in the cache at a slower rate than other data in the cache that is undesignated for slower aging in response to identifying the data in the cache.
  • FIG. 1 is a block diagram of a data processing system in which the illustrative embodiments may be implemented
  • FIG. 2 is a diagram illustrating a processor system in accordance with the illustrative embodiments
  • FIG. 3 is a typical software architecture for a server-client system in accordance with the illustrative embodiments
  • Figure 4 is an exemplary cache priority table in accordance with the illustrative embodiments.
  • FIG. 5 is a flowchart for a process for establishing cache priority information in accordance with the illustrative embodiments.
  • FIG. 6 is a flowchart for a process for establishing a cache priority level in accordance with the illustrative embodiments.
  • Data processing system 100 is an example of a computer in which processes and an apparatus of the illustrative embodiments may be located.
  • data processing system 100 employs a hub architecture including a north bridge and memory controller hub (MCH) 102 and a south bridge and input/output (I/O) controller hub (ICH) 104.
  • Processor unit 106, main memory 108, and graphics processor 110 are connected to north bridge and memory controller hub 102.
  • Graphics processor 110 may be connected to the MCH through an accelerated graphics port (AGP), for example.
  • Processor unit 106 contains a set of one or more processors. When more than one processor is present, these processors may be separate processors in separate packages. Alternatively, the processors may be multiple cores in a package. Further, the processors may be multiple multi-core units.
  • One example is a Cell Broadband Engine™ processor, which is a heterogeneous processor.
  • This processor has an architecture that is directed toward distributed processing.
  • This structure enables implementation of a wide range of single or multiple processor and memory configurations, in order to optimally address many different systems and application requirements.
  • This type of processor can consist of a single chip, a multi-chip module (or modules), or multiple single-chip modules on a motherboard or other second-level package, depending on the technology used and the cost/performance characteristics of the intended implementation.
  • a Cell Broadband Engine™ has a PowerPC Processor Element (PPE) and Synergistic Processor Units (SPUs).
  • the PPE is a general purpose processing unit that can perform system management functions, like addressing memory-protection tables. SPUs are less complex computation units that do not have the system management functions. Instead, the SPUs provide computational processing to applications and are managed by the PPE.
  • local area network (LAN) adapter 112 connects to south bridge and I/O controller hub 104. Audio adapter 116, keyboard and mouse adapter 120, modem 122, read only memory (ROM) 124, hard disk drive (HDD) 126, CD-ROM drive 130, universal serial bus (USB) ports and other communications ports 132, and PCI/PCIe devices 134 connect to south bridge and I/O controller hub 104 through bus 138 and bus 140.
  • PCI/PCIe devices may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers. PCI uses a card bus controller, while PCIe does not.
  • ROM 124 may be, for example, a flash binary input/output system (BIOS).
  • Hard disk drive 126 and CD-ROM drive 130 may use, for example, an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface.
  • a super I/O (SIO) device 136 may be connected to south bridge and I/O controller hub 104.
  • An operating system runs on processor unit 106 and coordinates and provides control of various components within data processing system 100 in Figure 1.
  • the operating system may be a commercially available operating system such as Microsoft® Windows® XP.
  • An object oriented programming system, such as the Java™ programming system, may run in conjunction with the operating system and provide calls to the operating system from Java programs or applications executing on data processing system 100 (Java is a trademark of Sun Microsystems, Inc. in the United States, other countries, or both).
  • Instructions for the operating system, the object-oriented programming system, and applications or programs are located on storage devices, such as hard disk drive 126, and may be loaded into main memory 108 for execution by processor unit 106.
  • the processes of the illustrative embodiments are performed by processor unit 106 using computer implemented instructions, which may be located in a memory such as, for example, main memory 108, read only memory 124, or in one or more peripheral devices.
  • the hardware may vary depending on the implementation.
  • Other internal hardware or peripheral devices, such as flash memory, equivalent non-volatile memory, or optical disk drives and the like, may be used in addition to or in place of the hardware.
  • the processes of the illustrative embodiments may be applied to a multiprocessor data processing system.
  • data processing system 100 may be a personal digital assistant (PDA), which is configured with flash memory to provide non-volatile memory for storing operating system files and/or user-generated data.
  • a bus system may comprise one or more buses, such as a system bus, an I/O bus, and a PCI bus. Of course, the bus system may be implemented using any type of communications fabric or architecture that provides for a transfer of data between different components or devices attached to the fabric or architecture.
  • a communications unit may include one or more devices used to transmit and receive data, such as a modem or a network adapter.
  • a memory may be, for example, main memory 108 or a cache such as found in north bridge and memory controller hub 102.
  • a processing unit may include one or more processors or CPUs.
  • The depicted examples in FIG. 1 are not meant to imply architectural limitations.
  • data processing system 100 also may be a tablet computer, laptop computer, or telephone device in addition to taking the form of a PDA.
  • the illustrative embodiments provide a computer implemented method, apparatus, and computer usable program code for managing data in a cache.
  • a cache priority level or priority level is set for critical data structures within an application.
  • the cache priority level is a designation, value, or other indicator that prolongs the time that the data of the critical data structures remains in the cache. In other words, the addresses of the critical data structures are aged more slowly to ensure that critical data remains cached longer.
  • Critical data structures are data structures that are critical for the performance of the application. Critical data structures may include data that is frequently used or data that needs to be accessed efficiently at any given time. By keeping the data from the critical data structures in cache for prolonged amounts of time, the application is able to achieve optimal performance.
  • Processor system 200 is an example of a processor that may be found in processor unit 106 in Figure 1.
  • processor system 200 contains fetch unit 202, decode unit 204, issue unit 206, branch unit 208, execution unit 210, and completion unit 212.
  • Processor system 200 also contains memory subsystem 214.
  • Memory subsystem 214 contains cache array 216, least recently used (LRU) array 218, LRU control 220, L2 load and store queue control 222, directory array 224, and critical structure logic 226.
  • Processor system 200 connects to host bus 228. Additionally, main memory unit 230 and bus control unit 232 also connect to host bus 228.
  • other processors and external devices 234 also connect to host bus 228.
  • fetch unit 202 fetches instructions from memory subsystem 214 or main memory unit 230 to speed up execution of a program.
  • Fetch unit 202 retrieves an instruction from memory before that instruction is needed, to avoid the processor having to wait for the memory, such as memory subsystem 214 or main memory unit 230, to answer a request for the instruction.
  • Decode unit 204 decodes an instruction for execution. In other words, decode unit 204 identifies the command to be performed, as well as operands on which the command is to be applied.
  • Issue unit 206 sends the decoded instruction to a unit for execution such as, for example, execution unit 210.
  • Execution unit 210 is an example of a unit that executes the instruction received from issue unit 206. Execution unit 210 performs operations and calculations called for by the instruction. For example, execution unit 210 may include internal units, such as a floating point unit, an arithmetic logic unit (ALU), or some other unit. Completion unit 212 validates the operations in the program order for instructions that may be executed out of order by execution unit 210. Branch unit 208 handles branches in instructions.
  • Cache array 216 contains sets for data needed by processor system 200. These sets are also called ways and are analogous to columns in the array.
  • cache array 216 is an L2 cache.
  • LRU array 218 holds bits for an N-way set associative cache.
  • A set associative cache is a cache in which different data from a secondary memory can map to the same cache entry.
  • In an 8-way set associative cache, there are 8 different ways, or sets, per entry. Therefore, eight different pieces of data can map to the same entry.
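  • To make the many-to-one mapping concrete, here is a small sketch with hypothetical cache geometry (line size, capacity, and associativity are assumptions for illustration, not values from the patent):

```python
# Hypothetical geometry: a 512 KB, 8-way set associative cache with
# 128-byte lines. Addresses whose set-index bits match fall into the
# same congruence class and compete for its 8 ways.

LINE_SIZE = 128          # bytes per cache line (assumed)
WAYS = 8                 # ways (sets, in the patent's terminology) per entry
CACHE_SIZE = 512 * 1024  # total capacity in bytes (assumed)
NUM_SETS = CACHE_SIZE // (LINE_SIZE * WAYS)  # congruence classes

def set_index(address):
    """Congruence class (cache entry) an address maps to: the bits
    just above the line offset, modulo the number of sets."""
    return (address // LINE_SIZE) % NUM_SETS

# Two addresses that differ by exactly NUM_SETS * LINE_SIZE collide:
a = 0x10000
b = a + NUM_SETS * LINE_SIZE
assert set_index(a) == set_index(b)
```

With this geometry any ninth colliding line forces a replacement decision within the set, which is where the LRU and priority logic below comes in.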
  • LRU control 220 controls aspects of the illustrative embodiments used to manage the data stored in cache array 216.
  • Critical structure logic 226 contains the cache priority table, which lists each critical data structure's address, size, and starting priority.
  • LRU array 218 includes a priority level or value, which starts at zero for non-critical data structures and uses the starting value from the cache priority table for critical data structures. For the addresses identified as critical by an application, LRU control 220 ages the data more slowly than normal. As a result, the critical data remains in cache array 216 longer than if the age of the data was increased at the normal rate.
  • the information used to age critical data structures may be specified by a cache priority subroutine and cache priority table as described in Figure 4.
  • the cache priority subroutine may be called by the operating system or by an individual application.
  • the priority level may be used as a factor to proportionately age the critical data.
  • the starting priority of a critical structure may be 8, indicating that the portion of the cache that stores the critical structure is aged at 1/8 the normal aging speed.
  • the priority level may also represent a percentage of the normal aging speed, such as eighty percent of the normal aging speed.
  • Directory array 224 stores the cache coherence information, real address, and valid bit for the data in the corresponding cache entry in cache array 216.
  • This array also has the same set-associative structure as cache array 216.
  • directory array 224 also has 8 ways. A way is also referred to as a set. This directory has a one-to-one correspondence with the entries in cache array 216. Each time cache array 216 is accessed, directory array 224 will be accessed at the same time to determine if a cache hit or miss occurs and if the entry is valid.
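  • The parallel directory/cache lookup described above can be sketched as follows (the field names and coherence-state encoding are illustrative assumptions, not the patent's hardware format):

```python
# Illustrative model: each directory entry carries the tag, a valid
# bit, and a coherence state; the cache array holds only the data.
STATES = {"shared", "exclusive", "modified"}

def lookup(directory_set, cache_set, tag):
    """Search all ways of one congruence class in the directory and,
    on a valid tag match, return the data from the same way of the
    cache array. Returns (hit, data)."""
    for way, entry in enumerate(directory_set):
        if entry["valid"] and entry["tag"] == tag:
            assert entry["state"] in STATES  # coherence state accompanies a hit
            return True, cache_set[way]
    return False, None  # cache miss

# One 8-way set with a single valid line in way 3:
directory_set = [{"valid": False, "tag": 0, "state": "shared"} for _ in range(8)]
cache_set = [None] * 8
directory_set[3] = {"valid": True, "tag": 0x1A, "state": "exclusive"}
cache_set[3] = b"critical data"

hit, data = lookup(directory_set, cache_set, 0x1A)
assert hit and data == b"critical data"
```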
  • Main memory unit 230 contains instructions and data that may be fetched or retrieved by processor system 200 for execution.
  • bus control unit 232 performs as the traffic controller for the bus, arbitrating requests and responses from the devices attached to the bus.
  • execution unit 210 may send a request and an address to memory subsystem 214 when a miss occurs in an L1 data cache (not shown) in execution unit 210.
  • execution unit 210 causes L2 load and store queue control 222 to access LRU array 218, directory array 224 and cache array 216.
  • the data in directory array 224 can be brought in by a cache miss in the L1 cache.
  • Directory array 224 returns data indicating whether the data requested in the miss in the L1 cache is located in cache array 216, which serves as an L2 cache in this example.
  • the data returned from directory array 224 includes whether a hit or miss occurred; whether the data in the way of the cache entry is valid or invalid; and the memory coherence state of the entry, such as shared, exclusive, or modified.
  • LRU array 218 returns LRU data to LRU control 220.
  • LRU control 220 updates the LRU data stored in LRU array 218.
  • cache array 216 contains the data and has no other information.
  • Directory array 224 can be viewed as the array holding all other information in the cache array, such as address, validity, and cache coherence state.
  • LRU control 220 updates the LRU data from a binary tree scheme, described herein, by writing back to LRU array 218.
  • Cache array 216 returns data to execution unit 210 in response to the hit on directory array 224.
  • a miss in directory array 224 results in execution unit 210 placing the request into L2 load and store queue control 222. Requests remain in this component until L2 load and store queue control 222 retrieves data from host bus 228.
  • LRU control 220 updates the LRU data from the binary tree scheme by writing back to LRU array 218. This update of LRU data contains the most and least recently used cache set in cache array 216.
  • LRU control 220 also forwards this data back to the L1 cache and execution unit 210.
  • LRU control 220 and critical structure logic 226 may be implemented in a single LRU control element.
  • Software architecture 300 is an exemplary software system including various modules.
  • the server or client may be a data processing system, such as data processing system 100 of Figure 1.
  • operating system 302 is utilized to provide high-level functionality to the user and to other software.
  • Such an operating system typically includes a basic input output system (BIOS).
  • Communication software 304 provides communications through an external port to a network, such as the Internet, via a physical communications link, either by directly invoking operating system functionality or by bypassing the operating system to access the hardware for communications over the network.
  • API 306 allows the user of the system, an individual, or a software routine to invoke system capabilities using a standard consistent interface without concern for how the particular functionality is implemented.
  • Network access software 308 represents any software available for allowing the system to access a network.
  • This access may be to a network, such as a local area network (LAN), wide area network (WAN), or the Internet.
  • this software may include programs, such as Web browsers.
  • Application software 310 represents any number of software applications designed to react to data through the communications port to provide the desired functionality the user seeks. Applications at this level may include those necessary to handle data, video, graphics, photos, or text which can be accessed by users of the Internet.
  • the mechanism of the illustrative embodiments may be implemented within communication software 304 in these examples.
  • Application software 310 includes data structures 312. Some of data structures 312 are critical data structures 314.
  • Critical data structures 314 are data structures that are critical for the performance of application software 310. As a result, critical data structures 314 need to stay in a cache, such as cache array 216 of Figure 2, to ensure that application software 310 achieves optimal performance.
  • Critical data structures 314 may include data that is frequently accessed or data that needs to be accessed efficiently at any given time. Keeping data that is frequently accessed in cache longer improves performance because that data is supplied to the central processing unit more quickly from cache than from main memory.
  • a software application developer may specify critical data structures 314 within data structures 312 of application software 310. Information regarding the address, size, and priority level or critical rating of each of critical data structures 314 is stored in cache priority table 316.
  • Application software 310 also includes a code or call to initiate cache priority subroutine 318 when application software 310 is started so that the values of cache priority table 316 may be stored in a hardware cache priority table.
  • the hardware cache priority table may be part of LRU control 220 or critical structure logic 226 of Figure 2.
  • Operating system 302 includes cache priority subroutine 320 for calling the new cache priority hardware instruction. Syntax for cache priority subroutines 318 and 320 may be specified by:
  • the parameters of cache priority subroutines 318 and 320 may include address, size, and starting_priority, information which may be stored in cache priority table 316, which is further described in Figure 4.
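  • As an illustration of such a subroutine, the following sketch records entries in a software-side table. The name `cache_priority` and the table layout are assumptions based on the parameters listed above, not the patent's actual syntax:

```python
# Software-side cache priority table, as would later be copied into
# the hardware table in LRU control / critical structure logic.
cache_priority_table = []

def cache_priority(address, size, starting_priority):
    """Hypothetical sketch of the subroutine: record one critical
    data structure (start address, size in bytes, starting priority)
    so hardware can age it more slowly."""
    assert starting_priority >= 0
    cache_priority_table.append(
        {"address": address, "size": size,
         "starting_priority": starting_priority})

# e.g. a 4 KB structure at a hypothetical address 0x2000, to be aged
# at 1/8 the normal rate:
cache_priority(0x2000, 4096, 8)
```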
  • FIG. 4 is an exemplary cache priority table in accordance with the illustrative embodiments.
  • Cache priority table 400 is a table, such as cache priority table 316 of Figure 3.
  • Cache priority table 400 may be part of application software 310 and includes information that may be used by a cache priority subroutine, such as cache priority subroutines 318 and 320, all of Figure 3.
  • Cache priority table 400 may include columns for data structure address 402, data structure size 404, and starting priority 406.
  • Data structure address 402 is the starting address of a critical data structure, such as critical data structures 314 of Figure 3.
  • Data structure size 404 is the size, in bytes, of the critical data structure.
  • Starting priority 406 is the initial cache priority level of the critical data structure and indicates how critical the data is. In one example, the minimum starting priority is zero and the maximum starting priority is ten. Starting priority 406 may be modified as needed.
  • For example, if the data is assigned a starting priority 406 of two, the cache would age the data at half the rate of non-critical data. If the data were given a starting priority 406, or critical rating, of ten, the cache would age the data at 1/10th the rate of non-critical data. If the data is assigned a starting priority 406 of one, the data may be aged like all other data in the cache without any preferential aging treatment. Correspondingly, a cache priority level of zero may be used to indicate that the data will be aged according to normal or default settings.
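  • The aging-rate examples above suggest a simple rule, sketched here as an assumption consistent with the half, 1/8, and 1/10th examples (the patent does not state a closed-form formula):

```python
def aging_rate(starting_priority):
    """Fraction of the normal aging speed applied to the data:
    priority p > 1 ages at 1/p of normal; 0 and 1 mean default aging.
    This mapping is an illustrative assumption."""
    if starting_priority <= 1:
        return 1.0  # normal or default aging
    return 1.0 / starting_priority

assert aging_rate(0) == 1.0    # default settings
assert aging_rate(2) == 0.5    # half the normal rate
assert aging_rate(10) == 0.1   # 1/10th the normal rate
```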
  • FIG. 5 is a flowchart for a process for establishing cache priority information in accordance with the illustrative embodiments.
  • the process begins by establishing the data to be loaded into cache (step 502).
  • the data to be loaded into cache is established by fetch unit 202 of Figure 2.
  • the data may be received from application software 310 of Figure 3.
  • the cache may be cache array 216 of Figure 2.
  • the process determines whether the data address is in a cache priority table (step 504).
  • the determination of step 504 is performed by critical structure logic 226 of Figure 2 based on cache priority table 316 of Figure 3 stored in the critical structure logic.
  • If the process determines the data address is not in the cache priority table, the process sets the cache priority level for the data equal to zero (step 506). Zero indicates that the data is of the lowest priority and ages according to normal or default settings. If the process determines the data address is in the cache priority table in step 504, the process retrieves the cache priority level for the data from the cache priority table (step 508). Step 508 may be performed by critical structure logic 226 of Figure 2 based on starting priority 406 of cache priority table 400, both of Figure 4.
  • Next, the process determines whether the cache has an empty slot for the data (step 510).
  • the slot is a designated portion of the cache. Slots of the cache are used to store the data, and the summed capacity of the slots indicates how much data the cache may hold. For example, a 1 MB cache may include slots of 128 KB.
  • Step 510 may be performed by LRU control 220 based on data and available slots in LRU array 218, both of Figure 2. If the process determines the cache has an empty slot for data, the process inserts the data in cache and sets the cache priority field (step 512) with the process terminating thereafter.
  • the cache priority field stores the cache priority level for the data, similarly to starting priority 406 of Figure 4.
  • the data is inserted by LRU control 220 into cache array 216, both of Figure 2. If the process determines the cache does not have an empty slot for the data in step 510, the process finds the least recently used slot in the cache (step 514).
  • the process determines whether the cache priority level of the least recently used (LRU) slot is greater than zero (step 516).
  • Step 516 is performed by critical structure logic 226 of Figure 2. If the process determines the cache priority level of the least recently used slot is greater than zero, the process decrements the cache priority level of the slot and marks the slot as most recently used (MRU) (step 518). Step 518 is performed by critical structure logic 226 of Figure 2.
  • the process finds the least recently used slot in the cache (step 514). Step 514 is performed by LRU control 220 of Figure 2. Steps 518 and 514 are repeated until the cache priority level of the least recently used slot is not greater than zero in step 516. If the process determines the cache priority level of the least recently used slot is not greater than zero, then the process inserts the data in the least recently used slot and sets the cache priority field (step 520) with the process terminating thereafter.
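  • The replacement flow of steps 510 through 520 can be sketched as a minimal software model. The slot representation (address/priority pairs, most recently used first) and the dict-based priority table are illustrative, not the hardware design:

```python
def cache_insert(cache, capacity, address, priority_table):
    """Insert `address` into `cache` (a list of [address, priority]
    slots ordered MRU first), giving priority-protected LRU slots a
    second chance as MRU, per Figure 5."""
    priority = priority_table.get(address, 0)   # steps 504-508
    if len(cache) < capacity:                   # step 510: empty slot?
        cache.insert(0, [address, priority])    # step 512: insert, set field
        return
    while cache[-1][1] > 0:                     # steps 514-516: LRU protected?
        cache[-1][1] -= 1                       # step 518: decrement priority...
        cache.insert(0, cache.pop())            # ...and mark the slot as MRU
    cache.pop()                                 # step 516 false: evict the LRU
    cache.insert(0, [address, priority])        # step 520: insert, set field

# A 2-slot cache where only the line at 0xA0 is critical (priority 2):
cache, table = [], {0xA0: 2}
for addr in (0xA0, 0xB0, 0xC0):
    cache_insert(cache, 2, addr, table)
# The critical line 0xA0 survived the conflict; non-critical 0xB0 was evicted.
assert [a for a, _ in cache] == [0xC0, 0xA0]
```

The loop always terminates because each pass decrements some slot's priority, so the total protection in the set strictly decreases until an unprotected LRU slot is found.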
  • Figure 6 is a flowchart for a process for establishing a cache priority level in accordance with the illustrative embodiments. Unless otherwise specified, the process of Figure 6 is implemented by application software 310 of Figure 3. The process in Figure 6 begins by initiating the application (step 602). The application may be initiated in step 602 by operating system 302 of Figure 3 based on user input. The process in Figure 6 may also occur at any time during execution including initiation or standard execution of the application.
  • The process determines whether any of the data is in an application cache priority table (step 604).
  • the cache priority subroutine sets the cache priority level for all critical data structures, such as critical data structures 314 of Figure 3.
  • the cache priority level established by the subroutine establishes the cache priority levels to be used by components, such as cache array 216, least recently used array 218, least recently used control 220, and critical structure logic 226 all of Figure 2.
  • Step 610 may be implemented by application software 310 using cache priority subroutine 318, both of Figure 3.
  • Hardware instructions specify critical data structures within an application.
  • the hardware instructions particularly specify the addresses and sizes of data that is to be aged differently.
  • the hardware instruction may also include a cache priority field specifying how the critical data is to be aged.
  • the illustrative embodiments can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements.
  • the illustrative embodiments are implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • the illustrative embodiments can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • a computer-usable or computer readable medium can be any tangible apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk.
  • Current examples of optical disks include compact disk - read only memory (CD-ROM), compact disk - read/write (CD-R/W) and DVD.
  • a data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • I/O devices including but not limited to keyboards, displays, pointing devices, etc.
  • I/O controllers can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
  • the description of the illustrative embodiments has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the illustrative embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the illustrative embodiments and the practical application, and to enable others of ordinary skill in the art to understand the illustrative embodiments with various modifications as are suited to the particular use contemplated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A computer-implemented method, apparatus, and computer-usable program code are provided for processing data in a cache and for establishing a priority level for that data in the cache. A determination is made as to whether data is designated for slower aging in the cache during execution of instructions for an application. The priority level for the data in the cache is set in response to a determination that the data is designated for slower aging. The priority level indicates that the data ages more slowly than other data without the priority level. Data so designated by an application can be identified and then aged in the cache at a slower rate than other data in the cache that is not designated for slower aging, in response to identifying the data in the cache.
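The abstract describes an age-based eviction scheme in which data an application designates for slower aging accumulates age at a reduced rate, so it survives longer in the cache. The following is an illustrative sketch only, not the patented implementation: the class name, the `slow_rate` parameter, and the age-reset-on-access behavior are all invented for illustration.

```python
class SlowAgingCache:
    """Sketch of a cache whose entries age at different rates.

    Entries flagged as slow-aging accumulate age at a fraction
    (slow_rate) of the normal rate; eviction removes the entry
    with the greatest accumulated age.
    """

    def __init__(self, capacity, slow_rate=0.25):
        self.capacity = capacity
        self.slow_rate = slow_rate   # age increment for prioritized entries
        self.entries = {}            # key -> [value, age, slow_aging flag]

    def put(self, key, value, slow_aging=False):
        # Make room by evicting the oldest entry if the cache is full.
        if key not in self.entries and len(self.entries) >= self.capacity:
            self._evict_oldest()
        self.entries[key] = [value, 0.0, slow_aging]

    def get(self, key):
        entry = self.entries.get(key)
        if entry is None:
            return None
        entry[1] = 0.0               # reset age on access (LRU-like behavior)
        return entry[0]

    def tick(self):
        # One aging cycle: flagged entries age at a reduced rate.
        for entry in self.entries.values():
            entry[1] += self.slow_rate if entry[2] else 1.0

    def _evict_oldest(self):
        victim = max(self.entries, key=lambda k: self.entries[k][1])
        del self.entries[victim]
```

Under these assumptions, a flagged entry inserted at the same time as an unflagged one accumulates only a quarter of the age per cycle, so when an eviction is forced the unflagged entry is the victim.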
PCT/EP2007/060264 2006-10-10 2007-09-27 Cache data management WO2008043670A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11/539,889 2006-10-10
US11/539,889 US20080086598A1 (en) 2006-10-10 2006-10-10 System and method for establishing cache priority for critical data structures of an application
US11/539,894 US20080086599A1 (en) 2006-10-10 2006-10-10 Method to retain critical data in a cache in order to increase application performance
US11/539,894 2006-10-10

Publications (1)

Publication Number Publication Date
WO2008043670A1 (fr) 2008-04-17

Family

ID=38984241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2007/060264 WO2008043670A1 (fr) Cache data management

Country Status (1)

Country Link
WO (1) WO2008043670A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2508962A (en) * 2012-12-13 2014-06-18 Advanced Risc Mach Ltd Retention priority based cache replacement policy
WO2015046991A1 (fr) * 2013-09-27 2015-04-02 Samsung Electronics Co., Ltd. Multi-decoding method and multi-decoder for performing the method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2276962A (en) * 1993-04-08 1994-10-12 Int Computers Ltd User-defined priority for cache replacement mechanism.
US6338115B1 (en) * 1999-02-16 2002-01-08 International Business Machines Corporation Advanced read cache management
US20040078524A1 (en) * 2002-10-16 2004-04-22 Robinson John T. Reconfigurable cache controller for nonuniform memory access computer systems
US20040083341A1 (en) * 2002-10-24 2004-04-29 Robinson John T. Weighted cache line replacement
US20050188158A1 (en) * 2004-02-25 2005-08-25 Schubert Richard P. Cache memory with improved replacement policy


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2508962A (en) * 2012-12-13 2014-06-18 Advanced Risc Mach Ltd Retention priority based cache replacement policy
GB2508962B (en) * 2012-12-13 2020-12-02 Advanced Risc Mach Ltd Retention priority based cache replacement policy
WO2015046991A1 (fr) * 2013-09-27 2015-04-02 Samsung Electronics Co., Ltd. Multi-decoding method and multi-decoder for performing the method
US9761232B2 (en) 2013-09-27 2017-09-12 Samsung Electronics Co., Ltd. Multi-decoding method and multi-decoder for performing same

Similar Documents

Publication Publication Date Title
US20080086599A1 (en) Method to retain critical data in a cache in order to increase application performance
US20080086598A1 (en) System and method for establishing cache priority for critical data structures of an application
US10896128B2 (en) Partitioning shared caches
US7516275B2 (en) Pseudo-LRU virtual counter for a locking cache
US6219760B1 (en) Cache including a prefetch way for storing cache lines and configured to move a prefetched cache line to a non-prefetch way upon access to the prefetched cache line
US8180981B2 (en) Cache coherent support for flash in a memory hierarchy
JP4486750B2 (ja) Shared cache structure for temporal and non-temporal instructions
US7437517B2 (en) Methods and arrangements to manage on-chip memory to reduce memory latency
US8935478B2 (en) Variable cache line size management
CN110865968B (zh) Multi-core processing device and method for data transfer between its cores
JP6009589B2 (ja) Apparatus and method for reducing castouts in a multi-level cache hierarchy
US7752350B2 (en) System and method for efficient implementation of software-managed cache
US8095734B2 (en) Managing cache line allocations for multiple issue processors
US20180300258A1 (en) Access rank aware cache replacement policy
JP2012522290A (ja) Method for way allocation and way locking in a cache
WO2015075076A1 (fr) Memory unit and method
JP2005174341A (ja) Multi-level cache having overlapping congruence groups of associativity sets at various cache levels
US6715035B1 (en) Cache for processing data in a memory controller and a method of use thereof to reduce first transfer latency
CN115292214A (zh) Page table prediction method, memory access operation method, electronic apparatus, and electronic device
CN114217861A (zh) Data processing method and apparatus, electronic device, and storage medium
US7882309B2 (en) Method and apparatus for handling excess data during memory access
EP2339472A2 (fr) Unité de traitement arithmétique, dispositif de traitement d'informations et procédé de contrôle de mémoire cache
US8661169B2 (en) Copying data to a cache using direct memory access
US10754791B2 (en) Software translation prefetch instructions
JP7376019B2 (ja) Termination and resumption of prefetching in an instruction cache

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07820652

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07820652

Country of ref document: EP

Kind code of ref document: A1