CN118295936B - Management method and device of cache replacement policy and electronic equipment


Info

Publication number: CN118295936B
Application number: CN202410726081.7A
Other versions: CN118295936A
Other languages: Chinese (zh)
Prior art keywords: access, cache, replacement, information, target
Inventors: 刘宇航, 周嘉鹏, 陈泓佚
Assignee: Beijing Open Source Chip Research Institute
Legal status: Active (granted)
Classification: Memory System Of A Hierarchy Structure

Abstract

The embodiment of the invention provides a method and a device for managing a cache replacement strategy and electronic equipment, and relates to the technical field of computers. The method comprises the following steps: acquiring, from a simulator, access information of a target cache group under the condition that a first replacement strategy is adopted, wherein the simulator is used for simulating the running process of a processor system; classifying the access information according to cache block addresses, and sorting the classified access information according to timestamps; drawing a first access observation diagram corresponding to the first replacement strategy based on the sorting result; and evaluating the decision performance of the first replacement strategy according to the first access observation diagram. The embodiment of the invention can provide feedback for the design of the replacement strategy, makes the running process of the cache observable, increases the interpretability of the strategy, and assists debugging and design.

Description

Management method and device of cache replacement policy and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for managing a cache replacement policy, and an electronic device.
Background
In modern processors, caches (Caches) often occupy a large percentage (up to 80% or more) of the chip area. The cache takes advantage of the temporal and spatial locality of an application's memory accesses, keeping frequently accessed blocks in the high-speed storage medium and thereby improving program performance. However, due to factors such as the filtering effect of the upper-level (L1 and L2) caches, the locality of data accesses at the L3 cache is poor, its utilization is low, and dead blocks (Dead Blocks) are numerous. A dead block is not accessed again before being replaced, resulting in wasted chip area and power consumption. At the same time, some data blocks that would have been reused are evicted from the cache before reuse for capacity reasons. The cache replacement policy is one of the important means of managing the cache; by reasonably planning the retention time of each cache block, it can increase the equivalent capacity of the cache.
To evaluate the performance and power consumption of a processor and the corresponding optimization methods, a cycle-accurate full-system simulator is typically employed to simulate the various components of the processor core and the caches. The simulator can output fairly accurate counter values for performance and power consumption assessment. Processor performance is typically evaluated with instructions per cycle (Instructions Per Cycle, IPC). In the cache hierarchy (Cache Hierarchy) subsystem, commonly used evaluation indexes are the miss rate (Miss Rate), the miss latency (Miss Latency), and the like. However, these indexes only evaluate the result of the replacement policy; they cannot reflect the decision process of the replacement policy or the access characteristics of the application, and therefore cannot provide sufficient feedback for the design of the replacement policy.
Disclosure of Invention
The embodiment of the invention provides a method and a device for managing a cache replacement policy and electronic equipment, which can solve the problem in the related art that the decision process of a replacement policy and the access characteristics of an application cannot be reflected, so that sufficient feedback cannot be provided for the design of the cache replacement policy.
In order to solve the above problems, an embodiment of the present invention discloses a method for managing a cache replacement policy, where the method includes:
Acquiring access information of a target cache group under the condition of adopting a first replacement strategy from a simulator; the simulator is used for simulating the running process of the processor system;
Classifying the access information according to the cache block address, and sorting the classified access information according to the time stamp;
drawing a first access observation diagram corresponding to the first replacement strategy based on the sorting result; the first access observation diagram is used for reflecting cache miss information and replacement block information in the target cache group when a first replacement strategy is adopted;
And evaluating the decision performance of the first replacement strategy according to the first access observation diagram.
Optionally, the access information includes information related to a cache block access event; the drawing the first access observation diagram corresponding to the first replacement policy based on the sorting result includes:
Determining the ordinate of the access point according to the cache block address corresponding to the cache block access event, and determining the abscissa of the access point according to the ordering result corresponding to the cache block access event;
Determining an identifier of the access point according to the access type of the cache block access event;
Determining the color of the identifier according to the hit condition of the cache block access event;
And drawing a first access observation diagram corresponding to the first replacement strategy according to the ordinate, the abscissa, the identifier and the color of the identifier of each access point.
Optionally, the access information further includes backfill information of a cache block backfill event; the method further comprises the steps of:
Under the condition that a cache backfilling event exists in the target cache group, determining a backfilling block address and a replacement block address according to the backfilling information;
Adding a replacement mark in the first access observation diagram according to the backfill block address and the replacement block address;
And the abscissa of the replacement mark is the same as the abscissa of the access point corresponding to the backfill block address, and the ordinate of the replacement mark is the ordinate corresponding to the replacement block address.
Optionally, the evaluating the decision performance of the first replacement policy according to the first access observation graph includes:
determining first cache miss information corresponding to the first replacement strategy according to the first access observation diagram;
Determining a target replacement block corresponding to each backfill block in the target cache group under the condition of adopting a target replacement strategy according to access trace information reflected in the first access observation diagram;
Drawing a target access observation diagram corresponding to the target replacement strategy according to the access trace information and the target replacement block;
Determining target cache set miss information corresponding to the target replacement strategy according to the target access observation diagram;
and evaluating decision performance of the first replacement policy based on the first cache miss information and the target cache set miss information.
Optionally, the obtaining access information of the target cache set from the simulator under the condition that the first replacement policy is adopted includes:
acquiring a to-be-observed cache group identifier, a first replacement strategy and a simulation instruction number;
and configuring the simulator according to the cache group identifier, the first replacement strategy and the simulation instruction number, so that the simulator executes a test program according to the first replacement strategy and the simulation instruction number, and outputting access information of a target cache group corresponding to the cache group identifier.
Optionally, the method further comprises:
drawing a second access observation diagram corresponding to a second replacement strategy;
Determining performance scores of the first replacement strategy and the second replacement strategy according to the first access observation diagram and the second access observation diagram respectively;
And updating the cache replacement strategy of the target cache group to the second replacement strategy under the condition that the performance score of the first replacement strategy is lower than that of the second replacement strategy.
On the other hand, the embodiment of the invention discloses a management device of a cache replacement strategy, which comprises the following components:
The acquisition module is used for acquiring access information of the target cache group from the simulator under the condition of adopting the first replacement strategy; the simulator is used for simulating the running process of the processor system;
the preprocessing module is used for classifying the access information according to the cache block address and sorting the classified access information according to the time stamp;
The first drawing module is used for drawing a first memory access observation diagram corresponding to the first replacement strategy based on the sorting result; the first access observation diagram is used for reflecting cache miss information and replacement block information in the target cache group when a first replacement strategy is adopted;
And the evaluation module is used for evaluating the decision performance of the first replacement strategy according to the first access observation diagram.
In still another aspect, the embodiment of the invention also discloses an electronic device, which comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus; the memory is used for storing executable instructions, and the executable instructions cause the processor to execute the above method for managing a cache replacement policy.
The embodiment of the invention also discloses a readable storage medium; when the instructions in the readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the above method for managing a cache replacement policy.
The embodiment of the invention has the following advantages:
the embodiment of the invention provides a management method of a cache replacement strategy. Using the access information of a target cache group provided by a simulator, an access observation diagram is drawn with the cache block address as the viewing angle, showing all access and cache block backfill events of the target cache group over a period of time. The backfill and replacement time of each cache block is thereby clearly shown, the decisions of the replacement strategy and their influence are shown more intuitively, and feedback is provided for the design of the replacement strategy. The embodiment of the invention makes the running process of the cache observable, increases the interpretability of the strategy, and assists debugging and design.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of an embodiment of a method of managing cache replacement policies of the present invention;
FIG. 2 is an example of a memory access observation diagram of the present invention;
FIG. 3 is an example of another memory access observation diagram of the present invention;
FIG. 4 is an example of a memory map of two alternative strategies of the present invention;
FIG. 5 is an example of a memory access observation diagram of the present invention;
FIG. 6 is an example of another memory access observation diagram of the present invention;
FIG. 7 is an example of yet another memory access observation diagram of the present invention;
FIG. 8 is a block diagram of a cache replacement policy management apparatus of the present invention;
Fig. 9 is a block diagram of an electronic device according to an example of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present invention may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type, and are not limited to the number of objects, such as the first object may be one or more. Furthermore, the term "and/or" as used in the specification and claims to describe an association of associated objects means that there may be three relationships, e.g., a and/or B, may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. The term "plurality" in embodiments of the present invention means two or more, and other adjectives are similar.
Method embodiment
Referring to FIG. 1, a flowchart illustrating steps of an embodiment of a method for managing cache replacement policies of the present invention, the method may include the steps of:
step 101, acquiring access information of a target cache group from a simulator under the condition of adopting a first replacement strategy; the simulator is used for simulating the running process of the processor system;
step 102, classifying the access information according to the cache block address, and sorting the classified access information according to the time stamp;
step 103, drawing a first access observation diagram corresponding to the first replacement policy based on the sorting result; the first access observation diagram is used for reflecting cache miss information and replacement block information in the target cache group when a first replacement strategy is adopted;
Step 104, evaluating the decision performance of the first replacement policy according to the first access observation diagram.
The management method of the cache replacement strategy provided by the embodiment of the invention can evaluate the performance of the cache replacement strategy.
Cache (Cache) is one of the important components of a computer processor, a memory with a storage speed between registers and memory. In a computer system, various storage components (including registers, caches, memories, hard disks, etc.) may be divided into different levels according to operating speed and unit cost. The closer to the processor end, the faster the working speed of the storage component, the smaller the capacity, and the higher the cost per unit capacity; the closer to the memory end, the larger the capacity of the storage component, the slower the operating speed, and the lower the cost per unit capacity. Cache typically runs slower than processor but faster than memory. By utilizing the locality principle of program execution, the data which is likely to be accessed repeatedly recently is copied into a Cache with higher working speed, and when the processor needs the data, the data can be submitted to the processor with small time delay, so that the time consumed by memory access can be effectively reduced, the working speed difference between the processor and the memory can be covered to a certain extent, and the performance of the processor is improved.
Because the capacity of the Cache is smaller, the Cache needs to be effectively managed, and data needed by the processor are put into the Cache as much as possible, so that the access failure probability of the system is reduced, the access cost is reduced, and the overall performance of the system is improved.
Currently, the mainstream cache replacement policies mainly include the random replacement policy (Random), the first-in first-out replacement policy (First In First Out, FIFO), the last-in first-out replacement policy (Last In First Out, LIFO), the least recently used replacement policy (Least Recently Used, LRU), the least frequently used replacement policy (Least Frequently Used, LFU), the adaptive replacement policy (Adaptive Replacement Cache, ARC), the bimodal re-reference interval prediction replacement policy (Bimodal Re-Reference Interval Prediction, BRRIP), and the like.
In a cache employing the ideal LRU replacement policy, each cache line maintains a timestamp counter that records the clock count of the last time the line was accessed. On each access miss, the data in the cache line with the smallest timestamp within the same cache set is evicted from the cache, and that cache line is used to store the data newly read from memory. At the same time, every access, including the insertion of new data after a miss, updates the timestamp of the corresponding cache line, thereby ensuring that the data replaced each time comes from the least recently used cache line, namely the cache line with the smallest timestamp in the same set.
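The timestamp bookkeeping described above can be illustrated with a short sketch. The following Python fragment is only a minimal, assumed illustration of this idealized LRU behaviour, not the patent's implementation; the class and field names are hypothetical.

```python
# A minimal sketch of the idealized LRU bookkeeping described above;
# the class and field names are illustrative assumptions.
class LRUSet:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.last_used = {}   # cache block address -> timestamp of the last access
        self.clock = 0

    def access(self, address):
        self.clock += 1
        hit = address in self.last_used
        if not hit and len(self.last_used) >= self.num_ways:
            # On a miss with a full set, evict the line with the smallest timestamp.
            victim = min(self.last_used, key=self.last_used.get)
            del self.last_used[victim]
        # Every access, including the fill of new data after a miss, refreshes the timestamp.
        self.last_used[address] = self.clock
        return hit
```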
The FIFO policy replaces data according to the order in which it entered the cache. The data that entered the cache earliest is replaced first, while the data that entered the cache most recently is retained.
The LFU policy replaces data according to how frequently each data item is accessed. Less frequently accessed data is replaced preferentially to make room for more commonly accessed data.
The random replacement policy is a simple and straightforward method that randomly selects one of the data blocks for replacement. Because of this randomness, the policy cannot guarantee that the most useful data is kept in the cache.
In addition, for applications with poor locality, the cache may employ a policy that does not update the timestamp of the cache line holding newly added data, i.e., the newly added data is placed in the position that will be replaced first, commonly referred to as the LRU position insertion policy (LRU Insertion Policy, LIP).
In practical applications, selecting an appropriate cache replacement policy has a significant impact on the performance of a computer system. Different application scenarios may place different requirements on the cache replacement policy. The evaluation indexes commonly used for cache replacement policies are as follows: hit rate (Hit Rate), replacement overhead (Replacement Overhead), fairness (Fairness), miss rate (Miss Rate), miss latency (Miss Latency), and the like. The hit rate is the proportion of cache accesses that find the data already in the cache; a high hit rate means that the data in the cache satisfies most access requirements, and system performance is good. The replacement overhead is the time and computing resources required to perform cache replacement operations; a lower replacement overhead improves the response speed of the system. Fairness refers to whether the cache replacement policy treats different data items fairly; if some data is replaced frequently while other data is rarely replaced, system performance may become unbalanced. The miss rate is the ratio of the number of accesses for which the required data is not found to the total number of requests; a high miss rate means that the data in the cache cannot satisfy most access requirements, and system performance is poor. The miss latency is the access latency caused by cache misses; the higher the miss latency, the worse the system performance.
It can be appreciated that these evaluation indexes are merely evaluation of the cache replacement result, and cannot reflect the decision process of the replacement policy and the access feature of the application, and cannot provide enough feedback for the replacement policy design.
According to the management method for the cache replacement policy provided by the embodiment of the invention, based on the access information of the target cache group provided by the simulator and with the cache block address as the viewing angle, a microscopic observation diagram of the access sequence within the set, namely the first access observation diagram in the present invention, is drawn. It shows all access (Access) events and cache block backfill (Cache Fill) events of the target cache group over a period of time, so that the backfill and replacement time of each cache block is clearly shown, the decisions of the replacement policy (the selected replacement block) and their influence (whether subsequent access misses of the replaced block are caused) are shown more intuitively, and feedback is provided for the design of the replacement policy.
It should be noted that, the simulator in the embodiment of the present invention is used for simulating the operation process of the processor system. A processor system includes a processor core, a cache, which typically contains multiple levels of cache, and memory. The simulator can simulate various components such as the processor core, the cache and the like and provide access information of the target cache group. The access information of the target cache group comprises access information from an upper-level cache and prefetch information of a local-level cache.
Optionally, the access information includes at least one of: the accessed cache block address, the access type, the access hit condition, the access time stamp and the replacement block address.
As an example, before performing a full-system simulation of the processor system with the simulator, the user may specify the number of a target cache set (Set) to be observed in a certain level of the cache. During the simulation, the simulator detects, in the target cache set, accesses from the upper-level cache and prefetches of the present level, and outputs the corresponding information to a specified file for saving. The accesses from the upper-level cache include prefetches sent by the upper-level cache and read-write requests from the CPU, and the present-level prefetches include prefetch requests issued by the present-level cache. For these requests, the information that needs to be recorded includes: the accessed cache block address (Address), the access type (Type), the access hit condition, the access timestamp (Timestamp), and the replacement block address. The access hit condition is either an access hit (Hit) or a miss (Miss). The replacement block address refers to the address of the cache block that is replaced when a cache block backfill occurs.
Optionally, the access type includes a processor core read-write, a prefetch of a previous level cache, a prefetch of a present level cache.
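For illustration only, one possible shape of such a per-event record is sketched below in Python; the field names and type labels are assumptions and do not correspond to an actual simulator output format.

```python
# A sketch of the per-event record implied by the fields listed above;
# field names and type labels are assumptions, not the simulator's actual format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccessRecord:
    address: int                 # accessed cache block address
    access_type: str             # e.g. "cpu_rw", "upper_prefetch" or "local_prefetch"
    hit: bool                    # True for an access hit, False for a miss
    timestamp: int               # simulation time of the access
    victim_address: Optional[int] = None  # replaced block address when a backfill occurs
```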
In the embodiment of the invention, after the access information of the target cache group is obtained from the simulator, the access information is classified according to the cache block address and arranged in time order, forming an access point diagram of the target cache group over a period of time. Different events, such as cache block backfill events, cache block miss events, and cache block hit events, are then marked on the access point diagram according to their event types, so as to obtain the first access observation diagram corresponding to the target cache group.
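A minimal sketch of this classify-and-sort preprocessing is given below; it assumes access records with the address and timestamp fields sketched earlier and is not a prescribed implementation.

```python
# A minimal sketch of the classify-and-sort preprocessing described above,
# assuming records with address and timestamp fields.
from collections import defaultdict

def preprocess(records):
    ordered = sorted(records, key=lambda r: r.timestamp)   # sort all events by timestamp
    by_address = defaultdict(list)                         # classify events by cache block address
    for rec in ordered:
        by_address[rec.address].append(rec)
    return ordered, by_address
```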
The first access observation diagram can intuitively reflect the cache miss information and replacement block information in the target cache group when the first replacement strategy is adopted. For example, from the first access observation diagram one can see which cache blocks in the target cache group are replaced, i.e., the replacement blocks, and whether replacing a certain block later causes access misses when that block is accessed again; information such as the number of cache misses in the target cache group can also be counted based on the first access observation diagram. Based on the first access observation diagram, the first replacement strategy can be evaluated from the microscopic access characteristics within the target cache group (such as whether cache blocks are reused and their reuse characteristics), providing feedback for the design of the replacement strategy.
Optionally, step 101 of obtaining access information of the target cache set from the simulator under the condition that the first replacement policy is adopted includes:
step S11, obtaining a to-be-observed cache group identifier, a first replacement strategy and a simulation instruction number;
And step S12, configuring the simulator according to the cache group identifier, the first replacement strategy and the simulation instruction number, so that the simulator executes a test program according to the first replacement strategy and the simulation instruction number, and outputting access information of a target cache group corresponding to the cache group identifier.
The simulator in the embodiments of the present invention may include, but is not limited to, a full-system simulator, a cache simulator, and the like, and the simulation may be a functional simulation or a performance simulation. Before the simulator runs, the identifier of the cache group to be observed, the first replacement strategy, and the number of simulation instructions need to be specified. The cache group identifier may include the cache level, the target cache group number, and so on. The simulator executes the test program according to the first replacement strategy and the number of simulation instructions, and outputs the access information of the target cache group, such as the access address of each access, the access request type, whether the access hits, the access timestamp, and the address of the cache block replaced when a cache block backfill occurs.
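By way of illustration, such a pre-run configuration might look like the following sketch; the keys and values are assumptions made for this example and do not describe an actual simulator interface.

```python
# An illustrative pre-run configuration; keys and values are assumptions
# for this sketch, not an actual simulator interface.
observation_config = {
    "cache_level": "L3",                  # level of the cache hierarchy to observe
    "target_set_index": 42,               # identifier of the cache set to trace
    "replacement_policy": "LRU",          # first replacement strategy under evaluation
    "max_instructions": 40_000_000,       # number of instructions to simulate
    "trace_file": "set42_lru_trace.csv",  # file the access information is written to
}
```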
In an optional embodiment of the invention, the access information includes information about a cache block access event; step 103 of drawing a first access observation map corresponding to the first replacement policy based on the sorting result includes:
Step S21, determining the ordinate of the access point according to the cache block address corresponding to the cache block access event, and determining the abscissa of the access point according to the ordering result corresponding to the cache block access event;
step S22, determining an identifier of the access point according to the access type of the cache block access event;
step S23, determining the color of the identifier according to the hit condition of the cache block access event;
and step S24, drawing a first access observation diagram corresponding to the first replacement strategy according to the ordinate, the abscissa, the identifier and the color of the identifier of each access point.
When the first access observation diagram is drawn, the ordinate of an access point can be determined according to the cache block address corresponding to the cache block access event, and the abscissa of the access point can be determined according to the ordering result corresponding to the cache block access event. For example, an access address tag may be determined from the accessed cache block address; the access address tag may be obtained by hashing the cache block address, or may be the upper bits (for example, the upper 4 bits) of the cache block address. The ordinate is then determined according to the access address tag: if the tag appears for the first time, the ordinate is the current maximum ordinate value + 1; otherwise, it equals the ordinate value already assigned to that tag. After the access information is ordered by access timestamp, the abscissa of each access point equals the abscissa value of the previous access point + 1.
Next, the identifier of the access point is determined according to the access type. The access types may include processor core read-write, prefetch of the upper-level cache, and prefetch of the present-level cache. Different access types are represented by different identifiers; for example, different marker shapes (such as a circle, a square, and a triangle) may be used to indicate CPU read/write, upper-level prefetch, and present-level prefetch, respectively. For each cache block access, a different color is used to indicate whether the access caused a miss.
And drawing a first access observation diagram corresponding to the first replacement strategy according to the ordinate, the abscissa, the identifier and the color of the identifier of each access point.
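The coordinate, identifier, and color assignment described in these steps can be sketched as follows; this is only an assumed illustration using matplotlib, and the marker shapes, colors, and address-tag derivation are hypothetical choices, not those of the patent.

```python
# A sketch of the point placement described above, using matplotlib; marker
# shapes, colors and the address-tag derivation are illustrative assumptions.
import matplotlib.pyplot as plt

MARKERS = {"cpu_rw": "o", "upper_prefetch": "s", "local_prefetch": "^"}

def plot_access_points(ax, records):
    tag_to_y = {}
    for x, rec in enumerate(records):                  # abscissa: position in the sorted order
        tag = rec.address >> 6                         # assumed address-tag derivation
        y = tag_to_y.setdefault(tag, len(tag_to_y))    # a tag seen for the first time gets the next row
        color = "tab:blue" if rec.hit else "tab:red"   # color encodes hit or miss
        ax.scatter(x, y, marker=MARKERS[rec.access_type], color=color)
    ax.set_xlabel("access order")
    ax.set_ylabel("cache block address tag")
    return tag_to_y
```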
Optionally, the access information further includes backfill information of a cache block backfill event; the method further comprises the steps of:
Step S31, determining a backfill block address and a replacement block address according to the backfill information under the condition that a cache backfill event exists in the target cache group;
And S32, adding a replacement mark in the first access observation diagram according to the backfill block address and the replacement block address.
And the abscissa of the replacement mark is the same as the abscissa of the access point corresponding to the backfill block address, and the ordinate of the replacement mark is the ordinate corresponding to the replacement block address.
In the embodiment of the invention, if a cache block backfilling event occurs in the target cache set, a replacement mark can be added in the first access observation diagram to represent a backfill block and a replacement block. The backfill block refers to a data block backfilled from a next-level cache or an internal memory to a present-level cache, and the replacement block refers to a cache block replaced by the backfill block in the cache. The cache block backfill event may be triggered by a cache block access miss or a prefetch access of this level.
Illustratively, if a cache block backfill caused by a cache block access miss or a present-level prefetch access replaces an old cache block, this is indicated in the figure by a vertical dashed line and a replacement mark "X". The abscissa of the replacement mark is the same as that of the corresponding access point, and its ordinate equals the ordinate value corresponding to the address tag of the replaced block, indicating that the cache block corresponding to the address tag on the left is swapped out.
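Continuing the plotting sketch above, adding such a replacement mark could look like the following; this is again only an illustrative assumption.

```python
# A sketch of adding the replacement mark described above; tag_to_y is the
# tag-to-row mapping built by the plotting sketch, and the styling is assumed.
def add_replacement_mark(ax, x, backfill_tag, victim_tag, tag_to_y):
    y_fill = tag_to_y[backfill_tag]    # row of the access point that triggered the backfill
    y_victim = tag_to_y[victim_tag]    # row of the cache block that is swapped out
    ax.plot([x, x], [y_fill, y_victim], linestyle="--", color="gray")  # vertical dashed line
    ax.scatter(x, y_victim, marker="x", color="black")                 # replacement mark "x"
```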
Referring to fig. 2, a memory access observation diagram provided by an embodiment of the present invention is shown. FIG. 2 shows observation diagrams of one set of a 2-way set-associative cache, drawn with the cache set and with the cache block address tags as the viewing angle, respectively. The example shows 10 accesses to three cache blocks within the set with address tags A, B, C, the corresponding cache block access sequence being "A-B-C-C-A-A-D-E-A-A". The left diagram, which takes the cache set as the viewing angle, is the common observation means at present: it shows the cache blocks stored in the cache set before and after each access, but the miss information and replacement block information of each access can only be associated with the access sequence, are very scattered, and cannot reflect the reuse of cache blocks. In the right diagram, which takes the cache block address tag as the viewing angle, the miss information and replacement block information of each access can be associated with the cache block address, and the access, backfill, replacement, and reuse of each cache block are clearly shown. In addition, the access observation diagram with the cache block address tag as the viewing angle also makes it convenient to observe the size of the program's working set during this time. The rectangular space enclosed by all address tags and all accesses within the set can be regarded as an access space-time diagram of the cache set, and the number of "X" marks indicates the number of cache misses in this period, representing the performance of the corresponding replacement policy.
In the access observation diagram with the cache block address tag (Tag) as the viewing angle, a dashed circle indicates an access miss and a solid circle indicates an access hit. It can be seen from the figure that cache block A is reused at the 5th and 6th accesses, and cache block C is reused at the 4th access. At the 3rd access, cache block C replaces cache block A during backfill, thereby causing an access miss when cache block A is reused at the 5th access.
Referring to fig. 3, another memory access observation diagram provided by an embodiment of the present invention is shown. FIG. 3 illustrates an example of an access observation diagram with prefetching. Compared with the first 6 cache set accesses illustrated in FIG. 2, a present-level prefetch access to cache block C is added before the 3rd access, so that the cache block is fetched from the lower-level cache (or memory) before the upper-level read-write access arrives, reducing upper-level read-write access misses.
Optionally, the evaluating, in step 104, decision performance of the first replacement policy according to the first access observation graph includes:
step S41, determining first cache miss information corresponding to the first replacement strategy according to the first access observation diagram;
step S42, determining a target replacement block corresponding to each backfill block in the target cache group under the condition of adopting a target replacement strategy according to access trace information reflected in the first access observation diagram;
step S43, drawing a target memory access observation diagram corresponding to the target replacement strategy according to the memory access trace information and the target replacement block;
step S44, determining target cache set miss information corresponding to the target replacement strategy according to the target access observation diagram;
Step S45, evaluating decision performance of the first replacement strategy based on the first cache miss information and the target cache set miss information.
In the embodiment of the invention, the target replacement block corresponding to each backfill block in the target cache group under the target replacement strategy can be determined according to the access trace information reflected in the first access observation diagram corresponding to the first replacement policy. The target replacement strategy refers to the theoretically optimal strategy. For example, the target replacement strategy may, each time a cache block backfill requires a replacement, select for eviction the cache block that will be used furthest in the future. That is, looking ahead from the current access in the existing trace information, cache blocks that are never accessed again in the trace are replaced first, followed by the cache block whose next access in the trace is the latest.
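As an illustrative sketch only, such a theoretically optimal (Belady-style) victim choice could be written as follows, under the assumption that the full future access trace is available:

```python
# A sketch of the theoretically optimal victim selection described above;
# it assumes the remaining access trace is known in advance.
def choose_optimal_victim(resident_blocks, trace, current_index):
    best_block, best_next_use = None, -1
    for block in resident_blocks:
        try:
            next_use = trace.index(block, current_index + 1)
        except ValueError:
            return block                      # never accessed again: replace it first
        if next_use > best_next_use:
            best_block, best_next_use = block, next_use
    return best_block                         # otherwise: the block whose next use is furthest away
```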
A target access observation diagram corresponding to the target replacement strategy is then drawn according to the target replacement blocks decided by the target replacement strategy and the access trace information. For the specific drawing process, reference may be made to the drawing process of the first access observation diagram, which is not repeated here.
The target access observation diagram can be used to determine the performance upper bound of replacement strategies and to evaluate the performance of different replacement strategies. For example, for a first replacement policy to be evaluated, the first cache miss information corresponding to the first replacement policy can be determined based on its first access observation diagram, the target cache miss information corresponding to the target replacement policy can be determined based on the target access observation diagram, and the two sets of cache miss information can then be compared; the closer each item of the first cache miss information is to the target cache miss information, the better the performance of the first replacement policy. The cache miss information may include the number of upper-level read misses, the number of upper-level write-back misses, and so on.
In addition, based on the first access observation diagram corresponding to the first replacement strategy, the number of misses of various access types in the target cache group under the first replacement strategy, such as CPU read-write misses and upper-level prefetch misses, can be counted directly, and the performance of the first replacement strategy can be evaluated based on the statistics.
Optionally, the method further comprises:
Step S51, drawing a second access observation diagram corresponding to a second replacement strategy;
Step S52, performance scores of the first replacement strategy and the second replacement strategy are respectively determined according to the first access observation diagram and the second access observation diagram;
Step S53, updating the cache replacement policy of the target cache set to the second replacement policy when the performance score of the first replacement policy is lower than the performance score of the second replacement policy.
In the embodiment of the invention, performance comparison can be performed on different replacement strategies by respectively drawing access observation diagrams corresponding to the different replacement strategies, and the replacement strategy with better performance is selected as the cache replacement strategy actually adopted in the target cache group, so that the cache replacement performance of the target cache group is improved, and the access performance of the whole computer system is improved.
The first replacement policy and the second replacement policy may be any one of the replacement policies, as long as the two replacement policies are different.
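One way such a comparison could be scored is sketched below; the choice of the critical access type and the scoring rule are assumptions made for illustration, not a rule prescribed by the invention.

```python
# A sketch of a miss-count-based comparison of two replacement strategies;
# the critical access type and the scoring rule are illustrative assumptions.
from collections import Counter

def miss_counts(records):
    return Counter(rec.access_type for rec in records if not rec.hit)

def prefer_second_policy(records_first, records_second, critical_type="cpu_rw"):
    # Switch to the second strategy when it causes fewer misses of the critical
    # access type (e.g. upper-level read requests on the processor's critical path).
    return miss_counts(records_second)[critical_type] < miss_counts(records_first)[critical_type]
```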
Illustratively, referring to fig. 4, memory access observation diagrams of two replacement strategies provided by an embodiment of the present invention are shown. The replacement decision of policy X is the same as in FIG. 2: cache block A is replaced when cache block C is backfilled at the 3rd access, causing an access miss when block A is reused at the 5th access. The comparison policy Y instead selects cache block B for replacement at the 3rd access and therefore hits when block A is reused. As can be seen from FIG. 4, policy Y incurs one fewer upper-level read-write access miss than policy X, and is also the optimal replacement policy for this access scenario.
Referring to fig. 5 to fig. 7, memory access observation diagrams provided by embodiments of the present invention are shown. Specifically, FIGS. 5-7 illustrate an example of applying the two policies LRU and BRRIP to libquantum of the SPEC 2006 test set using the method for managing cache replacement policies provided by the embodiments of the present invention. FIG. 5 is the memory access observation diagram corresponding to the LRU replacement policy, FIG. 6 is the one corresponding to the BRRIP replacement policy, and FIG. 7 is the one corresponding to the optimal replacement policy. In this example configuration, the L3 cache has an associativity of 16, forty million (40M) instructions are run, and the memory accesses under the two replacement policies are plotted at the L3 cache. As can be seen from the figures, a total of 32 different cache blocks are accessed cyclically in sequence during this period. Due to the set-associative organization of the L3 cache, one cache set (Set) can only accommodate 16 different cache blocks; BRRIP protects the 16 cache blocks accessed earlier and avoids the cache misses caused by revisiting those accesses. However, since the reuse distance (the number of different cache blocks between two accesses of the same cache block) of every cache block is 32, which is greater than the set capacity of 16, a cache block has already been replaced out of the set by the time it is accessed again, resulting in cache misses. The generated optimal policy likewise adopts the approach of retaining the first 16 cache blocks.
In this example, there are upper-level read requests and upper-level write-back requests; misses of the latter have little impact on performance because write-back requests are not on the critical path. Read requests issued by the upper-level cache, by contrast, are closely related to the data supply of the CPU, and their misses affect performance to a large extent. From the cache miss statistics, the LRU policy causes 68 upper-level read access misses, while the BRRIP policy causes only 50, which is close to the 48 misses caused by the optimal policy; its performance is therefore better.
In summary, the embodiment of the invention provides a management method for a cache replacement policy. Using the access information of the target cache group provided by the simulator, an access observation diagram is drawn with the cache block address as the viewing angle, showing all access (Access) and cache block backfill events of the target cache group over a period of time, so that the backfill and replacement time of each cache block is clearly shown, the decisions and influence of the replacement policy are shown more intuitively, and feedback is provided for the design of the replacement policy. The embodiment of the invention makes the running process of the cache observable, turns the previously opaque black-box process into a white box, increases the interpretability of the strategy, and assists debugging and design.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Device embodiment
Referring to FIG. 8, there is shown a block diagram of a cache replacement policy management apparatus of the present invention, which may specifically include:
an obtaining module 801, configured to obtain access information of a target cache set from a simulator when a first replacement policy is adopted; the simulator is used for simulating the running process of the processor system;
The preprocessing module 802 is configured to classify the access information according to the cache block address, and sort the classified access information according to the timestamp;
A first drawing module 803, configured to draw a first memory observation map corresponding to the first replacement policy based on the ordering result; the first access observation diagram is used for reflecting cache miss information and replacement block information in the target cache group when a first replacement strategy is adopted;
and the evaluation module 804 is configured to evaluate the decision performance of the first replacement policy according to the first access observation diagram.
Optionally, the access information includes information related to a cache block access event; the first drawing module includes:
the coordinate determination submodule is used for determining the ordinate of the access point according to the cache block address corresponding to the cache block access event and determining the abscissa of the access point according to the ordering result corresponding to the cache block access event;
An identifier determining submodule, configured to determine an identifier of the access point according to an access type of the cache block access event;
The color determination submodule is used for determining the color of the identifier according to the hit condition of the cache block access event;
And the first drawing submodule is used for drawing a first memory access observation diagram corresponding to the first replacement strategy according to the ordinate, the abscissa, the identifier and the color of the identifier of each access point.
Optionally, the access information further includes backfill information of a cache block backfill event; the apparatus further comprises:
the backfill determining module is used for determining a backfill block address and a replacement block address according to the backfill information under the condition that a cache backfill event exists in the target cache group;
The mark adding module is used for adding a replacement mark in the first access observation diagram according to the backfill block address and the replacement block address;
And the abscissa of the replacement mark is the same as the abscissa of the access point corresponding to the backfill block address, and the ordinate of the replacement mark is the ordinate corresponding to the replacement block address.
Optionally, the evaluation module includes:
A first determining submodule, configured to determine first cache miss information corresponding to the first replacement policy according to the first access observation diagram;
The second determining submodule is used for determining a target replacement block corresponding to each backfill block in the target cache group under the condition of adopting a target replacement strategy according to the access trace information reflected in the first access observation diagram;
The second drawing submodule is used for drawing a target memory access observation diagram corresponding to the target replacement strategy according to the memory access trace information and the target replacement block;
a third determining submodule, configured to determine target cache set miss information corresponding to the target replacement policy according to the target access observation graph;
and the evaluation sub-module is used for evaluating the decision performance of the first replacement strategy based on the first cache miss information and the target cache set miss information.
Optionally, the acquiring module includes:
the acquisition sub-module is used for acquiring a to-be-observed cache group identifier, a first replacement strategy and a simulation instruction number;
and the configuration submodule is used for configuring the simulator according to the cache group identifier, the first replacement strategy and the simulation instruction number so that the simulator executes a test program according to the first replacement strategy and the simulation instruction number and outputs access information of a target cache group corresponding to the cache group identifier.
Optionally, the apparatus further comprises:
The second drawing module is used for drawing a second memory access observation diagram corresponding to the second replacement strategy;
The score determining module is used for determining performance scores of the first replacement strategy and the second replacement strategy according to the first access observation diagram and the second access observation diagram respectively;
and the strategy updating module is used for updating the cache replacement strategy of the target cache group into the second replacement strategy under the condition that the performance score of the first replacement strategy is lower than that of the second replacement strategy.
In summary, the embodiment of the invention provides a management device for a cache replacement policy. Using the access information of the target cache group provided by the simulator, an access observation diagram is drawn with the cache block address as the viewing angle, showing all access and cache block backfill events of the target cache group over a period of time, so that the backfill and replacement time of each cache block is clearly shown, the decisions and influence of the replacement policy are shown more intuitively, and feedback is provided for the design of the replacement policy. The embodiment of the invention makes the running process of the cache observable, turns the previously opaque black-box process into a white box, increases the interpretability of the strategy, and assists debugging and design.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
The specific manner in which the various modules perform the operations in relation to the processor of the above-described embodiments have been described in detail in relation to the embodiments of the method and will not be described in detail herein.
Referring to fig. 9, a block diagram of an electronic device for cache replacement policy management according to an embodiment of the present invention is provided. As shown in fig. 9, the electronic device includes: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus; the memory is configured to store executable instructions that cause the processor to perform the method of managing cache replacement policies of the foregoing embodiments.
The processor may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable device, a transistor logic device, a hardware component, or any combination thereof. The processor may also be a combination that implements computing functions, for example, a combination comprising one or more microprocessors, or a combination of a DSP and a microprocessor.
The communication bus may include a path for transferring information between the memory and the communication interface. The communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one line is shown in fig. 9, but this does not mean there is only one bus or one type of bus.
The memory may be a ROM (Read Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium, which when executed by a processor of an electronic device (server or terminal), enables the processor to perform the method of managing cache replacement policies shown in fig. 1.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems) and computer program products according to embodiments of the invention. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises the element.
The method, apparatus, and electronic device for managing a cache replacement policy provided by the present invention have been described in detail above with reference to specific examples, which serve only to help understand the method and its core idea. Meanwhile, those skilled in the art may, in accordance with the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method of managing cache replacement policies, the method comprising:
acquiring, from a simulator, access information of a target cache group under a first replacement policy, the simulator being used for simulating the running process of a processor system;
classifying the access information by cache block address, and sorting the classified access information by timestamp;
drawing a first access observation diagram corresponding to the first replacement policy based on the sorting result, the first access observation diagram being used for reflecting cache miss information and replacement block information in the target cache group when the first replacement policy is adopted;
evaluating the decision performance of the first replacement policy according to the first access observation diagram;
wherein the sorting result comprises an access point diagram corresponding to the target cache group, and the drawing of the first access observation diagram corresponding to the first replacement policy based on the sorting result comprises:
marking different events on the access point diagram according to event type, to obtain the first access observation diagram corresponding to the target cache group;
and the evaluating of the decision performance of the first replacement policy according to the first access observation diagram comprises:
counting the number of cache misses in the target cache group based on the first access observation diagram, and evaluating the first replacement policy accordingly.
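[Illustrative sketch, not part of the claims] A minimal Python rendering of the flow recited in claim 1, under the assumption of a simple per-access record format; the field names timestamp, block_addr, access_type, and hit are invented for illustration and are not the simulator's actual output:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AccessRecord:
    timestamp: int      # simulation tick at which the access occurred
    block_addr: int     # cache block address observed in the target cache group
    access_type: str    # e.g. "load", "store", "prefetch"
    hit: bool           # True if the access hit in the target cache group

def build_observation(records):
    """Classify accesses by cache block address, sort each class by timestamp,
    and count misses as a simple proxy for the policy's decision quality."""
    by_block = defaultdict(list)
    for rec in records:
        by_block[rec.block_addr].append(rec)
    for recs in by_block.values():
        recs.sort(key=lambda r: r.timestamp)   # per-block time order
    miss_count = sum(1 for r in records if not r.hit)
    return by_block, miss_count
```

In the claimed method the miss count is read off the first access observation diagram; here it is computed directly from the records only to keep the sketch self-contained.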
2. The method of claim 1, wherein the access information includes information regarding cache block access events, and the drawing of the first access observation diagram corresponding to the first replacement policy based on the sorting result comprises:
determining the ordinate of an access point according to the cache block address corresponding to a cache block access event, and determining the abscissa of the access point according to the sorting result corresponding to the cache block access event;
determining an identifier of the access point according to the access type of the cache block access event;
determining a color of the identifier according to the hit status of the cache block access event;
and drawing the first access observation diagram corresponding to the first replacement policy according to the ordinate, the abscissa, the identifier, and the color of the identifier of each access point.
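[Illustrative sketch, not part of the claims] One way the drawing step of claim 2 could be realized with matplotlib; the particular marker and color choices below are assumptions, the claim only requires that the identifier encode the access type and the color encode the hit status:

```python
import matplotlib.pyplot as plt

MARKERS = {"load": "o", "store": "s", "prefetch": "^"}   # identifier per access type

def plot_observation(records, block_index):
    """records: iterable of AccessRecord; block_index: block address -> row index."""
    fig, ax = plt.subplots()
    ordered = sorted(records, key=lambda r: r.timestamp)
    for x, rec in enumerate(ordered):
        y = block_index[rec.block_addr]                   # ordinate: cache block row
        marker = MARKERS.get(rec.access_type, "x")        # identifier: access type
        color = "tab:green" if rec.hit else "tab:red"     # color: hit vs. miss
        ax.scatter(x, y, marker=marker, color=color, s=18)
    ax.set_xlabel("access order (sorted by timestamp)")
    ax.set_ylabel("cache block address index")
    return fig, ax
```

The block_index mapping can be built, for example, by enumerating the keys of the by_block mapping from the previous sketch.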
3. The method of claim 2, wherein the access information further comprises backfill information of cache block backfill events, and the method further comprises:
in a case where a cache backfill event exists in the target cache group, determining a backfill block address and a replacement block address according to the backfill information;
adding a replacement mark in the first access observation diagram according to the backfill block address and the replacement block address;
wherein the abscissa of the replacement mark is the same as the abscissa of the access point corresponding to the backfill block address, and the ordinate of the replacement mark is the ordinate corresponding to the replacement block address.
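[Illustrative sketch, not part of the claims] The replacement mark of claim 3 then reduces to one extra point whose abscissa is the backfilled access's x position and whose ordinate is the row of the evicted block; the marker style is an illustrative choice:

```python
def add_replacement_mark(ax, backfill_x, replaced_block_row):
    """Draw the replacement mark at the backfilled access's abscissa and the
    replaced (evicted) block's ordinate."""
    ax.scatter(backfill_x, replaced_block_row, marker="v", color="black", s=30)
```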
4. The method of claim 1, wherein the evaluating of the decision performance of the first replacement policy according to the first access observation diagram comprises:
determining first cache miss information corresponding to the first replacement policy according to the first access observation diagram;
determining, according to access trace information reflected in the first access observation diagram, a target replacement block corresponding to each backfill block in the target cache group when a target replacement policy is adopted;
drawing a target access observation diagram corresponding to the target replacement policy according to the access trace information and the target replacement blocks;
determining target cache group miss information corresponding to the target replacement policy according to the target access observation diagram;
and evaluating the decision performance of the first replacement policy based on the first cache miss information and the target cache group miss information.
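[Illustrative sketch, not part of the claims] Claim 4 compares the first policy against a target replacement policy replayed on the same access trace. The sketch below uses Belady's OPT as an example target policy (an assumption; the claim does not fix a particular one) and returns the miss count that would be read off the target access observation diagram:

```python
def replay_opt(trace, ways):
    """trace: cache block addresses of the target cache group in access order;
    ways: associativity of the group. Returns the miss count under OPT."""
    misses, resident = 0, set()
    for i, addr in enumerate(trace):
        if addr in resident:
            continue                       # hit: no replacement decision needed
        misses += 1
        if len(resident) >= ways:
            def next_use(block):
                for j in range(i + 1, len(trace)):
                    if trace[j] == block:
                        return j
                return float("inf")        # never reused: ideal victim
            resident.remove(max(resident, key=next_use))   # evict farthest next use
        resident.add(addr)
    return misses
```

The gap between this miss count and the first policy's miss count is the basis for the evaluation in the last step of the claim.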
5. The method of claim 1, wherein the acquiring, from the simulator, of the access information of the target cache group under the first replacement policy comprises:
acquiring an identifier of the cache group to be observed, the first replacement policy, and the number of instructions to simulate;
and configuring the simulator according to the cache group identifier, the first replacement policy, and the number of instructions to simulate, so that the simulator executes a test program according to the first replacement policy and the number of instructions to simulate, and outputs the access information of the target cache group corresponding to the cache group identifier.
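[Illustrative sketch, not part of the claims] A purely hypothetical configuration record matching claim 5; the key names and values are invented for illustration and do not correspond to any particular simulator's interface:

```python
# Hypothetical configuration handed to the simulator front end.
sim_config = {
    "observed_cache_group": 0x2a,       # identifier of the cache group to observe
    "replacement_policy": "drrip",      # the first replacement policy under test
    "instruction_count": 50_000_000,    # number of instructions to simulate
    "trace_output": "group_0x2a_access_trace.csv",  # where access information is dumped
}
```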
6. The method according to claim 1, wherein the method further comprises:
drawing a second access observation diagram corresponding to a second replacement policy;
determining performance scores of the first replacement policy and the second replacement policy according to the first access observation diagram and the second access observation diagram, respectively;
and updating the cache replacement policy of the target cache group to the second replacement policy in a case where the performance score of the first replacement policy is lower than that of the second replacement policy.
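[Illustrative sketch, not part of the claims] A minimal scoring rule for claim 6, assuming the performance score is simply the negated miss count taken from each access observation diagram (the claim itself does not prescribe a scoring function):

```python
def choose_policy(first_misses, second_misses):
    """Score each policy by its observed miss count (fewer misses scores higher)
    and keep the first policy unless the second one scores strictly better."""
    score_first, score_second = -first_misses, -second_misses
    return "second" if score_second > score_first else "first"
```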
7. An apparatus for managing cache replacement policies, the apparatus comprising:
an acquisition module, configured to acquire, from a simulator, access information of a target cache group under a first replacement policy, the simulator being used for simulating the running process of a processor system;
a preprocessing module, configured to classify the access information by cache block address and sort the classified access information by timestamp;
a first drawing module, configured to draw a first access observation diagram corresponding to the first replacement policy based on the sorting result, the first access observation diagram being used for reflecting cache miss information and replacement block information in the target cache group when the first replacement policy is adopted;
an evaluation module, configured to evaluate the decision performance of the first replacement policy according to the first access observation diagram;
wherein the sorting result comprises an access point diagram corresponding to the target cache group, and the first drawing module is specifically configured to:
mark different events on the access point diagram according to event type, to obtain the first access observation diagram corresponding to the target cache group;
and the evaluation module is specifically configured to:
count the number of cache misses in the target cache group based on the first access observation diagram, and evaluate the first replacement policy accordingly.
8. The apparatus of claim 7, wherein the access information includes information regarding cache block access events, and the first drawing module comprises:
a coordinate determination submodule, configured to determine the ordinate of an access point according to the cache block address corresponding to a cache block access event, and determine the abscissa of the access point according to the sorting result corresponding to the cache block access event;
an identifier determination submodule, configured to determine an identifier of the access point according to the access type of the cache block access event;
a color determination submodule, configured to determine a color of the identifier according to the hit status of the cache block access event;
and a drawing submodule, configured to draw the first access observation diagram corresponding to the first replacement policy according to the ordinate, the abscissa, the identifier, and the color of the identifier of each access point.
9. An electronic device, comprising a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with each other via the communication bus; the memory is configured to store executable instructions that cause the processor to perform the method of managing cache replacement policies according to any one of claims 1 to 6.
10. A readable storage medium, characterized in that instructions in the readable storage medium, when executed by a processor of an electronic device, enable the processor to perform the method of managing cache replacement policies according to any one of claims 1 to 6.
CN202410726081.7A 2024-06-06 2024-06-06 Management method and device of cache replacement policy and electronic equipment Active CN118295936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410726081.7A CN118295936B (en) 2024-06-06 2024-06-06 Management method and device of cache replacement policy and electronic equipment


Publications (2)

Publication Number Publication Date
CN118295936A CN118295936A (en) 2024-07-05
CN118295936B true CN118295936B (en) 2024-08-02

Family

ID=91678150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410726081.7A Active CN118295936B (en) 2024-06-06 2024-06-06 Management method and device of cache replacement policy and electronic equipment

Country Status (1)

Country Link
CN (1) CN118295936B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157605A (en) * 2021-03-31 2021-07-23 西安交通大学 Resource allocation method and system for two-level cache, storage medium and computing device
CN113297098A (en) * 2021-05-24 2021-08-24 北京工业大学 High-performance-oriented intelligent cache replacement strategy adaptive to prefetching

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10185666B2 (en) * 2015-12-15 2019-01-22 Facebook, Inc. Item-wise simulation in a block cache where data eviction places data into comparable score in comparable section in the block cache
WO2022160321A1 (en) * 2021-01-30 2022-08-04 华为技术有限公司 Method and apparatus for accessing memory
CN117938683A (en) * 2024-01-25 2024-04-26 中国科学技术大学 Information center network-oriented collaborative cache test platform and automatic test method


Also Published As

Publication number Publication date
CN118295936A (en) 2024-07-05

Similar Documents

Publication Publication Date Title
Wu et al. Efficient metadata management for irregular data prefetching
CN110297787B (en) Method, device and equipment for accessing memory by I/O equipment
Liang et al. STEP: Sequentiality and thrashing detection based prefetching to improve performance of networked storage servers
CN102640124A (en) Store aware prefetching for a datastream
Franey et al. Tag tables
US20210182214A1 (en) Prefetch level demotion
US20170193055A1 (en) Method and apparatus for data mining from core traces
CN117609110B (en) Caching method, cache, electronic device and readable storage medium
US20240264951A1 (en) Logging cache line lifetime hints when recording bit-accurate trace
CN115495394A (en) Data prefetching method and data prefetching device
CN118295936B (en) Management method and device of cache replacement policy and electronic equipment
Liu et al. FLAP: Flash-aware prefetching for improving SSD-based disk cache
Dybdahl et al. Enhancing last-level cache performance by block bypassing and early miss determination
KR20180072345A (en) Prefetching method and apparatus for pages
CN113190350B (en) LLC (logical Link control) distribution method for mixed deployment of off-line containers
US20240193092A1 (en) Processor support for using cache way-locking to simultaneously record plural execution contexts into independent execution traces
US8191067B2 (en) Method and apparatus for establishing a bound on the effect of task interference in a cache memory
CN114817085A (en) Memory simulation system, method, electronic equipment and storage medium
Sato et al. An accurate simulator of cache-line conflicts to exploit the underlying cache performance
CN113778693B (en) Cache operation method, cache operation device, electronic equipment and processor
CN109213698A (en) VIVT cache access method, arbitration unit and processor
CN118550853A (en) Cache replacement method and device, electronic equipment and readable storage medium
LU500060B1 (en) Processor support for using memory page markings as logging cues to simultaneously record plural execution contexts into independent execution traces
Chang et al. PARC: A novel OS cache manager
Li et al. Algorithm-Switching-Based Last-Level Cache Structure with Hybrid Main Memory Architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant