WO2017117734A1 - Cache management method, cache controller and computer system - Google Patents
Cache management method, cache controller and computer system
- Publication number
- WO2017117734A1 (PCT/CN2016/070230)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- cache
- cache line
- access frequency
- controller
- line
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/121—Replacement control using replacement algorithms
- G06F12/122—Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
- G06F12/0804—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with main memory updating
- G06F12/0871—Allocation or management of cache space
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F2212/1021—Providing a specific technical effect: hit rate improvement
Definitions
- the present invention relates to the field of communications, and in particular, to a cache management method, a cache controller, and a computer system.
- the main idea of Cache technology is to place commonly used data from a storage medium (such as a disk) into the Cache. Since the Cache read and write speed is much higher than that of the disk, the access efficiency of the entire system can be improved.
- the Cache has relatively good performance but limited capacity, so it can only hold part of the accessed data. To make the Cache work well, the files involved in upcoming read and write operations should hit in the Cache as often as possible. How to choose the data to cache (Cache insertion) and how to choose the data to evict (Cache replacement) are the two main aspects of Cache management.
- the prior art provides a statistics-based Cache management algorithm that adds a counter to each cache line, which counts the number of access requests to that cache line within a certain time interval, that is, the access frequency of the cache line.
- based on this access counter, the prior art implements two cache line replacement algorithms.
- in the first algorithm, the time interval is defined as the interval between two consecutive accesses to the cache line; when the access counter exceeds a certain threshold Δthd, the cache line becomes a replacement target.
- in the second algorithm, the time interval is defined as the time from when the cache line was inserted into the Cache to the current time; likewise, when the access counter exceeds a certain threshold LT, the cache line becomes a replacement target.
- in the prior art, if the data currently to be read or written does not hit any cache line in the Cache, the controller extracts the data from the storage medium as a new cache line to be placed into the Cache. If the number of cache lines stored in the Cache has already reached its upper limit, one of the existing cache lines must be deleted.
- the controller traverses the counters of all existing cache lines to learn the number of access requests or the access time interval of each cache line, selects the cache line with the fewest access requests or the longest access interval, deletes that cache line, and places the new cache line into the Cache.
- therefore, in the prior art, each cache line replacement requires the controller to traverse all existing cache lines and select a cache line to be replaced according to a certain calculation rule, which causes considerable system overhead and reduces the efficiency of cache line replacement.
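- for contrast with the scheme described later, the following is a minimal Python sketch of this prior-art behaviour, assuming a simple per-line counter; the CacheLine structure and function name are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of the prior-art replacement step: on every miss with a
# full cache, all counters are traversed to find the least-accessed line.
from dataclasses import dataclass

@dataclass
class CacheLine:
    tag: int
    access_count: int  # incremented on every access within the measured interval

def choose_victim_by_traversal(lines):
    """Scan every cache line and return the one with the fewest accesses."""
    victim = lines[0]
    for line in lines[1:]:          # O(number of cache lines) on every miss
        if line.access_count < victim.access_count:
            victim = line
    return victim

if __name__ == "__main__":
    cache = [CacheLine(tag=t, access_count=c) for t, c in [(0, 7), (1, 2), (2, 9)]]
    print(choose_victim_by_traversal(cache).tag)  # -> 1, the least-accessed line
```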
- the embodiment of the invention provides a cache management method, a cache controller and a computer system, which can effectively reduce the system overhead when the cache line is replaced and improve the efficiency of the cache line replacement.
- a first aspect of an embodiment of the present invention provides a cache management method applied to a computer system.
- in the method, the cache controller acquires an operation instruction sent by the CPU, where the operation instruction carries a destination address, the destination address being an address in the memory to be accessed by the CPU. If the cache controller does not find a cache line matching the destination address among the cache lines in the cache of the computer system, that is, the destination address misses, and no idle cache line exists in the cache at this time, the following replacement is performed.
- the cache controller selects the cache line to be replaced from a pre-acquired replacement set. It should be noted that the replacement set includes at least two cache lines.
- the cache controller eliminates the selected cache line from the cache. After that, the cache controller stores the cache line obtained according to the destination address into the cache to complete the replacement.
- in the embodiment of the present invention, when the destination address misses, the cache controller can directly pick the cache line to be replaced from the replacement set. Unlike the prior art, in which every miss requires traversing all cache lines to find a replacement candidate, this effectively reduces the system overhead of cache line replacement and improves efficiency.
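- the miss path described above can be sketched as follows in Python; the class, field and parameter names are assumptions made for illustration, and the sketch deliberately omits how the replacement set is built:

```python
# Minimal sketch of the miss path: the controller keeps a pre-built replacement
# set and, on a miss with no idle line, evicts a member of that set instead of
# traversing the whole cache.
class CacheController:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}              # destination address -> cached data
        self.replacement_set = []    # pre-selected candidate addresses

    def access(self, dest_addr, fetch_from_memory):
        if dest_addr in self.lines:                 # hit
            return self.lines[dest_addr]
        if len(self.lines) < self.capacity:         # idle cache line available
            self.lines[dest_addr] = fetch_from_memory(dest_addr)
            return self.lines[dest_addr]
        victim = self.replacement_set.pop(0)        # no traversal of the cache
        self.lines.pop(victim, None)                # eliminate the victim line
        self.lines[dest_addr] = fetch_from_memory(dest_addr)
        return self.lines[dest_addr]

if __name__ == "__main__":
    ctrl = CacheController(capacity=2)
    ctrl.replacement_set = [0x10]                   # assume it was selected earlier
    for addr in (0x10, 0x20, 0x30):                 # third access forces a replacement
        ctrl.access(addr, lambda a: f"data@{a:#x}")
    print(sorted(hex(a) for a in ctrl.lines))       # ['0x20', '0x30']
```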
- in a possible design, before selecting the cache line to be replaced from the replacement set, the cache controller obtains the access frequency count of each cache line in the cache in advance and divides the access frequency count range of the cache lines into multiple access frequency segments according to a preset partitioning strategy.
- after determining the access frequency segment to which each cache line belongs according to its access frequency count, the cache controller selects cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments according to the number M of cache lines to be replaced, to obtain the replacement set.
- M is an integer not less than 2.
- optionally, in the embodiment of the present invention, the cache controller may also skip dividing the range into multiple access frequency segments and directly select M cache lines to be replaced from the cache lines in the cache to obtain the replacement set.
- in the embodiment of the present invention, grouping the cache lines into multiple segments allows the cache controller to select the cache lines to be replaced in units of segments. This avoids comparing the access frequency counts of cache lines one by one in large batches when selecting lines to replace, reducing the overhead.
- in a possible design, the cache controller selecting cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments according to the number M includes: the cache controller selects, in ascending order of the access frequency count ranges corresponding to the segments, the cache lines belonging to each access frequency segment in turn, until the number of selected cache lines equals M.
- in the embodiment of the present invention, a lower access frequency count indicates that the cache line has been accessed fewer times and/or has not been accessed for longer, so such a cache line can be replaced preferentially. By selecting the cache lines of each access frequency segment in ascending order of the count ranges, the cache controller can pick an optimal replacement set.
- the number M of cache lines to be replaced is determined according to the elimination ratio R and the total number of cache lines in the cache, where M is the product of the elimination ratio R and the total number of Cache lines.
- in yet another possible design, the cache controller may also periodically monitor an elimination frequency parameter, which includes at least one of the miss rate and the traversal frequency of the cache lines in the cache.
- when the elimination frequency parameter exceeds a first threshold, the cache controller adjusts the elimination ratio R to an elimination ratio R1, where the elimination ratio R1 is greater than the elimination ratio R.
- when the elimination frequency parameter is less than a second threshold, the cache controller adjusts the elimination ratio R to an elimination ratio R2, where the elimination ratio R2 is smaller than the elimination ratio R, and the second threshold here is smaller than the first threshold.
- in the embodiment of the present invention, the cache controller dynamically adjusts the elimination ratio R according to the cache line miss rate and the traversal rate of the cache lines. When the miss rate is high and the cache lines are traversed frequently, increasing R increases the number of cache lines to be replaced in the replacement set; conversely, when the miss rate is low and the cache lines are rarely traversed, R can be reduced to shrink the replacement set. This avoids the overhead of frequently selecting cache lines to regenerate the replacement set.
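- the adjustment rule can be sketched as below; the threshold values and the step A are assumptions for illustration, the patent does not fix them:

```python
# Sketch of the adjustment rule: the elimination ratio R grows when the monitored
# elimination frequency parameter (e.g. the miss rate) exceeds the first threshold
# and shrinks when it falls below the (smaller) second threshold.
def adjust_elimination_ratio(r, elim_freq, first_threshold=0.30,
                             second_threshold=0.05, step=0.10):
    """Return R1 > R above the first threshold, R2 < R below the second one."""
    if elim_freq > first_threshold:          # many misses / frequent traversals
        return min(1.0, r + step)            # R1: enlarge the replacement set
    if elim_freq < second_threshold:         # cache is performing well
        return max(step, r - step)           # R2: shrink the replacement set
    return r                                 # otherwise keep monitoring

if __name__ == "__main__":
    r = 0.5
    for miss_rate in (0.40, 0.20, 0.02):
        r = adjust_elimination_ratio(r, miss_rate)
        print(round(r, 2))                   # 0.6, 0.6, 0.5
```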
- in a possible design, when selecting the cache line to be replaced from the replacement set, the cache controller selects it in ascending order of the access frequency counts of the cache lines in the replacement set, so as to achieve optimal cache line utilization.
- in yet another possible design, after obtaining the replacement set, the cache controller may also monitor the access frequency counts of the cache lines belonging to the replacement set, and eliminate from the replacement set any cache line whose access frequency count exceeds a third threshold within a preset time.
- in other words, after obtaining the replacement set, the cache controller may continue to monitor how often the cache lines in the replacement set are hit. If the access frequency count of a cache line in the replacement set exceeds the third threshold within the preset time, the cache controller may consider that the cache line is being hit by the destination addresses of subsequently received operation instructions and need not be eliminated. In this way, the cache lines in the replacement set can be updated, and cache lines that are still being accessed are not eliminated by mistake.
- in a possible design, the cache controller may obtain the access frequency count of each cache line from the number of accesses to that cache line alone, or from both the number of accesses and the access time.
- a second aspect of the present invention provides a cache controller comprising means for performing the method of the first aspect described above and the various possible designs of the first aspect.
- a third aspect of the present invention provides a computer system including a processor, a cache, and a cache controller.
- the processor is used to send operational instructions.
- a cache controller for use in performing the method of the first aspect described above and the various possible designs of the first aspect
- in the embodiment of the present invention, the cache controller acquires an operation instruction carrying a destination address, the destination address being an address in the memory to be accessed by the operation instruction. When the destination address does not hit any cache line in the cache of the computer system and the cache contains no idle cache line, the cache controller selects the cache line to be replaced from the replacement set, where the replacement set contains at least two cache lines; the cache controller eliminates the cache line to be replaced from the cache and stores the cache line obtained according to the destination address in the cache. Thus, in the process of cache line replacement, the cache controller only needs to pick the cache line to be replaced from the replacement set, and since the replacement set is selected in advance, the efficiency of cache line replacement is effectively improved.
- in a fourth aspect, the present application provides a computer program product comprising a computer readable storage medium storing program code, the program code comprising instructions for performing at least one of the methods described in the first aspect and the various possible designs of the first aspect.
- FIG. 1 is a schematic diagram of a computer system according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of a cache management method according to an embodiment of the present invention.
- FIG. 3 is another schematic diagram of a cache management method according to an embodiment of the present disclosure.
- FIG. 4 is a schematic diagram of a cache controller according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram of a cache controller according to an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of another computer system according to an embodiment of the present invention.
- the embodiment of the invention provides a cache management method, a cache controller and a computer system, which can effectively reduce the system overhead when the cache line is replaced and improve the efficiency of the cache line replacement.
- FIG. 1 is a structural diagram of a computer system according to an embodiment of the present invention.
- the Cache is a small-capacity memory between the CPU and main memory.
- the access speed is faster than the main memory and is close to the CPU. It can provide instructions and data to the CPU at high speed, improving the execution speed of the program.
- Cache technology is an important technology used to solve the speed mismatch between CPU and main memory.
- the Cache is a buffer for main memory and is composed of high-speed static random access memory (SRAM). The Cache can be built into the central processing unit (CPU), built into the memory, or placed between the CPU and the memory as a separate entity. All control logic of the Cache is implemented by an internal cache controller. As the integration density of semiconductor devices keeps increasing, multi-level Cache systems with two or more levels have emerged, such as the L1 Cache (level 1 cache), L2 Cache (level 2 cache) and L3 Cache (level 3 cache).
- the current Cache has a small capacity, and the content it stores is only a subset of the main memory content. Data exchange between the CPU and the Cache is in units of words, while data exchange between the Cache and main memory is in units of cache lines, where a cache line consists of several fixed-length words. When the CPU reads a word from main memory, the main memory address of the word is sent to both the Cache and main memory, and the Cache control logic judges from the address whether the word currently exists in the Cache: if it does, the word is immediately transferred from the Cache to the CPU; if it does not, the word is read from main memory to the CPU using a main memory read cycle, the entire cache line containing the word is read from main memory into the Cache, and a certain replacement strategy is used to replace one of the cache lines in the Cache, the replacement algorithm being implemented by the Cache management logic.
- in the prior art, when replacing a cache line, the controller traverses all existing cache lines to learn the number of access requests or the access time interval of each cache line, selects the cache line with the fewest access requests or the longest access interval, deletes that cache line, and places the new cache line into the Cache. This incurs a large system overhead and is not efficient.
- the embodiment of the present invention provides a cache management method in which, during each cache line replacement, it is not necessary to traverse all existing cache lines or to compute which cache line to replace, which effectively saves system overhead and improves the efficiency of cache line replacement.
- for ease of understanding, the specific flow in the embodiment of the present invention is described in detail below using the cache management method of the computer system shown in FIG. 1. It can be understood that the cache management method described in FIG. 2 can be applied to the Cache shown in FIG. 1.
- an embodiment of a cache management method in an embodiment of the present invention includes:
- 201. the cache controller allocates a mark area for each cache line;
- in this embodiment, the cache controller may allocate a data area and a mark area for each cache line. The data area may be used to store the actual file data, and the mark area may store the file number, offset and length information corresponding to the data area, or data address information, which is not limited here.
- it should be noted that the storage space of each data area may change dynamically or remain static. In practical applications, to allow the storage space of the data area to change dynamically, the cache controller may record the valid character length of the data area in the mark area, which is not limited here.
- in this embodiment, the cache controller may add an access count counter and an access time counter to each mark area. The access count counter may be used to count how many times the cache line has been accessed, and the access time counter may be used to record the time at which the cache line was most recently accessed; this time may be identified by an absolute time value or by a number of time periods, which is not limited here.
- it should be noted that the cache controller may count only the number of accesses for each cache line, or may count both the number of accesses and the access time for each cache line, which is not limited here.
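- as a rough illustration of such a mark area, the following Python sketch groups the metadata and the two counters; the field names are assumptions for the sketch, not taken from the patent:

```python
# Illustrative layout of a per-line mark area: file/offset metadata plus an
# access-count counter and a last-access time counter.
import time
from dataclasses import dataclass, field

@dataclass
class MarkArea:
    file_number: int = 0
    offset: int = 0
    length: int = 0                  # valid character length of the data area
    access_count: int = 0            # how many times the line has been accessed
    last_access_time: float = field(default_factory=time.monotonic)

    def record_access(self):
        self.access_count += 1
        self.last_access_time = time.monotonic()

if __name__ == "__main__":
    mark = MarkArea(file_number=3, offset=4096, length=512)
    mark.record_access()
    print(mark.access_count)         # 1
```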
- in the embodiment of the present invention, the Cache may be built into the CPU, built into the memory, or exist as a separate entity. Such an entity may include a cache controller and cache lines, and the cache lines may be composed of static random access memory (SRAM), which is not limited here.
- 202. the cache controller obtains the access frequency count of each cache line;
- in this embodiment, the cache controller may obtain the access frequency count of each cache line by reading the mark area of each cache line. The access frequency count may be based on at least one of the number of accesses and the access time, which is not limited here.
- for example, in one case the access frequency count of each cache line can be obtained from the number of accesses alone; the number of accesses to the cache line is then used directly as its access frequency count.
- in another case, the access frequency count of each cache line can be obtained from both the number of accesses and the access time. For example, in a practical application, the cache controller may use the access time to adjust the access frequency count. One implementation is: the cache controller maps access times to time slices, where each time slice covers a short period of time, computes the difference between the current time slice and the time slice in which the cache line was most recently accessed, and adjusts the access frequency count according to this difference (for example: access frequency count = current number of accesses - time slice difference * Delta, where Delta can be set statically based on an empirical value).
- it should be noted that the cache controller may update the access count or access time of a cache line after the cache line is accessed. In practical applications, the access time may be updated by recording the current access time, or by resetting the access time counter to zero and restarting the count, which is not limited here.
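- the time-slice adjustment described above can be written out as a small sketch; the slice length and Delta are assumed, empirically chosen values:

```python
# Sketch of the time-slice adjustment:
#   access frequency count = current access count - time-slice difference * Delta
TIME_SLICE = 1.0   # seconds per time slice (assumption)
DELTA = 2          # decay per elapsed slice (assumption)

def access_frequency_count(access_count, last_access_time, now):
    """Age the raw access count by the number of time slices since the last access."""
    slice_diff = int((now - last_access_time) // TIME_SLICE)
    return max(0, access_count - slice_diff * DELTA)

if __name__ == "__main__":
    # A line accessed 10 times but idle for 3 slices decays to 10 - 3*2 = 4.
    print(access_frequency_count(access_count=10, last_access_time=0.0, now=3.2))
```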
- 203. the cache controller determines the access frequency segments;
- in this embodiment, the cache controller may determine multiple access frequency segments according to the access frequency count of each cache line in the cache and a preset partitioning strategy, where each access frequency segment corresponds to a different access frequency count range.
- in this embodiment, the preset partitioning strategy may be that the cache controller first determines a parameter upper limit and a parameter lower limit based on the access frequency counts of the cache lines. The lower limit may be 0 and the upper limit may be access_times_max. It should be noted that, in practical applications, access_times_max may be a value preset according to user experience, or may be obtained by the cache controller from statistics of the access frequency counts of the cache lines, which is not limited here.
- it should be noted that, in this embodiment, if the access frequency count of a cache line is greater than access_times_max, the cache controller may determine that this cache line does not need to be replaced.
- under the preset partitioning strategy of this embodiment, after determining the parameter upper limit and lower limit, the cache controller may divide the range from the lower limit 0 to the upper limit access_times_max into N access frequency segments, where N may be a value input in advance by the user. It can be understood that, in practical applications, the size of each access frequency segment may be a fixed value access_times_max/N, or may be a random value generated by the cache controller or a value input in advance by the user, which is not limited here.
- it should be noted that, in practical applications, the sizes of the access frequency segments sum to access_times_max.
- in this embodiment, the cache controller may further establish a data structure used to record, for each access frequency segment, the number of cache lines whose access frequency counts fall into that segment; alternatively, one such data structure may be established for each access frequency segment, which is not limited here.
- 204. the cache controller determines the access frequency segment to which each cache line belongs;
- in this embodiment, the cache controller may traverse the cache lines to determine the target access frequency count range to which the access frequency count of each cache line belongs, and thereby determine the target access frequency segment corresponding to that range. It should be noted that the cache controller may use the above data structure to record the number of cache lines contained in each access frequency segment, so as to count the cache lines of each segment.
- it should be noted that, in practical applications, each time the cache controller determines that a cache line belongs to an access frequency segment, it can adjust the data structure corresponding to that segment accordingly; it can be understood that the data structure corresponding to the segment can simply be incremented by 1.
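- a minimal sketch of steps 203 and 204 is given below, assuming N equal-sized segments and example values for access_times_max and N; boundary handling is simplified and the function name is illustrative:

```python
# Sketch: cut the count range [0, access_times_max] into N equal segments and
# record which cache lines (and how many) fall into each segment.
def build_segments(freq_counts, access_times_max=30, n_segments=3):
    """Return, per segment, the list of cache-line ids whose counts fall in it."""
    seg_size = access_times_max / n_segments
    segments = [[] for _ in range(n_segments)]
    for line_id, count in freq_counts.items():
        if count > access_times_max:          # such lines never need replacing
            continue
        idx = min(n_segments - 1, int(count // seg_size))
        segments[idx].append(line_id)
    return segments

if __name__ == "__main__":
    counts = {"A1": 2, "A2": 5, "A3": 9, "A4": 13, "A5": 18, "A6": 27, "A7": 35}
    for i, seg in enumerate(build_segments(counts), start=1):
        print(f"segment #{i}: {seg}")
    # segment #1: A1-A3, segment #2: A4-A5, segment #3: A6; A7 exceeds the upper limit
```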
- the cache controller determines the number of cache lines to be replaced M
- 205. the cache controller determines the number M of cache lines to be replaced;
- in this embodiment, the cache controller may pre-select M replaceable cache lines according to the elimination ratio R. The cache lines to be replaced may be those with low access frequency counts, and the number M may be the product of the elimination ratio R and the total number of cache lines, where M is an integer not less than 2. It should be noted that, in practical applications, the elimination ratio R may be a fixed value or may be dynamically adjusted by the cache controller, which is not limited here.
- in this embodiment, the cache controller may monitor the elimination frequency parameter in real time to determine whether it exceeds the first threshold. If so, the cache controller may adjust the elimination ratio R to the elimination ratio R1; if not, the cache controller may further determine whether the elimination frequency parameter is less than the second threshold, and if it is, adjust the elimination ratio R to the elimination ratio R2. It should be noted that if it is not less than the second threshold, the cache controller may simply continue to monitor the elimination frequency parameter. The first threshold and the second threshold may be values preset by the user, and the second threshold is smaller than the first threshold.
- the cache controller may increase the elimination ratio R by A to obtain the elimination ratio R1, and may reduce the elimination ratio R by A to obtain the elimination ratio R2, and the value A may be a value preset by the user.
- in this embodiment, the elimination frequency parameter may include at least one of the miss rate and the traversal frequency of the cache lines in the cache, and the cache controller may return to monitoring the elimination frequency parameter after each adjustment of the elimination ratio R.
- 206. the cache controller determines the replacement set;
- in this embodiment, the cache controller may accumulate the cache lines of each access frequency segment into the replacement set in ascending order of the access frequency count ranges, until the number of cache lines in the replacement set equals M. It can be understood that, in practical applications, accumulating the cache lines of the access frequency segments may include the following steps:
- 1) the cache controller may determine, as the accumulation access frequency segment, the segment whose access frequency count range has the lowest values in the access frequency segment group, where the segment group contains all access frequency segments;
- 2) the cache controller may remove the accumulation access frequency segment from the access frequency segment group;
- 3) the cache controller may accumulate the cache lines in the accumulation access frequency segment into the replacement set;
- 4) the cache controller may determine whether the number of candidate cache lines in the replacement set equals M. If it equals M, the flow can end; if it is less than M, the cache controller determines whether the next access frequency segment to be accumulated into the replacement set is the critical access frequency segment. If it is not, steps 1) to 4) are repeated; if it is, X cache lines may be randomly selected from the critical access frequency segment and accumulated into the replacement set, where X is the difference between M and the number of cache lines already accumulated into the replacement set. It should be noted that the critical access frequency segment is the segment for which, before its cache lines are accumulated, the number of candidate cache lines in the replacement set is less than M, and after its cache lines are accumulated, the number of candidate cache lines in the replacement set would be greater than M.
- it should be noted that, if the accumulated number in the current replacement set does not reach M, the cache controller may continue to traverse the access frequency segments until the accumulated number is greater than or equal to M.
- it should be noted that, in practical applications, the cache controller may instead first accumulate access frequency segments into a set of eliminated range segments, then traverse the cache lines to pick out those belonging to the eliminated access frequency segments, and determine the replacement set by taking the selected cache lines as the cache lines of the replacement set.
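- steps 1) to 4) can be sketched as follows; the inputs are illustrative, and lines are taken at random from the critical segment as the text allows:

```python
# Sketch of replacement-set construction: segments are consumed from the lowest
# count range upward; from the critical segment, only X = M - already-accumulated
# lines are taken at random.
import random

def build_replacement_set(segments, m):
    """segments: per-segment lists of line ids, ordered from lowest count range."""
    replacement_set = []
    for segment in segments:                       # ascending count ranges
        remaining = m - len(replacement_set)
        if remaining <= 0:
            break
        if len(segment) <= remaining:              # whole segment fits
            replacement_set.extend(segment)
        else:                                      # critical segment: pick X at random
            replacement_set.extend(random.sample(segment, remaining))
    return replacement_set

if __name__ == "__main__":
    random.seed(0)
    segments = [["A1", "A2", "A3"], ["A4", "A5", "A6"], ["A7", "A8"]]
    print(build_replacement_set(segments, m=5))    # A1-A3 plus two lines from segment #2
```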
- 207. the cache controller acquires an operation instruction;
- the cache controller may receive an operation instruction sent by the CPU, where the operation instruction may carry a destination address, where the destination address may be an address in a memory to be accessed by the operation instruction.
- 208. the cache controller determines whether the destination address hits in the Cache; if so, step 209 is performed, and if not, step 210 is performed;
- when the CPU reads a word from main memory, the main memory address of the word can be sent to both the Cache and main memory; the cache controller can then determine from this address whether the word currently exists in the Cache. It should be noted that, in practical applications, the cache controller may preferentially match cache lines outside the replacement set, or may preferentially match cache lines inside the replacement set, which is not limited here.
- 209. the cache controller updates the access frequency count;
- the cache controller may update the access frequency count of the cache line after the destination address is hit in the Cache, and the update manner may be increasing the value of the counter of the marked area in the cache line.
- in practical applications, the cache controller may also monitor the access frequency counts of the cache lines in the replacement set in real time, and may eliminate from the replacement set any cache line whose access frequency count exceeds a third threshold within a preset time. For example, the third threshold can be 1.
- in other words, after obtaining the replacement set, the cache controller may continue to monitor how often the cache lines in the replacement set are hit. If the access frequency count of a cache line in the replacement set exceeds the third threshold within the preset time, the cache controller may consider that the cache line has been hit by the destination addresses of subsequently received operation instructions and may be accessed again, so it need not be eliminated. In this way, the cache lines in the replacement set can be updated, and cache lines that are still being accessed are not eliminated by mistake.
- it should be noted that the flow of this step ends after the cache controller updates the access frequency count of the cache line.
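- the refresh of the replacement set on hits can be sketched as below; the third threshold value and the window bookkeeping are assumptions for illustration:

```python
# Sketch of the refresh rule: lines in the replacement set that are hit more than
# the third threshold within the monitoring window are taken back out of the set.
def refresh_replacement_set(replacement_set, hits_in_window, third_threshold=1):
    """Keep only lines whose hit count within the preset time stays at or below the threshold."""
    return [line for line in replacement_set
            if hits_in_window.get(line, 0) <= third_threshold]

if __name__ == "__main__":
    replacement_set = ["A1", "A2", "A3"]
    hits = {"A2": 2}                       # A2 was hit twice within the window
    print(refresh_replacement_set(replacement_set, hits))  # ['A1', 'A3']
```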
- 210. the cache controller stores the cache line obtained according to the destination address in the cache.
- in this embodiment, after the destination address misses in the cache, the cache controller may fetch the data at the destination address from the storage medium and write it into the cache line to be replaced that is selected from the replacement set. It should be noted that, in practical applications, the cache controller may preferentially select, in ascending order of access frequency count, the cache line in the replacement set with the smallest access frequency count as the cache line to be replaced.
- it should be noted that the flow of this step ends after the cache controller replaces the selected cache line in the replacement set with the data of the destination address.
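- step 210 can be sketched as follows; the data structures and names are illustrative assumptions:

```python
# Sketch of step 210: on a miss, the line with the smallest access frequency
# count in the replacement set is evicted and the newly fetched line takes its place.
def replace_on_miss(cache, replacement_set, freq_counts, dest_addr, new_data):
    """cache: addr -> data; replacement_set: list of addrs; freq_counts: addr -> count."""
    victim = min(replacement_set, key=lambda addr: freq_counts[addr])
    replacement_set.remove(victim)
    del cache[victim]                     # eliminate the victim cache line
    cache[dest_addr] = new_data           # store the line fetched for dest_addr
    return victim

if __name__ == "__main__":
    cache = {0xA1: "a1", 0xA2: "a2", 0xB1: "b1"}
    rset = [0xA1, 0xA2]
    freq = {0xA1: 1, 0xA2: 4}
    print(hex(replace_on_miss(cache, rset, freq, dest_addr=0xC1, new_data="c1")))  # 0xa1
```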
- in a specific scenario of the embodiment of the present invention, the Cache includes 10 cache lines, and the cache controller can obtain the access frequency count of each cache line, as shown in Table 1.
- the cache controller can determine a replacement set according to the access frequency count of each cache line in Table 1, and the process is as follows:
- the cache controller determines that the access frequency count lower limit value is 0, and the upper limit value is 30, wherein the cache controller determines that the cache line whose access frequency count is greater than the parameter upper limit 30 does not need to be eliminated;
- the cache controller can evenly divide the access frequency count range 0 to 30 into three access frequency segments (i.e., range segment #1 is 0 to 10, range segment #2 is 11 to 20, and range segment #3 is 21 to 30);
- the cache controller can obtain the number of cache lines of each access frequency segment by traversing each cache line, as shown in Table 2;
- when the cache controller determines that the elimination ratio R is 50%, it can calculate that the number M of cache lines to be replaced is 5 (M = total number of cache lines * elimination ratio R = 10 * 50% = 5). The cache controller can determine that cache lines A1, A2 and A3 in range segment #1 are all candidates to be replaced, and that range segment #2 is the critical access frequency segment, from which the number of cache lines to be eliminated is 2 (M - number of cache lines in range segment #1 = 5 - 3 = 2);
- referring to Table 1 and Table 2, the cache controller can accumulate the cache lines that need to be eliminated into the replacement set in ascending order of the access frequency count ranges, that is, it selects cache lines A1, A2, A3, A4 and A5 in turn as the replacement set.
- when the destination address misses in the Cache, the cache controller may preferentially select from the replacement set the cache line A1 with the lowest access frequency count as the cache line to be replaced, and replace the data read from the storage medium at the destination address into cache line A1.
- in the embodiment of the present invention, the cache controller acquires an operation instruction carrying a destination address, the destination address being an address in the memory to be accessed by the operation instruction. If the destination address does not hit any cache line in the cache and the cache does not include an idle cache line, the cache controller selects a cache line to be replaced from the replacement set, where the replacement set includes at least two cache lines; the cache controller deletes the cache line to be replaced from the cache and stores the cache line obtained from the memory according to the destination address in the cache. Thus, in the process of cache line replacement, the cache controller only needs to pick the cache line to be replaced from the replacement set, and since the replacement set is selected in advance, the efficiency of cache line replacement is effectively improved.
- different from the embodiment shown in FIG. 2, an optional solution in the embodiment of the present invention is that the cache controller does not divide the cache lines into access frequency segments, but determines the replacement set directly from the access frequency count of each cache line.
- another embodiment of the cache management method in the embodiment of the present invention includes:
- Steps 301 to 302 in this embodiment are the same as steps 201 to 202 in the embodiment shown in FIG. 2, and details are not described herein again.
- Step 303 in this embodiment is the same as step 205 in the embodiment shown in FIG. 2, and details are not described herein again.
- 304. the cache controller determines the replacement set;
- the cache controller may sequentially accumulate the cache lines into the replacement set according to the sequence of the access frequency count range from small to large, until the number of cache lines to be replaced in the replacement set is equal to M.
- the cache line of the accumulated replacement set is the cache line to be replaced.
- Steps 305 to 308 in this embodiment are the same as steps 207 to 210 in the embodiment shown in FIG. 2, and details are not described herein again.
- the cache management method in the embodiment of the present invention has been described above; the cache controller in the embodiment of the present invention is described below. FIG. 4 depicts a device embodiment corresponding to the method embodiment of FIG. 2. Referring to FIG. 4,
- An embodiment of the cache controller in the embodiment of the present invention includes:
- the obtaining module 401 is configured to obtain an operation instruction, where the operation instruction carries a destination address, and the destination address is an address to be accessed by the operation instruction;
- the first selection module 402 is configured to: when the destination address does not hit any cache line in the cache of the computer system and the cache does not include an idle cache line, select the cache line to be replaced from the replacement set, where the replacement set contains at least two cache lines;
- the elimination module 403 is configured to eliminate the cache line to be replaced from the cache
- the storage module 404 is configured to store the cache line acquired from the destination address in the cache.
- the first determining module 405 is configured to determine, according to an access frequency count of each cache line in the cache and a preset splitting strategy, multiple access frequency segments, where each access frequency segment corresponds to a different access frequency count range;
- the second determining module 406 is configured to determine, according to an access frequency count of each cache line in the cache, an access frequency segment to which each cache line belongs;
- the second selection module 407 is configured to select cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments according to the number M of cache lines to be replaced, to obtain the replacement set, where M is an integer not less than 2.
- the second selection module 407 in this embodiment is specifically configured to select, in ascending order of the access frequency count ranges corresponding to the multiple access frequency segments, the cache lines belonging to each access frequency segment in turn, until the number of selected cache lines equals M.
- the third determining module 408 is configured to determine, according to the elimination ratio R and the total number of cache lines in the cache, the number M of the cache lines to be replaced, where M is the product of the elimination ratio R and the total number of cache lines;
- the monitoring module 409 is configured to monitor the elimination frequency parameter, where the elimination frequency parameter includes at least one of the miss rate and the traversal frequency of the cache lines in the cache;
- the adjusting module 410 is configured to adjust the elimination ratio R to the elimination ratio R1 when the elimination frequency parameter exceeds the first threshold, where the elimination ratio R1 is greater than the elimination ratio R;
- the adjusting module 410 is further configured to adjust the elimination ratio R to the elimination ratio R2 when the elimination frequency parameter is less than the second threshold, where the elimination ratio R2 is smaller than the elimination ratio R, and the second threshold is smaller than the first threshold.
- the monitoring module 409 is specifically configured to monitor an access frequency count of a cache line belonging to the replacement set
- the eliminating module 403 is specifically configured to: retire from the replacement set a cache line whose access frequency count is greater than a third threshold within a preset time.
- in this embodiment, the obtaining module 401 obtains an operation instruction carrying a destination address, the destination address being an address to be accessed by the operation instruction. When the destination address does not hit any cache line in the cache of the computer system and the cache contains no idle cache line, the first selection module 402 selects the cache line to be replaced from the replacement set, where the replacement set contains at least two cache lines; the elimination module 403 eliminates the cache line to be replaced from the cache, and the storage module 404 stores the cache line obtained according to the destination address in the cache. Thus, in the process of cache line replacement, the first selection module 402 only needs to pick the cache line to be replaced from the replacement set, and since the replacement set is selected in advance, the efficiency of cache line replacement is effectively improved.
- the device embodiment corresponding to the method embodiment of FIG. 2 has been described above; the device embodiment corresponding to the method embodiment shown in FIG. 3 is described below. Referring to FIG. 5, another embodiment of the cache controller in the embodiment of the present invention includes:
- the obtaining module 501 is configured to obtain an operation instruction, where the operation instruction carries a destination address, and the destination address is an address to be accessed by the operation instruction;
- the first selection module 502 is configured to: when the destination address does not hit any cache line in the cache of the computer system and the cache does not include an idle cache line, select the cache line to be replaced from the replacement set, where the replacement set contains at least two cache lines;
- the elimination module 503 is configured to eliminate the cache line to be replaced from the cache
- the storage module 504 is configured to store the cache line acquired from the destination address in the cache.
- the second selection module 505 is configured to select a cache line to be replaced from the cache line in the cache according to the number M of cache lines to be replaced, to obtain a replacement set, where M is an integer not less than 2.
- the second selection module 505 in this embodiment is specifically configured to sequentially select a cache line according to the order in which the access frequency count ranges corresponding to the cache lines are from small to large, until the number of selected cache lines is equal to M.
- the determining module 506 is configured to determine, according to the elimination ratio R and the total number of cache lines in the cache, the number M of the cache lines to be replaced, where M is the product of the elimination ratio R and the total number of cache lines;
- the monitoring unit 507 is configured to monitor the elimination frequency parameter, where the elimination frequency parameter includes at least one of the miss rate and the traversal frequency of the cache lines in the cache;
- the adjusting module 508 is configured to adjust the elimination ratio R to the elimination ratio R1 when the elimination frequency parameter exceeds the first threshold, and the elimination ratio R1 is greater than the elimination ratio R;
- the adjustment module 508 is further configured to adjust the elimination ratio R to the elimination ratio R2 when the elimination frequency parameter is less than the second threshold, and the elimination ratio R2 is smaller than the elimination ratio R, wherein the second threshold is smaller than the first threshold.
- the monitoring unit 507 is specifically configured to monitor the access frequency counts of the cache lines belonging to the replacement set;
- the eliminating module 503 is specifically configured to eliminate from the replacement set the cache lines whose access frequency counts exceed a third threshold within a preset time.
- in this embodiment, the obtaining module 501 obtains an operation instruction carrying a destination address, the destination address being an address to be accessed by the operation instruction. When the destination address does not hit any cache line in the cache of the computer system and the cache contains no idle cache line, the first selection module 502 selects the cache line to be replaced from the replacement set, where the replacement set contains at least two cache lines; the elimination module 503 eliminates the cache line to be replaced from the cache, and the storage module 504 stores the cache line obtained according to the destination address in the cache. Thus, in the process of cache line replacement, the first selection module 502 only needs to pick the cache line to be replaced from the replacement set, and since the replacement set is selected in advance, the efficiency of cache line replacement is effectively improved.
- referring to FIG. 6, an embodiment of the computer system in the embodiment of the present invention includes: a processor 601, a cache 602, a cache controller 603, and a memory 604.
- the cache 602 is used to cache part of the data in the memory 604.
- the processor 601 is configured to send an operation instruction
- the cache controller 603 is used to: acquire the operation instruction, where the operation instruction carries a destination address, the destination address being an address in the memory 604 to be accessed by the operation instruction;
- when the destination address does not hit any cache line in the cache 602 and the cache 602 contains no idle cache line, select the cache line to be replaced from the replacement set, where the replacement set contains at least two cache lines;
- eliminate the cache line to be replaced from the cache 602;
- store the cache line obtained according to the destination address in the cache 602.
- the cache controller 603 is further configured to perform the following steps:
- the cache lines to be replaced are selected from the cache lines corresponding to the multiple access frequency segments according to the number M of cache lines to be replaced, to obtain the replacement set, where M is an integer not less than 2.
- the cache controller 603 is further configured to perform the following steps:
- the cache lines belonging to each access frequency segment are sequentially selected according to the order in which the access frequency count ranges corresponding to the plurality of access frequency segments are from small to large, until the number of selected cache lines is equal to M.
- the cache controller 603 is further configured to perform the following steps:
- monitoring the elimination frequency parameter, where the elimination frequency parameter includes at least one of the miss rate and the traversal frequency of the cache lines in the cache;
- when the elimination frequency parameter exceeds the first threshold, adjusting the elimination ratio R to the elimination ratio R1, where the elimination ratio R1 is greater than the elimination ratio R;
- when the elimination frequency parameter is less than the second threshold, adjusting the elimination ratio R to the elimination ratio R2, where the elimination ratio R2 is smaller than the elimination ratio R, and the second threshold is smaller than the first threshold.
- the cache controller 603 is further configured to perform the following steps:
- the cache line whose access frequency count is greater than the third threshold within a preset time is eliminated from the replacement set.
- the embodiment of the present invention further provides a computer program product for implementing the access request processing method, comprising a computer readable storage medium storing program code, where the program code comprises instructions for executing the method procedure described in any one of the foregoing method embodiments.
- the foregoing storage medium includes a non-transitory machine readable medium that can store program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a random access memory (RAM), a solid state disk (SSD), or another non-volatile memory.
Abstract
A cache management method, a cache controller, and a computer system. In the method, the cache controller acquires an operation instruction; when the destination address carried in the operation instruction does not hit any cache line in the cache of the computer system and the cache contains no idle cache line, the cache controller selects a cache line to be replaced from a replacement set, where the replacement set contains at least two cache lines. The cache controller eliminates the cache line to be replaced from the cache and stores the cache line obtained according to the destination address in the cache. This cache management method can reduce the system overhead of cache line replacement and improve the efficiency of cache line replacement.
Description
本发明涉及通信领域,尤其涉及一种缓存管理方法、缓存控制器以及计算机系统。
缓存(Cache)技术的主要思想是将常用的数据由存储介质(如磁盘)放置在Cache中,由于Cache的读写速度远高于磁盘,所以能够提高整个系统的访问效率。
一般而言,Cache的性能相对较好,但是容量有限,仅能缓存部分被访问的数据。为了让Cache发挥出更好的效率,需要尽量让即将发生的读写操作涉及到的文件在Cache里命中。如何选择缓存在Cache中的数据(Cache插入),如何选择替换出Cache的数据(Cache替换),是Cache管理的两个主要方面。
现有技术提供了一种基于统计计数的Cache管理算法,为每个缓存行cache line增加一个计数器,用于统计一定时间间隔内访问Cache line的访问请求数,也就是用于统计cache line的访问次数。基于访问计数器,现有技术实现了两种cache line替换算法。第一种算法的时间间隔定义为相邻两次访问该cache line;当访问计数器超过一定的阈值Δthd后,该cache line作为被替换目标。第二种算法的时间间隔定义为自cache line被加入到Cache中开始到当前时间;同样,当访问计数器超过一定阈值LT后,该cache line作为被替换目标。
现有技术中,如果当前需要读写的数据未在Cache里命中cache line,则控制器会从存储介质中提取出该数据,作为一个新的cache line准备放入Cache中,如果此时Cache中存储的cache line的数目已经达到了上限,则需要从原有的cache line中删除一个。
控制器会遍历原有的所有cache line的计数器,以获知每一个cache line的访问请求数或访问时间间隔,从而选取出一个访问请求数最少,或者是访问时间间隔最长的cache line,并将该cache line删除,同时将新的cache line放入Cache中。
因此,现有技术中,在进行cache line替换的过程中,控制器需要遍历所有的原有cache line,并依据一定的计算规则选择一个待替换的cache line,从而会造成较大的系统开销,并影响了cache line替换的效率。
发明内容
本发明实施例提供了一种缓存管理方法、缓存控制器以及计算机系统,能够有效减少cache line替换时的系统开销,提高cache line替换的效率。
本发明实施例第一方面提供了一种应用于计算机系统中的缓存管理方法。在该方法中,缓存控制器获取CPU发送的操作指令,其中,该操作指令中携带有目的地址,该目的地址为CPU待访问的内存中的地址。如果此时缓存控制器未在计算机系统的缓存中的任意一个缓存行cache line中找到与该目的地址匹配的cache line,即该目的地址未命中,且此时在缓存中没有空闲的cache line存在,则缓存控制器从一个预先获得的替换集合中选择待替换的cache line进行替换,这里需要说明的是,该替换集合中包含有至少两个cache line。缓存控制器将选择出的待替换的cache line从缓存中淘汰。之后,缓存控制器将从目的地址获取的cache line存储入缓存中以完成cache line的替换。
在本发明的实施例中,缓存控制器在目的地址未命中时,可以直接从替换集合中挑选出待替换的cache line进行替换,而与现有技术的每次未命中都需要遍历一遍各cache line找出替换cache line不同,有效减少了cache line替换时的系统开销,提高了效率。
在一个可能的设计中,缓存控制器在从替换集合中选择待替换的cache line之前,缓存控制器可以预先获取缓存中各cache line的访问频度计数,并依据预设的划分策略cache line的访问频度计数范围划分为多个访问频度分段。在依据各cache line的访问频度计数分别确定各cache line所属的访问频度分段之后,缓存控制器可以按照待替换的cache line的数量M从多个访问频度分段对应的cache line中选择待替换的cache line,以得到替换集合。其中,M为不小于2的整数。
可选的,本发明实施例中,缓存控制器也可以不划分多个访问频度分段,而直接从缓存中的各cache line中挑选出M个待替换的cache line得到替换集合。
本发明实施例中,缓存控制器将各cache line划分成多个分段,可以方便缓存控制器以段为单位选择所要替换的cache line。避免在选择待替换的cache line时,需要大批量的一个个比较cache line的访问频度计数,减少工作开销。
在一个可能的设计中,缓存控制器按照待替换的cache line的数量M从多个访问频度分段对应的cache line中选择待替换的cache line包括:缓存控制器按照多个访问频度分段对应的访问频度计数范围从小到大的顺序,依次选择属于各访问频度分段中的cache line,直至选择的cache line的数量等于M。
本发明实施例中,cache line的访问频度计数越低代表该cache line的被访问次数越少,最近被访问的时间越晚,可以优先被替换。缓存控制器按照访问频度计数范围从小到大的顺序,依次选择属于各访问频度分段中的cache line,可以选出最优的替换集合。
在一个可能的设计中,待替换的cache line的数目M是根据淘汰比例R与缓存中Cache line的总数确定的,其中,M为淘汰比例R与Cache line总数的乘积。
在又一个可能的设计中,缓存控制器还可以周期性监控淘汰频率参数,该淘汰频率参数包括缓存中各cache line的缺失率和被遍历频率中的至少一个。当淘汰频率参数超过第一阈值时,此时缓存控制器将淘汰比例R调整为淘汰比例R1,该淘汰比例R1会大于淘汰比例R。当淘汰频率参数小于第二阈值时,此时缓存控制器将淘汰比例R调整为淘汰比例R2,该淘汰比例R2会小于淘汰比例R,其中,此处的第二阈值会小于第一阈值。
本发明实施例中,缓存控制器依据cache line的缺失率,即高未命中率,以及cache line的被遍历率动态调整淘汰比例R,以达到在cache line在高缺失率以及频繁被遍历的情况下,通过提高淘汰比例R来增加替换集合中待替换的cache line的数量,同时,还可以在cache line低缺失率以及较少被遍历的情况下,降低淘汰比例R,以减少替换集合中的待替换的cache line的数量,避免了频繁挑选cache line生成替换集合的开销。
在一个可能的设计中,缓存控制器从替换集合中选择待替换的cache line包括:缓存控制器按照替换集合中cache line的访问频度计数从小到大的顺序选择出待替换的cache line,以达到最优的cache line的利用率。
在又一个可能的设计中,在获得替换集合之后,缓存控制器还可以监控属
于替换集合的cache line的访问频度计数,并从替换集合中淘汰在预设时间内访问频度计数大于第三阈值的cache line。换一种表达方式,在获得替换集合之后,缓存控制器可以继续监控替换集合中的cache line被命中的次数,若在预设时间内替换集合中的某个cache line的访问频度计数大于第三阈值时,缓存控制器可以认为该cache line被后续接收的操作指令中的目的地址命中,可以不必淘汰。通过这种方式,可以更新替换集合中的cache line,避免部分被访问的cache line被误淘汰。
在一个可能的设计中,缓存控制器可以根据各cache line的访问次数获取的各cache line的访问频度计数。缓存控制器也可以根据各cache line的访问次数以及访问时间获得各cache line的访问频度计数。
本发明第二方面提供了一种缓存控制器,该缓存控制器包括用于执行上述第一方面以及第一方面的各种可能的设计中所述的方法的模块。
本发明第三方面提供了一种计算机系统,该计算机系统包括处理器、缓存以及缓存控制器。该处理器用于发送操作指令。缓存控制器用于用于执行上述第一方面以及第一方面的各种可能的设计中所述的方法
本发明实施例中,缓存控制器获取操作指令,其中,操作指令中携带有目的地址,目的地址为操作指令待访问的内存中的地址,当目的地址未命中计算机系统的缓存中的任意一个缓存行cache line,且缓存中未包括空闲的cache line时,则缓存控制器从替换集合中选择待替换的cache line,其中,替换集合中包含有至少两个cache line,缓存控制器从缓存中淘汰待替换的cache line,缓存控制器将从目的地址获取的cache line存储于缓存中,因而在进行cache line替换的过程中,缓存控制器只需要从替换集合中挑选出待替换的cache line即可,而替换集合是预先选出的,有效提高了cache line替换的效率。
第四方面,本申请提供了一种计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令用于执行上述第一方面以及第一方面的各种可能的设计中所述的至少一种方法。
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍。显而易见地,下面描述中的附图仅仅是本发明
的一些实施例。
图1为本发明实施例提供的一种计算机系统框架图;
图2为本发明实施例提供的一种缓存管理方法示意图;
图3为本发明实施例提供的一种缓存管理方法另示意图;
图4为本发明实施例提供的一种缓存控制器示意图;
图5为本发明实施例提供的一种缓存控制器示意图;
图6为本发明实施例提供的又一种计算机系统架构示意图。
本发明实施例提供了一种缓存管理方法、缓存控制器以及计算机系统,能够有效减少cache line替换时的系统开销,提高cache line替换的效率。
本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。
请参阅图1,图1为本发明实施例提供的一种计算机系统架构图。如图1所示,Cache是介于CPU和主存之间的小容量存储器,存取速度比主存快,接近CPU。它能高速地向CPU提供指令和数据,提高程序的执行速度。Cache技术是为了解决CPU和主存之间速度不匹配而采用的一项重要技术。
Cache是主存的缓冲存储器,由高速的静态随机存取存储器SRAM组成,Cache可以内置于中央处理器CPU中,也可以内置于内存Memory上,Cache还可以以一个单独存在的实体外置于CPU与内存之间。Cache的所有控制逻辑全部由内部缓存控制器实现。随着半导体器件集成度的不断提高,当前已出现了两级以上的多级Cache系统,如L1Cache(一级缓存)、L2Cache(二级缓存)以及L3Cache(三级级缓存)。
当前Cache的容量很小,它保存的内容只是主存内容的一个子集,且CPU与Cache之间的数据交换是以字(word)为单位的,而Cache与主存之间的数据交换则是以缓存行(cache line)为单位的。一个cache line由若干个定长的word组成。当CPU读取主存中的一个字时,该字的主存地址被发给Cache和主存,此时,Cache控制逻辑依据地址判断该字当前是否存在于Cache中:若
在,该字立即被从Cache传送给CPU;若不在,则用主存读周期把该字从主存读出送到CPU,同时把含有这个字的整个cache line从主存读出送到Cache中,并采用一定的替换策略将Cache中的某一cache line替换掉,替换算法由Cache管理逻辑电路来实现。
现有技术中,在替换cache line的时候,控制器会遍历原有的所有cache line,以获知每一个cache line的访问请求数或访问时间间隔,从而选取出一个访问请求数最少,或者是访问时间间隔最长的cache line,并将该cache line删除,同时将新的cache line放入Cache中。现有技术中替换cache line时的系统开销较大,且效率不高。
本发明实施例提供了一种缓存管理方法,在本发明实施例提供的缓存管理方法中,在每次替换cache line的过程中,不需要遍历原有的所有cache line,也不需要计算选择待替换的cache line,从而能够有效的节省系统开销,提高cache line替换的效率。为便于理解,下面对本发明实施例中的具体流程对图1所示的计算机系统的缓存管理方法进行详细描述。可以理解的是,图2所述的缓存管理方法可以应用于图1中所示的cache中。请参阅图2,本发明实施例中缓存管理方法一个实施例包括:
201、缓存控制器为各cache line分配标记区域;
本实施例中,缓存控制器可以为每个cache line分配一个数据区域和标记区域,该数据区域可以用于存放真正的文件数据,该标记区域可以存放上述数据区域所对应的文件号、偏移、长度信息,或者数据地址信息,具体此处不做限定。
需要说明的是,上述数据区域的每一个区域的存储空间大小都可以动态变化或者可以静态不动,在实际应用中,为了实现数据区域的存储空间大小动态变化,缓存控制器可以在标记区域记录该数据区域的有效字符长度,具体此处不做限定。
本实施例中,缓存控制器可以为每个标记区域增加访问次数和访问时间计数器,该访问次数计数器可以用于统计cache line被访问的次数,该访问时间计数器可以用于记录cache line最近被访问的时间,该时间可以用绝对时间值来标识,也可以用时间周期数来标识,具体此处不做限定。
需要说明的是,缓存控制器可以为每个cache line统计访问次数,还可以
同时为每个cache line统计访问次数和访问时间,具体此处不做限定。
本发明实施例中,Cache可以内置于CPU,也可以内置于内存,还可以为单独存在的实体,该实体可以包括缓存控制器和cache line,该cache line的组成可以包括静态随机存取存储器SRAM,具体此处不做限定。
202、缓存控制器获取各cache line的访问频度计数;
本实施例中,缓存控制器可以通过检测各cache line的标记区域得到各cache line的访问频度计数,该访问频度计数可以包含访问次数或访问时间中的至少一种,具体此处不做限定。例如,在一种情形下,各cache line的访问频度计数可以根据各cache line的访问次数获得。在这种情况下,可以将cache line的访问次数作为该cache line的访问频度计数。在又一种情形下,各cache line的访问频度计数还可以根据各cache line的访问次数以及访问时间获得。例如,在实际应用中,缓存控制器可以通过访问时间来调整访问频度计数,其一种实现方式可以是:缓存控制器将访问时间对应到时间片,其中一个时间片包含一小段时间,计算当前时间片与cache line最近访问时间所在的时间片的差值,根据时间片的差值来调整访问频度计数(例如:访问频度计数=当前访问次数-时间片差值*Delta,该Delta值可以根据经验值静态设定)。
需要说明的是,缓存控制器可以在各cache line被访问之后更新该cache line的访问次数或访问时间,在实际应用中,更新访问时间的方式可以是记录当前访问时间,还可以是将上述访问时间计数器归零并重新开始计数,具体此处不做限定。
203、缓存控制器确定访问频度分段;
本实施例中,缓存控制器可以根据缓存中各cache line的访问频度计数以及预设的划分策略确定多个访问频度分段,每个访问频度分段可以对应不同的访问频度计数范围。
本实施例中,该预设的划分策略可以是缓存控制器先基于各cache line的访问频度计数确定参数上限值和参数下限值,该参数下限值可以为0,该参数上限值可以为access_times_max,需要说明的是,在实际应用中,该access_times_max可以为用户根据经验预先设置的值,还可以为缓存控制器通过统计各cache line的访问频度计数获得,具体此处不做限定。
需要说明的是,在本实施例中,若cache line的访问频度计数大于该
access_times_max,则缓存控制器可以判定该cache line不需要被替换。
本实施例中预设的划分策略,缓存控制器可以在确定参数上限和参数下限后将参数下限值0到参数上限值access_times_max的范围划分为N个访问频度分段,该数值N可以为用户预先输入的数值,可以理解的是,在实际应用中,每个访问频度分段的大小可以为定值access_times_max/N,还可以为随机值,该随机值可以为缓存控制器随机生成的数值,还可以为用户预先输入的数值,具体此处不做限定。
需要说明的是,在实际应用中,每个访问频度分段的大小总和等于access_times_max。
本实施例中,缓存控制器还可以建立一个数据结构,该数据结构用来记录每个访问频度分段内cache line的访问频度计数属于该分段的cache line的数量,还可以为每个访问频度分段对应建立一个数据结构,具体此处不做限定。
204、缓存控制器确定各cache line所属的访问频度分段;
本实施例中,缓存控制器可以通过遍历各cache line确定各cache line的访问频度计数所属的目标访问频度计数范围,确定各cache line所属目标访问频度计数范围对应的目标访问频度分段,需要说明的是,缓存控制器可以通过上述数据结构记录各访问频度分段内所包含的cache line数量来统计各访问频度分段的cache line数量。
需要说明的是,在实际应用中,缓存控制器每判断一个cache line属于一个访问频度分段,可以对应调整该访问频度分段对应的数据结构,可以理解的是,在实际应用中,缓存控制器每判断一个cache line属于一个访问频度分段,该访问频度分段对应的数据结构可以增加数值1。
205、缓存控制器确定待替换的cache line的数量M;
本实施例中,缓存控制器可以根据淘汰比例R预先选择M个可以待替换的cache line,该待替换的cache line可以为访问频度计数低的cache line,该待替换的cache line的数目M可以是淘汰比例R与cache line总数的乘积,且M为不小于2的整数,需要说明的是,在实际应用中,该淘汰比例R可以为定值,也可以为缓存控制器对其动态调整,具体此处不做限定。
本实施例中,缓存控制器可以通过实时监控淘汰频率参数,判断该淘汰频率参数是否超过了第一阈值,若是,缓存控制器可以将淘汰比例R调整为淘
汰比例R1,若否,则缓存控制器可以进一步判断该淘汰频率参数是否小于第二阈值,若小于,则缓存控制器可以将淘汰比例R调整为淘汰比例R2,需要说明的是,若不小于,则缓存控制器可以继续监控淘汰频率参数,该第一阈值和第二阈值可以为用户预先设置的值,该第一阈值小于第二阈值。
需要说明的是,在实际应用中,缓存控制器可以将淘汰比例R增加A得到淘汰比例R1,可以将淘汰比例R减少A得到淘汰比例R2,该数值A可以为用户预先设置的值。
本实施例中,该淘汰频率参数可以包括缓存中各cache line的缺失率和被遍历频率中的至少一个,且缓存控制器可以在每次调整完淘汰比例R后返回继续监控淘汰频率参数。
206、缓存控制器确定替换集合;
本实施例中,缓存控制器可以按照访问频度计数范围对应的数值从小到大的顺序,依次将各访问频度分段中的cache line累计入替换集合,直至该替换集合中cache line的数目等于M,可以理解的是,在实际应用中,缓存控制器累计各访问频度分段中的cache line可以包含以下步骤:
1)缓存控制器可以将访问频度分段组中访问频度计数范围对应的数值最低的访问频度分段确定为累计访问频度分段,该访问频度分段组包含各访问频度分段;
2)缓存控制器可以从访问频度分段组中去除累计访问频度分段;
3)缓存控制器可以将该累计访问频度分段中的cache line累计入替换集合;
4)缓存控制器可以判断替换集合中的候选cache line的数目是否等于M,若等于M,则可以结束流程,若小于M,则可以判断下一个将要累计入替换集合的访问频度分段是否为临界访问频度分段,若不是,则可以重复执行步骤1)至4),若是,则可以从该临界访问频度分段中随机选取X个cache line累计入替换集合,该X可以为M与已累计入替换集合的cache line的数目的差值,需要说明的是,该临界访问频度分段为所包含的cache line累计入替换集合之前,替换集合中的候选cache line小于M,且所包含的cache line累计入替换集合之后,替换集合中的候选cache line大于M的访问频度分段。
需要说明的是,若当前替换集合中累计值不超过M,则缓存控制器可以
继续遍历各访问频度分段,直至累计值大于或等于M。
需要说明的是,在实际应用中,缓存控制器可以先将各访问频度分段累计入被淘汰的范围段集合,再可以通过遍历各cache line,选出属于被淘汰的访问频度分段的cache line,缓存控制器可以通过将该选出的cache line作为替换集合cache line来确定替换集合。
207、缓存控制器获取操作指令;
缓存控制器可以接收CPU发送的操作指令,该操作指令中可以携带有目的地址,该目的地址可以为操作指令待访问的内存中的地址。
208、缓存控制器判断目的地址是否在Cache中命中,若是,则执行步骤209,若否,则执行步骤210;
当CPU读取主存中的一个字时,该字的主存地址可以被发给Cache和主存,此时,缓存控制器可以依据该地址判断该字当前是否存在于Cache中,需要说明的是,在实际应用中,缓存控制器可以优先匹配除替换集合以外cache line,也可以优先匹配替换集合中的cache line,具体此处不做限定。
209、缓存控制器更新访问频度计数;
本实施例中,缓存控制器可以在目的地址在Cache中命中之后更新该cache line的访问频度计数,该更新方式可以为增加该cache line中标记区域的计数器的数值。
在实际应用中,缓存控制器还可以实时监控替换集合中cache line的访问频度计数,并且可以从替换集合中淘汰在预设时间内访问频度计数大于第三阈值的cache line。例如,该第三阈值可以为1。换一种表达方式,在获得替换集合后,缓存控制器可以继续监控替换集合中的cache line被命中的次数,若在预设时间内替换集合中的某个cache line的访问频度计数大于第三阈值时,缓存控制器可以认为该cache line被后续接收的操作指令中的目的地址命中,可能还会被访问,从而可以不必被淘汰。通过这种方式,可以更新替换集合中的cache line,避免部分被访问的cache line被误淘汰。
需要说明的是,缓存控制器在更新完cache line的访问频度计数之后完成步骤流程。
210、缓存控制器将从目的地址获取的cache line存储于缓存中。
本实施例中,缓存控制器可以在目的地址未在cache中命中之后从存储介
质中获取该目的地址,并可以将该目的地址替换写入从替换集合中选出的待替换cache line,需要说明的是,在实际应用中,缓存控制器可以通过按照访问频度计数对应的数值从小到大的顺序,优先选择替换集合中访问频度计数小的cache line作为待替换的cache line。
需要说明的是,缓存控制器在将目的地址替换入替换集合中待替换的cache line之后完成步骤流程。
In a specific scenario of this embodiment of the present invention, the cache contains 10 cache lines, and the cache controller may obtain the access frequency count of each cache line, as shown in Table 1.
Table 1
The cache controller may determine a replacement set according to the access frequency counts of the cache lines in Table 1, as follows:
The cache controller determines that the lower limit of the access frequency count is 0 and the upper limit is 30, and determines that cache lines whose access frequency counts are greater than the parameter upper limit 30 do not need to be evicted;
The cache controller may evenly divide the access frequency count range 0 to 30 into 3 access frequency segments (that is, range segment #1 covers 0 to 10, range segment #2 covers 11 to 20, and range segment #3 covers 21 to 30);
The cache controller may traverse the cache lines to obtain the number of cache lines in each access frequency segment, as shown in Table 2.
Table 2
When the cache controller determines that the eviction ratio R is 50%, the cache controller may calculate that the quantity M of cache lines to be replaced is 5 (M = total number of cache lines × eviction ratio R = 10 × 50% = 5). The cache controller may determine that cache lines A1, A2 and A3 in range segment #1 are all candidates to be replaced, and may determine that range segment #2 is the boundary access frequency segment, from which 2 cache lines need to be evicted (M − number of cache lines in range segment #1 = 5 − 3 = 2).
With reference to Table 1 and Table 2, the cache controller may accumulate the cache lines to be evicted into the replacement set in ascending order of the values of the access frequency count ranges, that is, select cache lines A1, A2, A3, A4 and A5 in sequence as the replacement set.
When a destination address misses the cache, the cache controller may preferentially pick, from the replacement set, the cache line to be replaced that has the lowest access frequency count, namely cache line A1, and write the data of the destination address read from the storage medium into cache line A1.
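Because Table 1 and Table 2 are not reproduced here, the following usage example re-creates the scenario with invented counts that are consistent with the description above, assuming the helper functions from the earlier sketches are in scope. The equal-split boundaries used by `build_segments` ([0, 9], [10, 19], [20, 30]) differ slightly from the published 0–10 / 11–20 / 21–30 split but classify these particular counts identically.

```python
# Hypothetical access-frequency counts for the 10 cache lines A1..A10.
counts = {"A1": 2, "A2": 5, "A3": 8,                  # range segment #1
          "A4": 12, "A5": 15, "A6": 18,               # range segment #2 (boundary segment)
          "A7": 22, "A8": 25, "A9": 27, "A10": 30}    # range segment #3

segments = build_segments(access_times_max=30, n=3)
membership = classify_lines(counts, segments)
m = target_replacement_count(r=0.5, total_lines=len(counts))    # -> 5
replacement = build_replacement_set(membership, segments, m)
# Segment #1 contributes A1, A2, A3; the boundary segment #2 contributes X = 2
# of its members (A4 and A5 in the published example; the sketch draws them at random).
```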
In this embodiment of the present invention, the cache controller obtains an operation instruction, where the operation instruction carries a destination address and the destination address is an address, in the memory, that the operation instruction is to access. If the destination address does not hit any cache line in the cache and the cache contains no idle cache line, the cache controller selects a cache line to be replaced from a replacement set, where the replacement set contains at least two cache lines; the cache controller removes the cache line to be replaced from the cache, and stores, in the cache, the cache line obtained from the memory according to the destination address. During cache line replacement, the cache controller therefore only needs to pick the cache line to be replaced from the replacement set, and because the replacement set is selected in advance, the efficiency of cache line replacement is effectively improved.
Different from the embodiment shown in FIG. 2, an optional solution in this embodiment of the present invention is that the cache controller may determine the replacement set directly from the access frequency counts of the cache lines, without dividing the cache lines into access frequency segments.
Referring to FIG. 3, another embodiment of the cache management method in the embodiments of the present invention includes the following steps.
Steps 301 and 302 in this embodiment are the same as steps 201 and 202 in the embodiment shown in FIG. 2, and details are not described herein again.
Step 303 in this embodiment is the same as step 205 in the embodiment shown in FIG. 2, and details are not described herein again.
304. The cache controller determines a replacement set.
In this embodiment, the cache controller may accumulate the cache lines into the replacement set one by one, in ascending order of their access frequency counts, until the number of cache lines to be replaced in the replacement set equals M; every cache line accumulated into the replacement set is a cache line to be replaced.
Steps 305 to 308 in this embodiment are the same as steps 207 to 210 in the embodiment shown in FIG. 2, and details are not described herein again.
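The simplified variant of step 304 might be sketched as a direct sort, with no segmentation:

```python
def build_replacement_set_direct(counts: dict[str, int], m: int) -> set[str]:
    """Take the M cache lines with the lowest access-frequency counts."""
    ordered = sorted(counts, key=lambda l: counts[l])   # ascending counts
    return set(ordered[:m])
```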
The cache management method in the embodiments of the present invention has been described above; the cache controller in the embodiments of the present invention is described below. FIG. 4 shows the apparatus embodiment corresponding to the method embodiment of FIG. 2. Referring to FIG. 4, an embodiment of the cache controller in the embodiments of the present invention includes:
an obtaining module 401, configured to obtain an operation instruction, where the operation instruction carries a destination address, and the destination address is an address that the operation instruction is to access;
a first selection module 402, configured to: when the destination address does not hit any cache line in a cache of the computer system and the cache contains no idle cache line, select a cache line to be replaced from a replacement set, where the replacement set contains at least two cache lines;
an eviction module 403, configured to evict the cache line to be replaced from the cache; and
a storage module 404, configured to store, in the cache, the cache line obtained from the destination address.
The cache controller in this embodiment further includes:
a first determining module 405, configured to determine multiple access frequency segments according to the access frequency counts of the cache lines in the cache and a preset division policy, where each access frequency segment corresponds to a different access frequency count range;
a second determining module 406, configured to determine, according to the access frequency count of each cache line in the cache, the access frequency segment to which each cache line belongs; and
a second selection module 407, configured to select, according to the quantity M of cache lines to be replaced, cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments, to obtain the replacement set, where M is an integer not less than 2.
The second selection module 407 in this embodiment is specifically configured to select the cache lines belonging to the access frequency segments segment by segment, in ascending order of the access frequency count ranges corresponding to the multiple access frequency segments, until the number of selected cache lines equals M.
The cache controller in this embodiment further includes:
a third determining module 408, configured to determine the quantity M of cache lines to be replaced according to an eviction ratio R and the total number of cache lines in the cache, where M is the product of the eviction ratio R and the total number of cache lines.
The cache controller in this embodiment may further include:
a monitoring module 409, configured to monitor an eviction frequency parameter, where the eviction frequency parameter includes at least one of a miss rate of the cache lines in the cache and a frequency at which the cache lines are traversed; and
an adjustment module 410, configured to: when the eviction frequency parameter exceeds a first threshold, adjust the eviction ratio R to an eviction ratio R1, where the eviction ratio R1 is greater than the eviction ratio R.
The adjustment module 410 is further configured to: when the eviction frequency parameter is smaller than a second threshold, adjust the eviction ratio R to an eviction ratio R2, where the eviction ratio R2 is smaller than the eviction ratio R, and the second threshold is smaller than the first threshold.
Further, in the cache controller in this embodiment:
the monitoring module 409 is specifically configured to monitor the access frequency counts of the cache lines belonging to the replacement set; and
the eviction module 403 is specifically configured to remove, from the replacement set, a cache line whose access frequency count exceeds a third threshold within a preset time.
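For orientation only, a structural skeleton of the controller of FIG. 4 might look like the following; the class and method names merely mirror the module names above and are otherwise invented, and the method bodies are deliberately left empty.

```python
class CacheController:
    # Methods named after the FIG. 4 modules; bodies intentionally omitted.
    def obtain_instruction(self): ...              # obtaining module 401
    def select_from_replacement_set(self): ...     # first selection module 402
    def evict(self, line_id): ...                  # eviction module 403
    def store(self, dest_addr, data): ...          # storage module 404
    def build_segments(self): ...                  # first determining module 405
    def classify_lines(self): ...                  # second determining module 406
    def build_replacement_set(self, m): ...        # second selection module 407
    def compute_m(self, r): ...                    # third determining module 408
    def monitor_eviction_frequency(self): ...      # monitoring module 409
    def adjust_ratio(self): ...                    # adjustment module 410
```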
In this embodiment, the obtaining module 401 obtains an operation instruction, where the operation instruction carries a destination address and the destination address is an address that the operation instruction is to access. When the destination address does not hit any cache line in the cache of the computer system and the cache contains no idle cache line, the first selection module 402 selects a cache line to be replaced from the replacement set, where the replacement set contains at least two cache lines; the eviction module 403 evicts the cache line to be replaced from the cache, and the storage module 404 stores, in the cache, the cache line obtained from the destination address. During cache line replacement, the first selection module 402 therefore only needs to pick the cache line to be replaced from the replacement set, and because the replacement set is selected in advance, the efficiency of cache line replacement is effectively improved.
The apparatus embodiment corresponding to the method embodiment of FIG. 2 has been described above; the apparatus embodiment corresponding to the method embodiment shown in FIG. 3 is described below. Referring to FIG. 5, another embodiment of the cache controller in the embodiments of the present invention includes:
an obtaining module 501, configured to obtain an operation instruction, where the operation instruction carries a destination address, and the destination address is an address that the operation instruction is to access;
a first selection module 502, configured to: when the destination address does not hit any cache line in a cache of the computer system and the cache contains no idle cache line, select a cache line to be replaced from a replacement set, where the replacement set contains at least two cache lines;
an eviction module 503, configured to evict the cache line to be replaced from the cache; and
a storage module 504, configured to store, in the cache, the cache line obtained from the destination address.
The cache controller in this embodiment further includes:
a second selection module 505, configured to select, according to the quantity M of cache lines to be replaced, cache lines to be replaced from the cache lines in the cache, to obtain the replacement set, where M is an integer not less than 2.
The second selection module 505 in this embodiment is specifically configured to select cache lines one by one, in ascending order of their access frequency counts, until the number of selected cache lines equals M.
The cache controller in this embodiment further includes:
a determining module 506, configured to determine the quantity M of cache lines to be replaced according to an eviction ratio R and the total number of cache lines in the cache, where M is the product of the eviction ratio R and the total number of cache lines.
The cache controller in this embodiment may further include:
a monitoring module 507, configured to monitor an eviction frequency parameter, where the eviction frequency parameter includes at least one of a miss rate of the cache lines in the cache and a frequency at which the cache lines are traversed; and
an adjustment module 508, configured to: when the eviction frequency parameter exceeds a first threshold, adjust the eviction ratio R to an eviction ratio R1, where the eviction ratio R1 is greater than the eviction ratio R.
The adjustment module 508 is further configured to: when the eviction frequency parameter is smaller than a second threshold, adjust the eviction ratio R to an eviction ratio R2, where the eviction ratio R2 is smaller than the eviction ratio R, and the second threshold is smaller than the first threshold.
Further, in the cache controller in this embodiment:
the monitoring module 507 is specifically configured to monitor the access frequency counts of the cache lines belonging to the replacement set; and
the eviction module 503 is specifically configured to remove, from the replacement set, a cache line whose access frequency count exceeds a third threshold within a preset time.
In this embodiment, the obtaining module 501 obtains an operation instruction, where the operation instruction carries a destination address and the destination address is an address that the operation instruction is to access. When the destination address does not hit any cache line in the cache of the computer system and the cache contains no idle cache line, the first selection module 502 selects a cache line to be replaced from the replacement set, where the replacement set contains at least two cache lines; the eviction module 503 evicts the cache line to be replaced from the cache, and the storage module 504 stores, in the cache, the cache line obtained from the destination address. During cache line replacement, the first selection module 502 therefore only needs to pick the cache line to be replaced from the replacement set, and because the replacement set is selected in advance, the efficiency of cache line replacement is effectively improved.
The cache controller in the embodiments of the present invention has been described above from the perspective of modular functional entities; the computer system in the embodiments of the present invention is described below. Referring to FIG. 6, an embodiment of the computer system in the embodiments of the present invention includes a processor 601, a cache 602, a cache controller 603, and a memory 604, where the cache 602 is configured to cache part of the data in the memory 604.
In some embodiments of the present invention, the processor 601 is configured to send an operation instruction; and
the cache controller 603 is configured to:
obtain the operation instruction, where the operation instruction carries a destination address, and the destination address is an address that the operation instruction is to access;
when the destination address does not hit any cache line in the cache 602 of the computer system and the cache contains no idle cache line, select a cache line to be replaced from a replacement set, where the replacement set contains at least two cache lines;
evict the cache line to be replaced from the cache; and
store, in the cache 602, the cache line obtained from the destination address.
In some embodiments of the present invention, the cache controller 603 is further configured to perform the following steps:
determining multiple access frequency segments according to the access frequency counts of the cache lines in the cache and a preset division policy, where each access frequency segment corresponds to a different access frequency count range;
determining, according to the access frequency count of each cache line in the cache, the access frequency segment to which each cache line belongs; and
selecting, according to the quantity M of cache lines to be replaced, cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments, to obtain the replacement set, where M is an integer not less than 2.
In some embodiments of the present invention, the cache controller 603 is further configured to perform the following step:
selecting the cache lines belonging to the access frequency segments segment by segment, in ascending order of the access frequency count ranges corresponding to the multiple access frequency segments, until the number of selected cache lines equals M.
In some embodiments of the present invention, the cache controller 603 is further configured to perform the following steps:
monitoring an eviction frequency parameter, where the eviction frequency parameter includes at least one of a miss rate of the cache lines in the cache and a frequency at which the cache lines are traversed;
when the eviction frequency parameter exceeds a first threshold, adjusting the eviction ratio R to an eviction ratio R1, where the eviction ratio R1 is greater than the eviction ratio R; and
when the eviction frequency parameter is smaller than a second threshold, adjusting the eviction ratio R to an eviction ratio R2, where the eviction ratio R2 is smaller than the eviction ratio R, and the second threshold is smaller than the first threshold.
In some embodiments of the present invention, the cache controller 603 is further configured to perform the following steps:
monitoring the access frequency counts of the cache lines belonging to the replacement set; and
removing, from the replacement set, a cache line whose access frequency count exceeds a third threshold within a preset time.
A person skilled in the art may clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the detailed working processes of the systems, apparatuses, and units described above, and details are not described herein again.
An embodiment of the present invention further provides a computer program product for implementing the access request processing method, including a computer-readable storage medium storing program code, where the instructions included in the program code are used to execute the method procedure described in any one of the foregoing method embodiments. A person of ordinary skill in the art may understand that the foregoing storage medium includes various non-transitory machine-readable media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a random-access memory (Random-Access Memory, RAM), a solid state disk (Solid State Disk, SSD), or another non-volatile memory.
It should be noted that the embodiments provided in this application are merely illustrative. A person skilled in the art may clearly understand that, for convenience and brevity of description, the descriptions of the embodiments above each have their own emphasis; for a part that is not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments. The features disclosed in the embodiments of the present invention, in the claims, and in the accompanying drawings may exist independently or in combination. Features described in hardware form in the embodiments of the present invention may be implemented by software, and vice versa; this is not limited herein.
Claims (17)
- A cache management method applied to a computer system, comprising: obtaining, by a cache controller, an operation instruction, wherein the operation instruction carries a destination address, and the destination address is an address that the operation instruction is to access; when the destination address does not hit any cache line in a cache of the computer system and the cache comprises no idle cache line, selecting, by the cache controller, a cache line to be replaced from a replacement set, wherein the replacement set comprises at least two cache lines; evicting, by the cache controller, the cache line to be replaced from the cache; and storing, by the cache controller in the cache, a cache line obtained according to the destination address.
- The cache management method according to claim 1, wherein the method further comprises: determining, by the cache controller, multiple access frequency segments according to access frequency counts of the cache lines in the cache and a preset division policy, wherein each access frequency segment corresponds to a different access frequency count range; determining, by the cache controller according to the access frequency count of each cache line in the cache, the access frequency segment to which each cache line belongs; and selecting, by the cache controller according to a quantity M of cache lines to be replaced, cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments, to obtain the replacement set, wherein M is an integer not less than 2.
- The cache management method according to claim 2, wherein the selecting, by the cache controller according to the quantity M of cache lines to be replaced, cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments comprises: selecting, by the cache controller, the cache lines belonging to the access frequency segments segment by segment, in ascending order of the access frequency count ranges corresponding to the multiple access frequency segments, until the number of selected cache lines equals M.
- The cache management method according to claim 2 or 3, wherein the quantity M of cache lines to be replaced is determined according to an eviction ratio R and a total number of cache lines in the cache, and M is the product of the eviction ratio R and the total number of cache lines; and the method further comprises: monitoring, by the cache controller, an eviction frequency parameter, wherein the eviction frequency parameter comprises at least one of a miss rate of the cache lines in the cache and a frequency at which the cache lines are traversed; when the eviction frequency parameter exceeds a first threshold, adjusting, by the cache controller, the eviction ratio R to an eviction ratio R1, wherein the eviction ratio R1 is greater than the eviction ratio R; and when the eviction frequency parameter is smaller than a second threshold, adjusting, by the cache controller, the eviction ratio R to an eviction ratio R2, wherein the eviction ratio R2 is smaller than the eviction ratio R, and the second threshold is smaller than the first threshold.
- The cache management method according to any one of claims 1 to 4, wherein the access frequency count of each cache line is obtained according to the number of times the cache line is accessed; or the access frequency count of each cache line is obtained according to the number of times the cache line is accessed and the access time.
- The method according to any one of claims 1 to 4, further comprising: monitoring, by the cache controller, access frequency counts of the cache lines belonging to the replacement set; and removing, by the cache controller from the replacement set, a cache line whose access frequency count exceeds a third threshold within a preset time.
- A cache controller, comprising: an obtaining module, configured to obtain an operation instruction, wherein the operation instruction carries a destination address, and the destination address is an address that the operation instruction is to access; a first selection module, configured to: when the destination address does not hit any cache line in a cache of a computer system and the cache comprises no idle cache line, select a cache line to be replaced from a replacement set, wherein the replacement set comprises at least two cache lines; an eviction module, configured to evict the cache line to be replaced from the cache; and a storage module, configured to store, in the cache, a cache line obtained from the destination address.
- The cache controller according to claim 7, further comprising: a first determining module, configured to determine multiple access frequency segments according to access frequency counts of the cache lines in the cache and a preset division policy, wherein each access frequency segment corresponds to a different access frequency count range; a second determining module, configured to determine, according to the access frequency count of each cache line in the cache, the access frequency segment to which each cache line belongs; and a second selection module, configured to select, according to a quantity M of cache lines to be replaced, cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments, to obtain the replacement set, wherein M is an integer not less than 2.
- The cache controller according to claim 8, wherein the second selection module is specifically configured to: select the cache lines belonging to the access frequency segments segment by segment, in ascending order of the access frequency count ranges corresponding to the multiple access frequency segments, until the number of selected cache lines equals M.
- The cache controller according to claim 8 or 9, further comprising: a third determining module, configured to determine the quantity M of cache lines to be replaced according to an eviction ratio R and a total number of cache lines in the cache, wherein M is the product of the eviction ratio R and the total number of cache lines.
- The cache controller according to claim 10, further comprising: a monitoring module, configured to monitor an eviction frequency parameter, wherein the eviction frequency parameter comprises at least one of a miss rate of the cache lines in the cache and a frequency at which the cache lines are traversed; and an adjustment module, configured to: when the eviction frequency parameter exceeds a first threshold, adjust the eviction ratio R to an eviction ratio R1, wherein the eviction ratio R1 is greater than the eviction ratio R; or when the eviction frequency parameter is smaller than a second threshold, adjust the eviction ratio R to an eviction ratio R2, wherein the eviction ratio R2 is smaller than the eviction ratio R, and the second threshold is smaller than the first threshold.
- The cache controller according to claim 7, wherein: the monitoring module is further configured to monitor access frequency counts of the cache lines belonging to the replacement set; and the eviction module is further configured to remove, from the replacement set, a cache line whose access frequency count exceeds a third threshold within a preset time.
- A computer system, comprising a processor, a cache, and a cache controller, wherein: the processor is configured to send an operation instruction; and the cache controller is configured to: obtain the operation instruction, wherein the operation instruction carries a destination address, and the destination address is an address that the operation instruction is to access; when the destination address does not hit any cache line in the cache of the computer system and the cache comprises no idle cache line, select a cache line to be replaced from a replacement set, wherein the replacement set comprises at least two cache lines; evict the cache line to be replaced from the cache; and store, in the cache, a cache line obtained from the destination address.
- The computer system according to claim 13, wherein the cache controller is further configured to: determine multiple access frequency segments according to access frequency counts of the cache lines in the cache and a preset division policy, wherein each access frequency segment corresponds to a different access frequency count range; determine, according to the access frequency count of each cache line in the cache, the access frequency segment to which each cache line belongs; and select, according to a quantity M of cache lines to be replaced, cache lines to be replaced from the cache lines corresponding to the multiple access frequency segments, to obtain the replacement set, wherein M is an integer not less than 2.
- The computer system according to claim 14, wherein the cache controller is further configured to: select the cache lines belonging to the access frequency segments segment by segment, in ascending order of the access frequency count ranges corresponding to the multiple access frequency segments, until the number of selected cache lines equals M.
- The computer system according to claim 14 or 15, wherein the cache controller is further configured to: monitor an eviction frequency parameter, wherein the eviction frequency parameter comprises at least one of a miss rate of the cache lines in the cache and a frequency at which the cache lines are traversed; when the eviction frequency parameter exceeds a first threshold, adjust the eviction ratio R to an eviction ratio R1, wherein the eviction ratio R1 is greater than the eviction ratio R; and when the eviction frequency parameter is smaller than a second threshold, adjust the eviction ratio R to an eviction ratio R2, wherein the eviction ratio R2 is smaller than the eviction ratio R, and the second threshold is smaller than the first threshold.
- The computer system according to claim 13, wherein the cache controller is further configured to: monitor access frequency counts of the cache lines belonging to the replacement set; and remove, from the replacement set, a cache line whose access frequency count exceeds a third threshold within a preset time.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP16882879.6A EP3388935B1 (en) | 2016-01-06 | 2016-01-06 | Cache management method, cache controller and computer system |
PCT/CN2016/070230 WO2017117734A1 (zh) | 2016-01-06 | 2016-01-06 | Cache management method, cache controller and computer system |
CN201680059005.9A CN108139872B (zh) | 2016-01-06 | 2016-01-06 | Cache management method, cache controller and computer system |
US16/028,265 US10831677B2 (en) | 2016-01-06 | 2018-07-05 | Cache management method, cache controller, and computer system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2016/070230 WO2017117734A1 (zh) | 2016-01-06 | 2016-01-06 | Cache management method, cache controller and computer system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/028,265 Continuation US10831677B2 (en) | 2016-01-06 | 2018-07-05 | Cache management method, cache controller, and computer system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017117734A1 true WO2017117734A1 (zh) | 2017-07-13 |
Family
ID=59273134
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/070230 WO2017117734A1 (zh) | 2016-01-06 | 2016-01-06 | Cache management method, cache controller and computer system |
Country Status (4)
Country | Link |
---|---|
US (1) | US10831677B2 (zh) |
EP (1) | EP3388935B1 (zh) |
CN (1) | CN108139872B (zh) |
WO (1) | WO2017117734A1 (zh) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408411A (zh) * | 2018-09-25 | 2019-03-01 | 浙江工商大学 | L1 Cache management method for GPGPU based on data access counts |
CN111221749A (zh) * | 2019-11-15 | 2020-06-02 | 新华三半导体技术有限公司 | Data block writing method and apparatus, processor chip, and Cache |
WO2022048187A1 (zh) * | 2020-09-07 | 2022-03-10 | 华为技术有限公司 | Method and apparatus for sending a clear packet |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10845995B2 (en) * | 2017-06-30 | 2020-11-24 | Intel Corporation | Techniques to control an insertion ratio for a cache |
US10824567B2 (en) * | 2018-12-04 | 2020-11-03 | International Business Machines Corporation | Selectively preventing pre-coherence point reads in a cache hierarchy to reduce barrier overhead |
CN110519181B (zh) * | 2019-07-23 | 2022-09-06 | 中国航空无线电电子研究所 | Switching method based on cross-band time-triggered communication |
US11768779B2 (en) * | 2019-12-16 | 2023-09-26 | Advanced Micro Devices, Inc. | Cache management based on access type priority |
CN116897335A (zh) * | 2021-02-26 | 2023-10-17 | 华为技术有限公司 | Cache replacement method and apparatus |
CN113177069B (zh) * | 2021-05-08 | 2024-07-09 | 中国科学院声学研究所 | Cache and query system and query method |
CN113282523B (zh) * | 2021-05-08 | 2022-09-30 | 重庆大学 | Method and apparatus for dynamically adjusting cache shards, and storage medium |
CN113778912B (zh) * | 2021-08-25 | 2024-05-07 | 深圳市中科蓝讯科技股份有限公司 | Method for dynamically adjusting a cache mapping architecture, and cache controller |
CN117130663B (zh) * | 2023-09-19 | 2024-06-11 | 摩尔线程智能科技(北京)有限责任公司 | Instruction reading method, L2 instruction cache, electronic device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7546417B1 (en) * | 2008-07-15 | 2009-06-09 | International Business Machines Corporation | Method and system for reducing cache tag bits |
CN103246616A (zh) * | 2013-05-24 | 2013-08-14 | 浪潮电子信息产业股份有限公司 | Global shared cache replacement method based on long- and short-period access frequency |
CN103873562A (zh) * | 2014-02-27 | 2014-06-18 | 车智互联(北京)科技有限公司 | Caching method and caching system |
CN104077242A (zh) * | 2013-03-25 | 2014-10-01 | 华为技术有限公司 | Cache management method and apparatus |
CN104123243A (zh) * | 2013-04-24 | 2014-10-29 | 鸿富锦精密工业(深圳)有限公司 | Data caching system and method |
CN105094686A (zh) * | 2014-05-09 | 2015-11-25 | 华为技术有限公司 | Data caching method, cache, and computer system |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09212421A (ja) * | 1996-02-01 | 1997-08-15 | Hitachi Ltd | Data processing device and data processing system |
US7133971B2 (en) * | 2003-11-21 | 2006-11-07 | International Business Machines Corporation | Cache with selective least frequently used or most frequently used cache line replacement |
US6813691B2 (en) * | 2001-10-31 | 2004-11-02 | Hewlett-Packard Development Company, L.P. | Computer performance improvement by adjusting a count used for preemptive eviction of cache entries |
US7284095B2 (en) * | 2004-08-18 | 2007-10-16 | International Business Machines Corporation | Latency-aware replacement system and method for cache memories |
US8996812B2 (en) * | 2009-06-19 | 2015-03-31 | International Business Machines Corporation | Write-back coherency data cache for resolving read/write conflicts |
US8712984B2 (en) * | 2010-03-04 | 2014-04-29 | Microsoft Corporation | Buffer pool extension for database server |
US8914582B1 (en) * | 2011-11-14 | 2014-12-16 | Google Inc. | Systems and methods for pinning content in cache |
US8843707B2 (en) * | 2011-12-09 | 2014-09-23 | International Business Machines Corporation | Dynamic inclusive policy in a hybrid cache hierarchy using bandwidth |
US8688915B2 (en) * | 2011-12-09 | 2014-04-01 | International Business Machines Corporation | Weighted history allocation predictor algorithm in a hybrid cache |
US9767032B2 (en) * | 2012-01-12 | 2017-09-19 | Sandisk Technologies Llc | Systems and methods for cache endurance |
US10133678B2 (en) * | 2013-08-28 | 2018-11-20 | Advanced Micro Devices, Inc. | Method and apparatus for memory management |
CN104156322B (zh) * | 2014-08-05 | 2017-10-17 | 华为技术有限公司 | Cache management method and cache management apparatus |
US20160062916A1 (en) * | 2014-08-27 | 2016-03-03 | The Board Trustees Of The Leland Stanford Junior University | Circuit-based apparatuses and methods with probabilistic cache eviction or replacement |
US9405706B2 (en) * | 2014-09-25 | 2016-08-02 | Intel Corporation | Instruction and logic for adaptive dataset priorities in processor caches |
WO2016105241A1 (en) * | 2014-12-23 | 2016-06-30 | Emc Corporation | Selective compression in data storage systems |
US9892045B1 (en) * | 2015-01-30 | 2018-02-13 | EMC IP Holding Company LLC | Methods to select segments of an evicted cache unit for reinsertion into the cache |
US9720835B1 (en) * | 2015-01-30 | 2017-08-01 | EMC IP Holding Company LLC | Methods to efficiently implement coarse granularity cache eviction based on segment deletion hints |
US9921963B1 (en) * | 2015-01-30 | 2018-03-20 | EMC IP Holding Company LLC | Method to decrease computation for cache eviction using deferred calculations |
US9892044B1 (en) * | 2015-01-30 | 2018-02-13 | EMC IP Holding Company LLC | Methods to efficiently implement coarse granularity cache eviction |
JP2016170682A (ja) * | 2015-03-13 | 2016-09-23 | 富士通株式会社 | Arithmetic processing device and control method for arithmetic processing device |
US9983996B2 (en) * | 2015-12-10 | 2018-05-29 | Intel Corporation | Technologies for managing cache memory in a distributed shared memory compute system |
US10185666B2 (en) * | 2015-12-15 | 2019-01-22 | Facebook, Inc. | Item-wise simulation in a block cache where data eviction places data into comparable score in comparable section in the block cache |
US10496551B2 (en) * | 2017-06-28 | 2019-12-03 | Intel Corporation | Method and system for leveraging non-uniform miss penality in cache replacement policy to improve processor performance and power |
US20190087344A1 (en) * | 2017-09-20 | 2019-03-21 | Qualcomm Incorporated | Reducing Clean Evictions In An Exclusive Cache Memory Hierarchy |
US10585798B2 (en) * | 2017-11-27 | 2020-03-10 | Intel Corporation | Tracking cache line consumption |
2016
- 2016-01-06 CN CN201680059005.9A patent/CN108139872B/zh active Active
- 2016-01-06 EP EP16882879.6A patent/EP3388935B1/en active Active
- 2016-01-06 WO PCT/CN2016/070230 patent/WO2017117734A1/zh active Application Filing
2018
- 2018-07-05 US US16/028,265 patent/US10831677B2/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7546417B1 (en) * | 2008-07-15 | 2009-06-09 | International Business Machines Corporation | Method and system for reducing cache tag bits |
CN104077242A (zh) * | 2013-03-25 | 2014-10-01 | 华为技术有限公司 | Cache management method and apparatus |
CN104123243A (zh) * | 2013-04-24 | 2014-10-29 | 鸿富锦精密工业(深圳)有限公司 | Data caching system and method |
CN103246616A (zh) * | 2013-05-24 | 2013-08-14 | 浪潮电子信息产业股份有限公司 | Global shared cache replacement method based on long- and short-period access frequency |
CN103873562A (zh) * | 2014-02-27 | 2014-06-18 | 车智互联(北京)科技有限公司 | Caching method and caching system |
CN105094686A (zh) * | 2014-05-09 | 2015-11-25 | 华为技术有限公司 | Data caching method, cache, and computer system |
Non-Patent Citations (1)
Title |
---|
See also references of EP3388935A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109408411A (zh) * | 2018-09-25 | 2019-03-01 | 浙江工商大学 | L1 Cache management method for GPGPU based on data access counts |
CN111221749A (zh) * | 2019-11-15 | 2020-06-02 | 新华三半导体技术有限公司 | Data block writing method and apparatus, processor chip, and Cache |
WO2022048187A1 (zh) * | 2020-09-07 | 2022-03-10 | 华为技术有限公司 | Method and apparatus for sending a clear packet |
Also Published As
Publication number | Publication date |
---|---|
CN108139872A (zh) | 2018-06-08 |
EP3388935B1 (en) | 2021-09-29 |
US20180314646A1 (en) | 2018-11-01 |
CN108139872B (zh) | 2020-07-07 |
EP3388935A4 (en) | 2018-12-19 |
US10831677B2 (en) | 2020-11-10 |
EP3388935A1 (en) | 2018-10-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017117734A1 (zh) | Cache management method, cache controller and computer system | |
US10133679B2 (en) | Read cache management method and apparatus based on solid state drive | |
EP3210121B1 (en) | Cache optimization technique for large working data sets | |
EP3367251B1 (en) | Storage system and solid state hard disk | |
US10198363B2 (en) | Reducing data I/O using in-memory data structures | |
JP6613375B2 (ja) | プロファイリングキャッシュ置換 | |
US20130007373A1 (en) | Region based cache replacement policy utilizing usage information | |
US9971698B2 (en) | Using access-frequency hierarchy for selection of eviction destination | |
WO2015110046A1 (zh) | Cache的管理方法及装置 | |
JP2014535106A (ja) | ストレージ・システムの二次キャッシュ内にデータをポピュレートするための方法、制御装置、プログラム | |
JP6106028B2 (ja) | サーバ及びキャッシュ制御方法 | |
US10296466B2 (en) | Information processing device, method of controlling a cache memory, and storage medium | |
TW201432451A (zh) | 用於改善輸入輸出效能之調節資料快取速率之方法 | |
JP6402647B2 (ja) | データ配置プログラム、データ配置装置およびデータ配置方法 | |
US11461239B2 (en) | Method and apparatus for buffering data blocks, computer device, and computer-readable storage medium | |
JP2017162194A (ja) | データ管理プログラム、データ管理装置、及びデータ管理方法 | |
JP2016066259A (ja) | データ配置制御プログラム、データ配置制御装置およびデータ配置制御方法 | |
JP6194875B2 (ja) | キャッシュ装置、キャッシュシステム、キャッシュ方法、及びキャッシュプログラム | |
TWI828307B (zh) | 用於記憶體管理機會與記憶體交換任務之運算系統及管理其之方法 | |
US11301395B2 (en) | Method and apparatus for characterizing workload sequentiality for cache policy optimization | |
JP6112193B2 (ja) | アクセス制御プログラム、ディスク装置及びアクセス制御方法 | |
JP3751814B2 (ja) | キャッシュメモリ制御方式 | |
EP3153972A1 (en) | Data management program, data management device, and data management method | |
US20160259592A1 (en) | Data storing control apparatus and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16882879 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 2016882879 Country of ref document: EP |
ENP | Entry into the national phase |
Ref document number: 2016882879 Country of ref document: EP Effective date: 20180712 |