WO2017015952A1 - Method for replacing data stored in a cache memory and device using same - Google Patents

Method for replacing data stored in a cache memory and device using same

Info

Publication number
WO2017015952A1
WO2017015952A1 (PCT/CN2015/085571; CN2015085571W)
Authority
WO
WIPO (PCT)
Prior art keywords
cache
block
weight
replacement
main memory
Prior art date
Application number
PCT/CN2015/085571
Other languages
English (en)
Chinese (zh)
Inventor
相楠
宁科
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2015/085571 (WO2017015952A1)
Priority to CN201580081799.4A (CN107851068A)
Publication of WO2017015952A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control

Definitions

  • Embodiments of the present invention relate to the field of computer technologies, and in particular, to a method and a device for replacing data stored in a cache memory.
  • The cache was first proposed by Wilkes in 1951 to compensate for the speed difference between the processor (CPU) and main memory, and it is the most important part of the storage system. The larger the cache capacity, the higher the price and the greater the implementation difficulty; therefore, the cache capacity is generally much smaller than that of main memory, and some data cannot be kept in the cache. Accessing data that is not in the cache results in a cache miss, while accessing data that is in the cache results in a cache hit. Cache performance is closely related to the cache hit rate: the higher the hit rate, the less time storage accesses take and the higher the cache performance; conversely, the lower the hit rate, the lower the cache performance.
  • The choice of replacement algorithm has a significant impact on the cache hit rate.
  • The replacement algorithms currently in common use are the LRU (Least Recently Used) algorithm and the LFU (Least Frequently Used) algorithm.
  • LRU tracks how recently each block was used and always selects the least recently used block for replacement; LFU evicts the block whose contents have been accessed the fewest times.
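  • As background, the following minimal sketch (not from the patent; the per-block bookkeeping fields are assumptions) contrasts the two policies by selecting an LRU victim and an LFU victim from a 4-way cache group:

```c
/* Background sketch (not from the patent): LRU evicts the block used least
 * recently, LFU the block used least often. Bookkeeping fields are assumed. */
#include <stdio.h>

#define NUM_WAYS 4

typedef struct {
    unsigned long last_access;   /* timestamp of the most recent access */
    unsigned long access_count;  /* number of accesses since insertion */
} block_stats_t;

static int lru_victim(const block_stats_t b[NUM_WAYS]) {
    int v = 0;
    for (int i = 1; i < NUM_WAYS; i++)
        if (b[i].last_access < b[v].last_access) v = i;
    return v;
}

static int lfu_victim(const block_stats_t b[NUM_WAYS]) {
    int v = 0;
    for (int i = 1; i < NUM_WAYS; i++)
        if (b[i].access_count < b[v].access_count) v = i;
    return v;
}

int main(void) {
    block_stats_t set[NUM_WAYS] = { {100, 9}, {400, 2}, {300, 7}, {200, 5} };
    printf("LRU victim: way %d, LFU victim: way %d\n",
           lru_victim(set), lfu_victim(set));   /* ways 0 and 1 respectively */
    return 0;
}
```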
  • The performance of the cache directly affects the performance of the storage system, and the performance of the storage system in turn determines the efficiency of the whole system. The replacement algorithms of the prior art cannot effectively improve the performance of the storage system.
  • Embodiments of the present invention provide a method and a device for replacing data stored in a cache memory, which can improve the performance of the storage system.
  • A first aspect of the present invention provides a method for replacing data stored in a cache, including:
  • determining a main memory block to be stored in the main memory; determining a cache group corresponding to the main memory block, where the cache group includes at least two cache blocks, each of the cache blocks corresponds to a weight, and the weight is determined according to the access rate and access latency of the main memory to which the stored data of the cache block belongs;
  • determining a replacement cache block according to the weights of the cache blocks in the cache group; and replacing the stored data of the determined replacement cache block with the stored data of the main memory block.
  • The determining a replacement cache block according to the weights of the cache blocks in the cache group includes:
  • acquiring block pointer information, the block pointer information being used to indicate a preliminary replacement cache block; searching for the preliminary replacement cache block according to the block pointer information; and determining whether the weight of the preliminary replacement cache block is less than a first preset weight;
  • if so, using the preliminary replacement cache block as the replacement cache block.
  • The method further includes: if the weight of the preliminary replacement cache block reaches the first preset weight, decrementing the weight of the preliminary replacement cache block by one, flipping the block pointer information, and performing again the step of determining a replacement cache block according to the weights of the cache blocks in the cache group.
  • Alternatively, the determining a replacement cache block according to the weights of the cache blocks in the cache group includes:
  • determining the cache blocks in the cache group whose weight is less than a second preset weight, and determining the replacement cache block among those cache blocks according to a preset replacement algorithm, where the preset replacement algorithm includes any one of an LRU algorithm, an LFU algorithm, and a first-in first-out (FIFO) algorithm.
  • The method further includes:
  • determining the cache blocks in the cache group whose weight reaches the second preset weight, and decrementing by one the weight of each such cache block whose stored data has not been hit within a preset duration.
  • The method further includes: determining whether the free storage space of the cache group is smaller than the data volume of the main memory block, and if so, performing the step of determining a replacement cache block according to the weights of the cache blocks in the cache group; otherwise,
  • storing the stored data of the main memory block in a free cache block of the cache group.
  • The determining a main memory block to be stored in the main memory includes:
  • receiving an access request sent by the processor to the storage system; if the cache misses and the main memory hits, using the main memory block hit in the main memory as the main memory block to be stored.
  • A second aspect of the present invention provides a device for replacing data stored in a cache, including:
  • a main memory block determining module configured to determine a main memory block to be stored in the main memory
  • a group determining module, configured to determine a cache group corresponding to the main memory block, where the cache group includes at least two cache blocks, each of the cache blocks corresponds to a weight, and the weight is determined according to the access rate and access latency of the main memory to which the stored data of the cache block belongs;
  • a replacement block determining module configured to determine, according to weights of each of the cache blocks in the cache group, a replacement cache block
  • a data replacement module, configured to replace the stored data of the determined replacement cache block with the stored data of the main memory block.
  • the replacement block determining module includes:
  • a pointer information obtaining unit, configured to acquire block pointer information, where the block pointer information is used to indicate a preliminary replacement cache block;
  • a searching unit, configured to search for the preliminary replacement cache block according to the block pointer information;
  • a determining unit, configured to determine whether the weight of the preliminary replacement cache block found by the searching unit is less than a first preset weight;
  • a first replacement block determining unit, configured to use the preliminary replacement cache block as the replacement cache block if the weight of the preliminary replacement cache block is less than the first preset weight.
  • the apparatus further includes:
  • a weight control module, configured to: if the weight of the preliminary replacement cache block reaches the first preset weight, decrement the weight of the preliminary replacement cache block by one;
  • a pointer information control module, configured to: if the weight of the preliminary replacement cache block reaches the first preset weight, flip the block pointer information, and trigger the replacement block determining module to determine the replacement cache block according to the weights of the cache blocks in the cache group.
  • the replacement block determining module includes:
  • a filtering unit, configured to determine the cache blocks in the cache group whose weight is less than a second preset weight;
  • a second replacement block determining unit, configured to determine the replacement cache block, according to a preset replacement algorithm, among the cache blocks whose weight is less than the second preset weight, where the preset replacement algorithm includes any one of an LRU algorithm, an LFU algorithm, and a first-in first-out (FIFO) algorithm.
  • The filtering unit is further configured to determine the cache blocks in the cache group whose weight reaches the second preset weight.
  • The device further includes:
  • a weight control module, configured to decrement by one the weight of each cache block, among the cache blocks whose weight reaches the second preset weight, whose stored data has not been hit within a preset duration.
  • the device further includes:
  • a capacity detecting module, configured to determine whether the free storage space of the cache group is smaller than the data volume of the main memory block, and if so, trigger the replacement block determining module to determine a replacement cache block according to the weights of the cache blocks in the cache group;
  • the data replacement module is further configured to:
  • store the stored data of the main memory block in a free cache block of the cache group.
  • the main memory block determining module is specifically configured to:
  • receive an access request sent by the processor to the storage system; if the cache misses and the main memory hits, use the main memory block hit in the main memory as the main memory block to be stored.
  • A third aspect of the present invention provides a terminal device, where the terminal device includes a processor and a memory, the memory stores a set of programs, and the processor is configured to invoke the programs stored in the memory, so that the terminal device performs the method according to any implementation of the first aspect.
  • A fourth aspect of the present invention provides a computer storage medium storing a program, where the program includes instructions for performing the method according to any implementation of the first aspect.
  • Each cache group includes at least two cache blocks, and each cache block is provided with a weight. Since the weight is determined according to the access rate and access delay of the main memory to which the stored data of the cache block belongs, determining the replacement cache block according to the weights of the cache blocks in the determined cache group reduces the number of times that stored data with a high miss delay is evicted from the cache, thereby improving the performance of the storage system.
  • FIG. 1 is a schematic diagram of set-associative mapping
  • FIG. 2 is a flowchart of a method for replacing data stored in a cache according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of the access rates and access delays of multiple main memories
  • FIG. 4 is a flowchart of another method for replacing data stored in a cache according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a device for replacing data stored in a cache according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of a replacement block determining module according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
  • Address mapping refers to the correspondence between the address of a piece of data in main memory and its address in the cache.
  • A commonly used address mapping is set-associative mapping. In set-associative mapping, as shown in FIG. 1, the cache is divided into several regions of the same size, and each region is divided into blocks that are numbered in a direct-mapped manner; therefore, the cache contains multiple cache blocks with the same block number.
  • The main memory is divided into pages according to the size of a region, and each page is divided into blocks according to the size of a cache block; each main memory block can correspond to the cache blocks with the same block number in different regions.
  • For example, the 0th block of the 0th page of main memory can correspond to the 0th block of the 0th region of the cache, or to the 0th block of the jth region.
  • The cache blocks corresponding to the same main memory blocks are grouped into a cache group. For example, the 0th blocks of the 0th to jth regions correspond to the 0th blocks of the 0th to mth pages of main memory, so the 0th blocks of the 0th to jth regions form one cache group; that is, the 0th block of any of the 0th to mth pages of main memory can be stored in any cache block of this cache group. The number of a cache group can be determined according to the block number of the main memory block; for example, the cache group consisting of the 0th blocks of the 0th to jth regions is the 0th group.
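  • As an illustration of this mapping, the cache group for a main memory block can be derived from its block number, as in the following sketch (not from the patent; the block size, group count, and way count are assumed values):

```c
/* Illustrative sketch (not from the patent) of set-associative address mapping:
 * the cache group index is derived from the main-memory block number.
 * BLOCK_SIZE, NUM_SETS and NUM_WAYS are assumed example values. */
#include <stdio.h>

#define BLOCK_SIZE 64u    /* bytes per cache block (assumed) */
#define NUM_SETS   128u   /* number of cache groups (assumed) */
#define NUM_WAYS   4u     /* cache blocks per group, i.e. j + 1 = 4 (assumed) */

/* A main memory block with block number b maps to cache group b % NUM_SETS
 * and may be placed in any of the NUM_WAYS blocks of that group. */
static unsigned set_index(unsigned long long addr) {
    unsigned long long block_number = addr / BLOCK_SIZE;
    return (unsigned)(block_number % NUM_SETS);
}

int main(void) {
    unsigned long long addr = 0x12345678ULL;
    printf("address 0x%llx -> cache group %u (one of %u ways)\n",
           addr, set_index(addr), NUM_WAYS);
    return 0;
}
```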
  • The performance of a storage system is usually evaluated as (number of hits * hit delay + number of misses * miss delay). For a storage system with only one main memory, increasing the cache hit rate improves the performance of the storage system accordingly. However, for a storage system with multiple main memories, the access delay of the CPU differs for each main memory. For example, a storage system may include PL2 (Private L2), SL2 (Shared L2), SL3 (Shared L3), and DDR (Double Data Rate synchronous dynamic random access memory); DDR has the longest access latency and PL2 has the shortest. On a miss, the miss delay of a CPU access to DDR is therefore much larger than the miss delay of an access to PL2.
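  • The following sketch illustrates this cost metric with made-up cycle counts (the numbers are assumptions, not values from the patent); it shows why, at the same hit rate, misses served by a slow main memory such as DDR dominate the cost:

```c
/* Illustrative calculation (not from the patent) of the cost metric
 * (number of hits * hit delay + number of misses * miss delay).
 * All cycle counts and the access mix are made-up example values. */
#include <stdio.h>

int main(void) {
    unsigned long hits = 9000, misses = 1000;  /* assumed access mix */
    double hit_delay = 4.0;                    /* cycles, assumed */
    double miss_delay_pl2 = 20.0;              /* cycles, assumed */
    double miss_delay_ddr = 200.0;             /* cycles, assumed */

    double cost_pl2 = hits * hit_delay + misses * miss_delay_pl2;
    double cost_ddr = hits * hit_delay + misses * miss_delay_ddr;

    /* Same hit rate, but misses served by DDR cost far more, which is why
     * keeping data with a high miss delay in the cache matters. */
    printf("cost if misses go to PL2: %.0f cycles\n", cost_pl2);
    printf("cost if misses go to DDR: %.0f cycles\n", cost_ddr);
    return 0;
}
```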
  • The embodiments of the present invention improve the performance of the storage system by reducing the number of times that stored data with a high miss delay is evicted from the cache, as described in the following embodiments.
  • FIG. 2 is a flowchart of a method for replacing data stored in a cache according to an embodiment of the present invention. As shown in FIG. 2, the method may include:
  • Step S201: determining a main memory block to be stored in the main memory.
  • Specifically, an access request sent by the CPU to the storage system is received, where the access request carries an access address. The storage system queries whether the stored data corresponding to the access address is in the cache. If the cache does not hold the stored data corresponding to the access address, that is, the cache misses, the main memory is queried; if the main memory hits, the main memory block hit in the main memory is used as the main memory block to be stored.
  • When the main memory hits, it is also possible to detect whether the number of hits of the hit main memory block within a preset duration reaches a preset count threshold, and if so, to use that main memory block as the main memory block to be stored. It should be noted that the preset count threshold can be adjusted according to actual needs.
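  • A minimal sketch of this optional check is shown below (the counter field and the threshold value are assumptions):

```c
/* Minimal sketch (assumed bookkeeping, not from the patent) of the optional
 * check above: a block that missed in the cache but hit in main memory is
 * only cached once its hit count within the preset duration reaches a
 * threshold. */
#include <stdbool.h>
#include <stdio.h>

#define PRESET_COUNT_THRESHOLD 3   /* adjustable according to actual needs */

typedef struct {
    unsigned hits_in_window;       /* hits on this main memory block within the preset duration */
} mm_block_stats_t;

static bool should_cache(mm_block_stats_t *s) {
    s->hits_in_window++;                       /* count the current hit */
    return s->hits_in_window >= PRESET_COUNT_THRESHOLD;
}

int main(void) {
    mm_block_stats_t stats = { .hits_in_window = 0 };
    for (int i = 1; i <= 4; i++)
        printf("hit %d -> cache it? %s\n", i, should_cache(&stats) ? "yes" : "no");
    return 0;
}
```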
  • Step S202: determining a cache group corresponding to the main memory block, where the cache group includes at least two cache blocks, each of the cache blocks corresponds to a weight, and the weight is determined according to the access rate and access latency of the main memory to which the stored data of the cache block belongs.
  • After the main memory block to be stored is determined, the cache group corresponding to the main memory block may be determined.
  • Specifically, the corresponding cache group may be determined according to the block number of the main memory block, as shown in FIG. 1. If the main memory block to be stored is the 0th block of the 1st page, it can be determined that the corresponding cache group is the 0th group, which includes the 0th blocks of the 0th to jth regions.
  • Each cache block in the cache group is provided with a weight, and the weight is determined according to the access rate and access delay of the main memory to which the stored data of the cache block belongs.
  • Specifically, each main memory is provided with a corresponding weight.
  • The weight may be an empirical value determined with reference to the access delay and access rate of the main memory: the longer the access delay and the lower the access rate, the higher the weight.
  • Take a storage system with four main memories, PL2, SL2, SL3, and DDR, as an example. As shown in FIG. 3, DDR has the lowest access rate and the longest access delay and therefore the highest weight; PL2 has the highest access rate and the shortest access delay and therefore the lowest weight; SL2 and SL3 are in between.
  • For example, the weight of DDR is 3, the weight of SL3 is 2, the weight of SL2 is 1, and the weight of PL2 is 0.
  • Accordingly, when a main memory block of DDR is stored in the cache as the main memory block to be stored, the weight of the corresponding cache block is 3; when a main memory block of SL3 is stored in the cache, the weight of the corresponding cache block is 2; when a main memory block of SL2 is stored in the cache, the weight of the corresponding cache block is 1; and when a main memory block of PL2 is stored in the cache, the weight of the corresponding cache block is 0.
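  • The weight assignment in this example can be expressed as a simple lookup, as in the following sketch (the enum and function names are assumptions; the values match the example above):

```c
/* Minimal sketch (assumed names, values matching the example above) of
 * assigning a weight to a cache block based on which main memory its data
 * came from. */
#include <stdio.h>

typedef enum { MEM_PL2, MEM_SL2, MEM_SL3, MEM_DDR } mem_level_t;

/* Longer access delay and lower access rate -> higher weight. */
static int weight_for_level(mem_level_t level) {
    switch (level) {
    case MEM_DDR: return 3;
    case MEM_SL3: return 2;
    case MEM_SL2: return 1;
    case MEM_PL2: return 0;
    }
    return 0;
}

int main(void) {
    printf("DDR-backed block weight: %d\n", weight_for_level(MEM_DDR));
    printf("PL2-backed block weight: %d\n", weight_for_level(MEM_PL2));
    return 0;
}
```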
  • Step S203: determining a replacement cache block according to the weights of the cache blocks in the cache group.
  • Specifically, block pointer information may be obtained, where the block pointer information is used to indicate a preliminary replacement cache block; the preliminary replacement cache block is found according to the block pointer information, and it is determined whether the weight of the preliminary replacement cache block is less than a first preset weight; if so, the preliminary replacement cache block is used as the replacement cache block.
  • The cache blocks in each cache group can be encoded in binary, as shown in FIG. 1. Assume j is equal to 3, so that each cache group includes 4 cache blocks. Taking cache group 0 as an example, the 0th block of the 0th region, the 0th block of the 1st region, the 0th block of the 2nd region, and the 0th block of the 3rd region have the binary codes 00, 01, 10, and 11, respectively, and the block pointer information can likewise be represented in binary.
  • The block pointer information is acquired; if the block pointer information is 00, the preliminary replacement cache block found according to the block pointer information is the 0th block of the 0th region.
  • It is then determined whether the weight of the 0th block of the 0th region is less than the first preset weight. Assuming that the first preset weight is 2 and the data stored in the 0th block of the 0th region comes from SL2, that is, its weight is 1, it can be determined that the weight of the 0th block of the 0th region is less than the first preset weight, and the 0th block of the 0th region is used as the replacement cache block.
  • If the weight of the preliminary replacement cache block reaches the first preset weight (assume the first preset weight is still 2 and the data stored in the 0th block of the 0th region comes from DDR, that is, its weight is 3), the weight of the preliminary replacement cache block is decremented by one and the block pointer information is flipped. Flipping the block pointer information may be a binary increment by one: if the block pointer information before flipping is 00, the flipped block pointer information is 01.
  • The reason is that the access delay of the stored data in the preliminary replacement cache block is large; if that stored data were evicted from the cache directly, the CPU would suffer a large miss delay the next time it accesses the data, which would reduce the performance of the storage system. Therefore, the process returns to step S203, and the replacement cache block is determined again according to the flipped block pointer information.
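  • The pointer-based selection described above can be sketched roughly as follows (an illustration of the idea, not the patented implementation; the threshold and the structure fields are assumptions):

```c
/* Illustrative sketch (not the patented implementation) of the pointer-based
 * selection described above: the block pointer walks the cache group; a block
 * whose weight is below the first preset weight is chosen as the victim,
 * otherwise its weight is decremented and the pointer is advanced ("flipped"). */
#include <stdio.h>

#define NUM_WAYS 4              /* cache blocks per cache group (j + 1 = 4) */
#define FIRST_PRESET_WEIGHT 2   /* assumed threshold */

typedef struct {
    int weight;                 /* set from the main memory the data came from */
} cache_block_t;

typedef struct {
    cache_block_t block[NUM_WAYS];
    unsigned pointer;           /* block pointer information, 0..NUM_WAYS-1 */
} cache_set_t;

/* Returns the index of the replacement cache block within the group. */
static unsigned select_victim(cache_set_t *set) {
    for (;;) {
        cache_block_t *cand = &set->block[set->pointer];
        if (cand->weight < FIRST_PRESET_WEIGHT) {
            return set->pointer;            /* preliminary block becomes victim */
        }
        cand->weight--;                     /* protect it once, then move on */
        set->pointer = (set->pointer + 1) % NUM_WAYS;   /* flip the pointer */
    }
}

int main(void) {
    cache_set_t set = { .block = { {3}, {1}, {2}, {0} }, .pointer = 0 };
    unsigned victim = select_victim(&set);
    printf("replacement cache block: way %u\n", victim);   /* expect way 1 */
    return 0;
}
```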
  • Alternatively, the cache blocks in the cache group whose weight is less than a second preset weight may be determined, and the replacement cache block may be determined among those cache blocks according to a preset replacement algorithm, where the preset replacement algorithm includes any one of an LRU algorithm, an LFU algorithm, and a first-in first-out (FIFO) algorithm.
  • Before the replacement cache block is determined among the cache blocks whose weight is less than the second preset weight according to the preset replacement algorithm, the cache blocks in the cache group whose weight reaches the second preset weight may further be determined, and among them, the weight of each cache block whose stored data has not been hit within a preset duration is decremented by one.
  • As shown in FIG. 1, each cache group includes 4 cache blocks. Assume the determined cache group is the 0th group, the 0th block of the 0th region has a weight of 3, the 0th block of the 1st region has a weight of 2, the 0th block of the 2nd region has a weight of 2, the 0th block of the 3rd region has a weight of 1, and the second preset weight is 2. If the stored data of the first three blocks has not been hit within the preset duration, the weight of the 0th block of the 0th region is reduced to 2, the weight of the 0th block of the 1st region is reduced to 1, and the weight of the 0th block of the 2nd region is reduced to 1. The replacement cache block is then determined, according to the preset replacement algorithm, among the cache blocks whose weight is less than 2.
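  • The alternative selection in this example can be sketched roughly as follows (an illustration, not the patented implementation; the LRU fallback with timestamps and the threshold value are assumptions consistent with the description above):

```c
/* Illustrative sketch (assumptions noted in comments, not the patented code)
 * of the alternative selection above: decay the weights of stale high-weight
 * blocks, then run a conventional policy (LRU here) over the blocks whose
 * weight is below the second preset weight. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_WAYS 4
#define SECOND_PRESET_WEIGHT 2   /* assumed threshold */

typedef struct {
    int weight;                  /* from the backing main memory */
    bool hit_recently;           /* whether the data was hit within the preset duration */
    unsigned long last_access;   /* timestamp used by the LRU fallback */
} cache_block_t;

static int select_victim(cache_block_t set[NUM_WAYS]) {
    /* Decay: blocks at or above the threshold that were not hit recently
     * lose one unit of weight, so they cannot stay protected forever. */
    for (int i = 0; i < NUM_WAYS; i++) {
        if (set[i].weight >= SECOND_PRESET_WEIGHT && !set[i].hit_recently) {
            set[i].weight--;
        }
    }
    /* LRU among the blocks whose weight is below the threshold. */
    int victim = -1;
    for (int i = 0; i < NUM_WAYS; i++) {
        if (set[i].weight < SECOND_PRESET_WEIGHT &&
            (victim < 0 || set[i].last_access < set[victim].last_access)) {
            victim = i;
        }
    }
    return victim;   /* -1 would mean every block is still protected */
}

int main(void) {
    /* Weights 3, 2, 2, 1 as in the example above; none hit recently. */
    cache_block_t set[NUM_WAYS] = {
        {3, false, 10}, {2, false, 20}, {2, false, 30}, {1, false, 40}
    };
    printf("replacement cache block: way %d\n", select_victim(set)); /* way 1 */
    return 0;
}
```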
  • It should be noted that, before step S203 is performed, it may be determined whether the free storage space of the cache group is smaller than the data volume of the main memory block. If it is smaller, step S203 is performed; otherwise, if the storage space of the cache group is sufficient to store the stored data of the main memory block, the stored data of the main memory block is stored directly in a free cache block of the cache group.
  • Step S204: replacing the stored data of the determined replacement cache block with the stored data of the main memory block.
  • In this embodiment, the replacement cache block is determined according to the weight of each cache block in the cache group, and the weight of each cache block is determined by the access rate and access delay of the main memory to which the stored data of the cache block belongs.
  • The lower the access rate and the longer the access delay, the larger the weight; and the larger the weight, the longer the stored data stays in the cache. This reduces the number of times that stored data with a high miss delay is evicted from the cache, which in turn improves the performance of the storage system.
  • FIG. 4 is a flowchart of another method for replacing data stored in a cache according to an embodiment of the present invention.
  • the method as shown in FIG. 4 may include:
  • Step S401: determining a main memory block to be stored in the main memory.
  • Specifically, an access request sent by the CPU to the storage system is received, where the access request carries an access address. The storage system queries whether the stored data corresponding to the access address is in the cache. If the cache does not hold the stored data corresponding to the access address, that is, the cache misses, the main memory is queried; if the main memory hits, the main memory block hit in the main memory is used as the main memory block to be stored.
  • When the main memory hits, it is also possible to detect whether the number of hits of the hit main memory block within a preset duration reaches a preset count threshold, and if so, to use that main memory block as the main memory block to be stored. It should be noted that the preset count threshold can be adjusted according to actual needs.
  • Step S402: determining a cache group corresponding to the main memory block, where the cache group includes at least two cache blocks, each of the cache blocks corresponds to a weight, and the weight is determined according to the access rate and access latency of the main memory to which the stored data of the cache block belongs.
  • After the main memory block to be stored is determined, the cache group corresponding to the main memory block may be determined.
  • Specifically, the corresponding cache group may be determined according to the block number of the main memory block, as shown in FIG. 1. If the main memory block to be stored is the 0th block of the 1st page, it can be determined that the corresponding cache group is the 0th group, which includes the 0th blocks of the 0th to jth regions.
  • Each cache block in the cache group is provided with a weight, and the weight is determined according to the access rate and access delay of the main memory to which the stored data of the cache block belongs.
  • Specifically, each main memory is provided with a corresponding weight.
  • The weight may be an empirical value determined with reference to the access delay and access rate of the main memory: the longer the access delay and the lower the access rate, the higher the weight.
  • Take a storage system with four main memories, PL2, SL2, SL3, and DDR, as an example. DDR has the longest access delay and the lowest access rate, and therefore the highest weight; PL2 has the shortest access delay and the highest access rate, and therefore the lowest weight; SL2 and SL3 are in between.
  • the weight of DDR is 3, the weight of SL3 is 2, the weight of SL2 is 1, and the weight of PL2 is 0.
  • Accordingly, when a main memory block of DDR is stored in the cache as the main memory block to be stored, the weight of the corresponding cache block is 3; when a main memory block of SL3 is stored in the cache, the weight of the corresponding cache block is 2; when a main memory block of SL2 is stored in the cache, the weight of the corresponding cache block is 1; and when a main memory block of PL2 is stored in the cache, the weight of the corresponding cache block is 0.
  • Step S403: acquiring block pointer information, where the block pointer information is used to indicate a preliminary replacement cache block.
  • The cache blocks in each cache group can be encoded in binary, as shown in FIG. 1. Assume j is equal to 3, so that each cache group includes 4 cache blocks. Taking cache group 0 as an example, the 0th block of the 0th region, the 0th block of the 1st region, the 0th block of the 2nd region, and the 0th block of the 3rd region have the binary codes 00, 01, 10, and 11, respectively, and the block pointer information can likewise be represented in binary.
  • The block pointer information is then acquired.
  • Step S404: searching for the preliminary replacement cache block according to the block pointer information.
  • Step S405: determining whether the weight of the preliminary replacement cache block is less than the first preset weight; if the determination result is no, step S406 is performed; otherwise, step S407 is performed.
  • Step S406: flipping the block pointer information, decrementing the weight of the preliminary replacement cache block by one, and returning to step S403.
  • When step S403 is executed again, the block pointer information acquired is the flipped block pointer information.
  • Step S407: using the preliminary replacement cache block as the replacement cache block.
  • Step S408: replacing the stored data of the determined replacement cache block with the stored data of the main memory block.
  • In this embodiment, each cache group includes at least two cache blocks, each of the cache blocks is provided with a weight, and the weight is determined according to the access rate and access delay of the main memory to which the stored data of the cache block belongs. After the cache group corresponding to the main memory block is determined, the preliminary replacement cache block is found in the determined cache group according to the block pointer information. If the weight of the preliminary replacement cache block reaches the first preset weight, indicating that the access delay of its stored data is large, the stored data of the preliminary replacement cache block is not evicted from the cache.
  • This increases the time that stored data with a larger miss delay stays in the cache and reduces the number of times that stored data with a high miss delay is evicted from the cache, thereby improving the performance of the storage system.
  • FIG. 5 is a schematic diagram of a device for replacing data stored in a cache according to an embodiment of the present invention.
  • The device for replacing data stored in a cache according to the embodiment of the present invention may be applied to a base station baseband system or an embedded computer system.
  • The data replacement device 5 shown in FIG. 5 may include at least a main memory block determining module 51, a group determining module 52, a replacement block determining module 53, and a data replacement module 54, where:
  • a main memory block determining module 51 configured to determine a main memory block to be stored in the main memory
  • the main memory block determining module 51 is specifically configured to:
  • receive an access request sent by the processor to the storage system; if the cache misses and the main memory hits, use the main memory block hit in the main memory as the main memory block to be stored.
  • The group determining module 52 is configured to determine a cache group corresponding to the main memory block, where the cache group includes at least two cache blocks, each of the cache blocks is provided with a weight, and the weight is determined according to the access rate and access latency of the main memory to which the stored data of the cache block belongs.
  • the replacement block determining module 53 is configured to determine a replacement cache block according to weights of each of the cache blocks in the cache group.
  • The data replacement module 54 is configured to replace the stored data of the determined replacement cache block with the stored data of the main memory block.
  • the replacement block determining module 53 may include at least a pointer information acquiring unit 531, a searching unit 532, a determining unit 533, and a first replacement block determining unit 534, as shown in FIG. 6, wherein:
  • a pointer information obtaining unit 531 configured to acquire block pointer information, where the block pointer information is used to indicate a preliminary replacement cache block;
  • the searching unit 532 is configured to search for the preliminary replacement cache block according to the block pointer information;
  • the determining unit 533 is configured to determine whether the weight of the preliminary replacement cache block found by the searching unit 532 is less than the first preset weight;
  • the first replacement block determining unit 534 is configured to use the preliminary replacement cache block as the replacement cache block if the weight of the preliminary replacement cache block is less than the first preset weight.
  • The data replacement device 5 may further include a weight control module 55 and a pointer information control module 56, where:
  • the weight control module 55 is configured to: if the weight of the preliminary replacement cache block reaches the first preset weight, decrement the weight of the preliminary replacement cache block by one;
  • the pointer information control module 56 is configured to: if the weight of the preliminary replacement cache block reaches the first preset weight, flip the block pointer information and trigger the replacement block determining module 53 to determine the replacement cache block according to the weights of the cache blocks in the cache group.
  • the replacement block determining module 53 may at least include: a filtering unit 535 and a second replacement block determining unit 536, as shown in FIG. 6, wherein:
  • the filtering unit 535 is configured to determine a cache block in the cache group that has a weight less than a second preset weight
  • a second replacement block determining unit 536, configured to determine the replacement cache block, according to a preset replacement algorithm, among the cache blocks whose weight is less than the second preset weight, where the preset replacement algorithm includes any one of an LRU algorithm, an LFU algorithm, and a FIFO algorithm.
  • The first replacement block determining unit 534 and the second replacement block determining unit 536 may be combined or may be independent of each other, which is not limited in the present invention.
  • The filtering unit 535 is further configured to determine the cache blocks in the cache group whose weight reaches the second preset weight.
  • The weight control module 55 is further configured to:
  • decrement by one the weight of each cache block, among the cache blocks whose weight reaches the second preset weight, whose stored data has not been hit within a preset duration.
  • The data replacement device 5 may further include a capacity detecting module 57, configured to determine whether the free storage space of the cache group is smaller than the data volume of the main memory block, and if the free storage space of the cache group is smaller than the data volume of the main memory block, trigger the replacement block determining module 53 to determine the replacement cache block according to the weights of the cache blocks in the cache group.
  • The data replacement module 54 is further configured to:
  • store the stored data of the main memory block in a free cache block of the cache group.
  • FIG. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
  • the storage system of the terminal device is provided with a cache and at least two main memories.
  • the terminal device 7 may include at least one processor 71, such as a CPU, at least one communication bus 72, and a memory 73.
  • the communication bus 72 is used to implement connection communication between these components.
  • the memory 73 may be a high speed RAM memory or a non-volatile memory such as at least one disk memory. Alternatively, the memory 73 may also be at least one storage device located away from the aforementioned processor 71.
  • a set of program codes is stored in the memory 73, and the processor 71 is configured to call the program code stored in the memory 73 for performing the following operations:
  • determining a main memory block to be stored in the main memory; determining a cache group corresponding to the main memory block, where the cache group includes at least two cache blocks, each of the cache blocks is provided with a weight, and the weight is determined according to the access rate and access latency of the main memory to which the stored data of the cache block belongs;
  • determining a replacement cache block according to the weights of the cache blocks in the cache group; and
  • replacing the stored data of the determined replacement cache block with the stored data of the main memory block.
  • The determining, by the processor 71, of the replacement cache block according to the weights of the cache blocks in the cache group may be:
  • acquiring block pointer information, the block pointer information being used to indicate a preliminary replacement cache block; searching for the preliminary replacement cache block according to the block pointer information; and determining whether the weight of the preliminary replacement cache block is less than a first preset weight;
  • if so, using the preliminary replacement cache block as the replacement cache block.
  • The processor 71 may further perform the following operations: if the weight of the preliminary replacement cache block reaches the first preset weight, decrementing the weight of the preliminary replacement cache block by one, flipping the block pointer information, and performing again the operation of determining the replacement cache block according to the weights of the cache blocks in the cache group.
  • Alternatively, the determining, by the processor 71, of the replacement cache block according to the weights of the cache blocks in the cache group may be:
  • determining the cache blocks in the cache group whose weight is less than a second preset weight, and determining the replacement cache block among those cache blocks according to a preset replacement algorithm, where the preset replacement algorithm includes any one of an LRU algorithm, an LFU algorithm, and a first-in first-out (FIFO) algorithm.
  • The processor 71 may further perform the following operations:
  • determining the cache blocks in the cache group whose weight reaches the second preset weight, and decrementing by one the weight of each such cache block whose stored data has not been hit within a preset duration.
  • After the processor 71 determines the cache group corresponding to the main memory block, the following operation may also be performed:
  • if the free storage space of the cache group is sufficient to store the stored data of the main memory block, storing the stored data of the main memory block in a free cache block of the cache group.
  • The determining, by the processor 71, of the main memory block to be stored in the main memory may be:
  • receiving an access request sent by the processor to the storage system; if the cache misses and the main memory hits, using the main memory block hit in the main memory as the main memory block to be stored.
  • An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a program, and the program includes instructions for performing some or all of the steps of the method described with reference to FIG. 2 or FIG. 4 in the embodiments of the present invention.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Disclosed are a method for replacing data stored in a cache memory and a device using same, the cache memory being configured to store data from a plurality of main memories, and the cache memory having different access latencies when accessing each of the plurality of main memories. The method comprises the steps of: determining a main memory block containing data to be cached (S201); determining a cache group corresponding to the main memory block and comprising at least two cache blocks, each of the cache blocks having a weight determined according to the access rate and the access latency of the main memory containing the data cached in that cache block (S202); determining, according to the weight of each of the cache blocks in the cache group, a replacement cache block (S203); and replacing the data cached in the determined replacement cache block with the data to be cached from the main memory (S204). The method and the device can reduce the number of times that cached data with a high cache-miss latency is replaced, thereby improving the performance of a data storage system.
PCT/CN2015/085571 2015-07-30 2015-07-30 Method for replacing data stored in a cache memory and device using same WO2017015952A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2015/085571 WO2017015952A1 (fr) 2015-07-30 2015-07-30 Method for replacing data stored in a cache memory and device using same
CN201580081799.4A CN107851068A (zh) 2015-07-30 2015-07-30 Replacement method and replacement device for data stored in a cache memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/085571 WO2017015952A1 (fr) 2015-07-30 2015-07-30 Method for replacing data stored in a cache memory and device using same

Publications (1)

Publication Number Publication Date
WO2017015952A1 true WO2017015952A1 (fr) 2017-02-02

Family

ID=57883936

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/085571 WO2017015952A1 (fr) 2015-07-30 2015-07-30 Method for replacing data stored in a cache memory and device using same

Country Status (2)

Country Link
CN (1) CN107851068A (fr)
WO (1) WO2017015952A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112395221B (zh) * 2020-11-20 2023-02-10 华中科技大学 Cache replacement method and device based on the energy consumption characteristics of MLC STT-RAM

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7496711B2 (en) * 2006-07-13 2009-02-24 International Business Machines Corporation Multi-level memory architecture with data prioritization
CN104375957B (zh) * 2013-08-15 2018-10-09 华为技术有限公司 Data replacement method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1255986A (zh) * 1997-03-14 2000-06-07 艾利森电话股份有限公司 Penalty-based cache storage and replacement techniques
CN102207909A (zh) * 2011-05-31 2011-10-05 孟小峰 Cost-based buffer replacement method for flash-memory databases
CN102289354A (zh) * 2011-06-17 2011-12-21 华中科技大学 Cache replacement method giving priority to failed disks
CN103150122A (zh) * 2011-12-07 2013-06-12 华为技术有限公司 Disk cache space management method and device

Also Published As

Publication number Publication date
CN107851068A (zh) 2018-03-27

Similar Documents

Publication Publication Date Title
TWI684099B (zh) Profiling cache replacement
US10929308B2 (en) Performing maintenance operations
US20160378652A1 (en) Cache memory system and processor system
US8095734B2 (en) Managing cache line allocations for multiple issue processors
US20130262767A1 (en) Concurrently Accessed Set Associative Overflow Cache
US10007615B1 (en) Methods and apparatus for performing fast caching
WO2017184497A1 (fr) Method for monitoring memory of tagged objects and processing apparatus
EP3023878B1 (fr) Method and apparatus for querying a physical memory address
JP6027562B2 (ja) Cache memory system and processor system
CN108073527A (zh) Cache replacement method and device
WO2016015583A1 (fr) Memory management method and device, and memory controller
US10831673B2 (en) Memory address translation
US20080307169A1 (en) Method, Apparatus, System and Program Product Supporting Improved Access Latency for a Sectored Directory
CN107562806B (zh) Adaptive-aware acceleration method and system for a hybrid-memory file system
US10853262B2 (en) Memory address translation using stored key entries
WO2017015952A1 (fr) Method for replacing data stored in a cache memory and device using same
US20180052778A1 (en) Increase cache associativity using hot set detection
CN107861819B (zh) Cache set load balancing method and device, and computer-readable storage medium
WO2021008552A1 (fr) Data reading method and apparatus, and computer-readable storage medium
KR101976320B1 (ko) Last-level cache memory and data management method thereof
US10866904B2 (en) Data storage for multiple data types
CN116069719 (zh) Processor, memory controller, system-on-chip, and data prefetching method
KR20240069323A (ko) Computer system including a main memory device composed of heterogeneous memories, and operating method thereof
CN117222989 (zh) DRAM-aware cache
CN109508302 (zh) Content filling method and memory

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15899304

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15899304

Country of ref document: EP

Kind code of ref document: A1