WO2019052442A1 - Content filling method and memory - Google Patents

Content filling method and memory

Info

Publication number
WO2019052442A1
WO2019052442A1 (PCT/CN2018/105043, CN2018105043W)
Authority
WO
WIPO (PCT)
Prior art keywords
group
cache
cache entry
content
access source
Prior art date
Application number
PCT/CN2018/105043
Other languages
English (en)
French (fr)
Inventor
李琪
崔鲁平
熊礼文
徐志通
陈俊锐
余谓为
孙璐
李又麟
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2019052442A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877 Cache access modes
    • G06F 12/0806 Multiuser, multiprocessor or multiprocessing cache systems
    • G06F 12/084 Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
    • G06F 12/12 Replacement control
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a content filling method and a memory.
  • Multiple access sources in a processor can access the same cache memory, referred to below simply as a cache.
  • When an access source accesses the cache, finding the requested content is a cache hit; failing to find the required content is a cache miss.
  • On a miss, the content required by the access source must be fetched from other storage and filled into the cache, replacing content originally in the cache, and that original content may be required by other access sources. If the original content needs to be accessed frequently, it must be re-filled into the cache. Content may thus be repeatedly replaced and re-filled, so that the filled content of multiple access sources tramples on each other, increasing access latency and reducing processor performance.
  • The embodiments of the present application provide a content filling method and a cache memory, which can improve processor performance.
  • In a first aspect, an embodiment of the present application provides a content filling method. The method includes: when content required by an access source needs to be filled into a cache entry, determining a first group to which the access source belongs; detecting whether cache entries corresponding to the first group are idle; and, if a first cache entry among the cache entries corresponding to the first group is idle, filling the content required by the access source into the first cache entry.
  • Because access sources are divided into groups and content is filled according to the cache entries corresponding to each group, the probability that the filled content of different access sources tramples on each other can be reduced, thereby reducing access latency and improving processor performance.
  • the group to which the access source belongs is determined according to the type of the access source or the identifier of the access source; or the group to which the access source belongs is determined according to a hash algorithm.
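As a minimal illustrative sketch (not from the patent; the function names and grouping parameters are hypothetical), the two determination manners above can be modeled as follows, with a simple modulo standing in for a real hash algorithm:

```python
def group_by_identifier(source_id: int, sources_per_group: int) -> int:
    # Adjacent access-source identifiers fall into the same group.
    return source_id // sources_per_group

def group_by_hash(source_id: int, num_groups: int) -> int:
    # A preset algorithm such as a hash; modulo is only a stand-in.
    return source_id % num_groups

# Sources 4 and 5 land in the same group when grouping two-by-two.
assert group_by_identifier(4, 2) == group_by_identifier(5, 2) == 2
```

Either mapping is deterministic, so a controller could recompute a source's group on every fill without storing extra state.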
  • the method may further include: if none of the cache entries corresponding to the first group is idle, detecting whether cache entries corresponding to other groups are idle; and, if a second cache entry among the cache entries corresponding to a second group of the other groups is idle, filling the content required by the access source into the second cache entry.
  • the cache space can be flexibly utilized.
  • the method may further include: if none of the cache entries corresponding to the first group is idle, detecting whether the content cached in the cache entries corresponding to the first group includes cross-group content, where the access source of the cross-group content does not belong to the first group; and, if the content cached in a third cache entry corresponding to the first group is cross-group content, filling the content required by the access source into the third cache entry.
  • the cache space can be flexibly utilized.
  • the method may further include: if no cross-group content is cached in the cache entries corresponding to the first group, selecting any one of the cache entries corresponding to the first group as a fourth cache entry, and filling the content required by the access source into the fourth cache entry.
  • the cache space can be flexibly utilized.
  • the method may further include: if multiple cache entries among the cache entries corresponding to the first group are idle, determining the cache entry with the highest priority among the multiple cache entries as the first cache entry.
  • the priority of the cache entry corresponding to the first group is determined according to the identifier of the cache entry.
  • an embodiment of the present application provides a cache memory. The cache memory includes a controller and a plurality of cache entries, where the controller is configured to perform the method of any implementation of the first aspect.
  • an embodiment of the present application provides a readable non-volatile storage medium storing computer instructions, where the computer instructions are used to perform the method of any implementation of the first aspect.
  • When content required by an access source needs to be filled into a cache entry, the first group to which the access source belongs may be determined, and whether the cache entries corresponding to the first group are idle may be detected; if a first cache entry among the cache entries corresponding to the first group is idle, the content required by the access source may be filled into the first cache entry.
  • Because access sources are divided into groups and content is filled according to the cache entries corresponding to each group, the probability that the filled content of different access sources tramples on each other can be reduced, thereby reducing access latency and improving processor performance.
  • FIG. 1 is a schematic structural diagram of a computer system according to an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a cache memory according to an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a cache entry according to an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a content filling method provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an application of content filling according to an embodiment of the present application.
  • FIG. 6 is a schematic flowchart of another content filling method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of another content filling application provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of still another content filling method provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of another application of content filling provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of still another content filling method provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of still another content filling method provided by an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of a cache memory according to an embodiment of the present application.
  • FIG. 1 shows a computer system according to an embodiment of the present application. As shown in FIG. 1, the computer system includes a processor 10 and a memory 30.
  • the processor 10 includes the processor cores 11 to 1N and the cache 1M.
  • the cache 1M is disposed outside the processor cores and may be an out-of-core cache. The processor cores 11 to 1N may serve as access sources of the cache 1M, in which case an access source is triggered by a program running in it to access the content of the cache 1M.
  • access can be understood as the reading or invoking of content.
  • each processor core may include one or more access sources and a cache. Taking the processor core 11 as an example, the processor core 11 includes access sources 111 to 11x and a cache 110, where the cache 110 is disposed within the processor core and may be an in-core cache.
  • the caches 120 to 1N0 may also be intra-core caches.
  • the access sources 111 to 11x may be processes, threads, virtual machines, and the like running in the processor core 11, and are not limited herein.
  • M, N, x, y, and z are all positive integers; M is any positive integer outside the range 1 to N; x, y, and z may be the same or different, which is not limited herein.
  • processor 10 shown in FIG. 1 may be implemented by one or more processor chips, and the processor cores 11 to 1N included in the processor 10 may be from different processor chips, which is not limited herein.
  • the cache 1M may be implemented by one or more extra-core cache chips, which is not limited herein.
  • the processor 10 and the memory 30 are connected.
  • the memory 30 may be in the same chip as the processor, or may be disposed outside the chip where the processor is located, and is not limited herein.
  • FIG. 2 shows a schematic structural diagram of a cache.
  • the cache 20 includes a controller 201 and a storage unit 203.
  • cache 20 may be an in-core cache, in which case cache 20 is configured within the processor core and access sources 21-2P represent access sources within the processor core.
  • the cache 20 may be any one of the caches 110 to 1N0 shown in FIG. 1. For example, if the cache 20 is the cache 110 shown in FIG. 1, the access sources 21 to 2P are the access sources 111 to 11x shown in FIG. 1.
  • the cache 20 may be an off-core cache. In this case, the cache 20 is disposed outside the processor cores, and the access sources 21 to 2P represent processor cores accessing the cache 20.
  • the cache 20 may be the cache 1M shown in FIG. 1, and the access sources 21 to 2P are one or more of the processor cores 11 to 1N shown in FIG. 1.
  • the cache 20 may also be another level or type of cache, which is not limited herein.
  • the controller 201 included in the cache 20 may be implemented by an application-specific integrated circuit, an integrated logic circuit, a chip, or another device capable of implementing a control function, which is not limited herein.
  • the storage unit 203 may include one or more cache entries, entry1 to entryQ. Specifically, when content is backfilled into the cache, it is filled into a cache entry, where backfilling means that the content is copied from memory into the cache. A cache entry can be understood as a unit of storage; as shown in FIG. 3, a cache entry may include content and a tag corresponding to the content.
  • the tag here refers to some or all bits of the storage address at which the content is stored in memory.
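The entry layout of FIG. 3, together with the valid flag used later for idle detection, can be sketched as a small data structure (a minimal illustrative model; the field names are assumptions, not the patent's implementation):

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    tag: int = 0             # some or all bits of the memory storage address
    content: object = None   # an instruction, data, or a page table entry
    valid: bool = False      # valid flag: False means the entry is idle

entry = CacheEntry()
assert not entry.valid       # a fresh entry is free
entry.tag, entry.content, entry.valid = 0x2F00, "cached data", True
```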
  • the content described in this application may include any of the following: an instruction, a data, or a Page Table Entry (PTE).
  • P and Q are positive integers.
  • The access source 21 may first send an access request to the controller 201 in the cache 20 to request access to content, and the request may carry the storage address to be accessed.
  • the controller 201 can determine, according to the tags in the cache entries, whether the storage address is present. If the controller 201 determines through a tag that the storage address is present in a cache entry, the content corresponding to the storage address is cached in that cache entry, which can be understood as a cache hit.
  • If the controller 201 determines through the tags that the storage address is not present in any cache entry, the content corresponding to the storage address is not cached, which can be understood as a cache miss. On a miss, the controller 201 can retrieve the content from the corresponding memory, such as the memory 30 shown in FIG. 1, according to the storage address, fill it into a cache entry in the storage unit 203, and also fill the storage address (the tag) corresponding to the content into the cache entry, for example into the cache entry entry1.
  • If the cache entries in the storage unit 203 are all filled with content and the storage unit 203 still needs to be filled, then according to the current replacement algorithm, such as Least Recently Used (LRU), Most Recently Used (MRU), random, or first-in first-out (FIFO), the above content is filled into some cache entry, for example entry1, and the content originally cached in entry1 is replaced. If the original content still needs to be accessed, it must be re-filled into a cache entry, and under the same replacement policy it is likely to be re-filled into entry1. The content of entry1 is thus repeatedly replaced, which affects the access efficiency of the access sources and, in turn, the performance of the processor.
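The hit/miss and backfill flow above can be sketched as follows, using FIFO as the replacement policy and the full storage address as the tag (a simplification; the structures and names are illustrative, not the patent's implementation):

```python
from collections import OrderedDict

def access(cache, memory, addr, capacity):
    """Return (content, hit) for an address; backfill on a miss."""
    if addr in cache:              # a tag matches the storage address: hit
        return cache[addr], True
    content = memory[addr]         # miss: fetch the content from memory
    if len(cache) >= capacity:     # every cache entry is already filled
        cache.popitem(last=False)  # FIFO replacement: evict the oldest
    cache[addr] = content          # fill content and tag into an entry
    return content, False

memory = {0x10: "A", 0x20: "B", 0x30: "C"}
cache = OrderedDict()
access(cache, memory, 0x10, capacity=2)   # miss, backfilled
access(cache, memory, 0x20, capacity=2)   # miss, backfilled
_, hit = access(cache, memory, 0x10, 2)   # hit
access(cache, memory, 0x30, capacity=2)   # miss: evicts 0x10 (oldest)
```

Under a fixed policy like this, sources contending for the same entries evict each other repeatedly, which is the mutual-trampling problem the grouping scheme addresses.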
  • FIG. 4 is a schematic flowchart of a content filling method according to an embodiment of the present application. As shown in FIG. 4, the method includes at least the following steps.
  • Step S401: when content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.
  • Content required by an access source needs to be filled into a cache entry in, for example, the following situation: when the access source requests access to content from the controller, the controller first looks up the content in the cache entries of the storage unit; if the content is not found, the content is retrieved from memory and filled into the storage unit. This process can also be understood as content backfilling.
  • the relationship between access sources and groups can be established in any of the following manners:
  • the access source is divided into one or more groups according to the association relationship between the access sources.
  • access sources may be divided according to the type of the access source, the size of the content the access source usually accesses, the type of content the access source usually accesses, the frequency with which the access source accesses the cache, or the identifier of the access source, which is not limited herein.
  • the types of access sources can include processes, threads, and so on. For example, if access sources are divided according to type, access sources of the same type can be placed in one group. If access sources are divided according to identifier, N adjacent access sources can be placed in one group, where N may be a preset positive integer. If access sources are divided according to the size of the content they usually access, a fixed access-content size may be set for a group, and the access sources included in the group are determined according to that fixed size, so that the sum of the sizes of the content they access is less than or equal to the fixed size.
  • the first type of content includes instructions, data, page table entries, and the like; the second type of content refers to a specific type under a first type. For example, instructions include read instructions, write instructions, processing instructions, etc., each of which can be understood as a second type of content.
  • If access sources are divided according to the type of content they usually access, they may be divided according to the second type of the content they usually access. If access sources are divided according to the frequency with which they access the cache, frequently accessing sources and infrequently accessing sources can be placed together in one group, thereby avoiding the mutual trampling of content that occurs when too many frequent access sources access the cache.
  • access sources may also be divided according to a combination of the above manners, which is not limited herein.
  • each access source may be divided into a group.
  • Each group can correspond to one or more cache entries.
  • A mapping between access sources and groups may be established by a preset algorithm (such as a hash algorithm), thereby determining the association between an access source and a group. For example, by applying the preset algorithm to access source A and access source B respectively, the group to which each access source belongs can be determined.
  • the controller may determine the group to which the access source belongs according to a group identifier carried by the access source, which is not limited herein.
  • multiple cache entries in the storage unit may be grouped, with each grouping including one or more cache entries; that is, the group to which an access source belongs corresponds to one grouping, and each group corresponds to one or more cache entries.
  • the grouping of cache entries may be implemented in hardware, that is, the cache entries of one grouping in the storage unit are physically independent of the cache entries in other groupings; or the grouping may be implemented in software, and the controller may invoke the grouping method to determine the cache entries corresponding to a group.
  • the number of cache entries corresponding to a group may be determined in any of the following manners: according to the number of access sources in the group, or according to the size of the content usually accessed by the access sources in the group. For example, the size of the content usually accessed by the access sources of each group may be counted, and the number of cache entries corresponding to the group determined according to the size of each group's accessed content.
  • the number of cache entries corresponding to the group can also be determined by other means, which is not limited herein.
  • the cache entries corresponding to different groups may have an intersection, or may be completely independent with no intersection, which is not limited herein.
  • the correspondence between groups and cache entries may be pre-stored in a correspondence table, and the controller may determine the correspondence by consulting the table; alternatively, the correspondence may be implemented by a logic circuit in the controller, which is not limited herein.
  • For example, access sources A to E all access the cache shown in FIG. 5.
  • the cache includes at least cache entries entry1 through entry8.
  • the access source A and the access source B belong to the group 1
  • the access source C belongs to the group 2
  • the access source D and the access source E belong to the group 3.
  • For the manner of grouping the access sources, refer to the manners above or other division manners, which is not limited herein.
  • Group 1 corresponds to entry1 to entry3 in the cache, that is, access source A or access source B can preferentially access entry1 to entry3;
  • group 2 corresponds to entry4 to entry5 in the cache, that is, access source C can preferentially access entry4 and entry5;
  • Group 3 corresponds to entry5 to entry8 in the cache, that is, access source D or access source E can preferentially access entry5 to entry8.
  • The entry5 can be accessed by access source C, access source D, and access source E, that is, by access sources from two groups; in other words, the cache entries of group 2 and group 3 have an intersection, and the intersection includes the cache entry entry5.
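The FIG. 5 example can be written down as a pair of tables (the entry and source names follow the figure; the dict representation itself is only an illustration):

```python
group_entries = {
    1: {"entry1", "entry2", "entry3"},            # access sources A, B
    2: {"entry4", "entry5"},                      # access source C
    3: {"entry5", "entry6", "entry7", "entry8"},  # access sources D, E
}
source_group = {"A": 1, "B": 1, "C": 2, "D": 3, "E": 3}

# entry5 lies in the intersection of group 2 and group 3.
shared = group_entries[2] & group_entries[3]
```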
  • the controller may retrieve the content required by access source A from memory and treat it as the content to be filled into a cache entry.
  • the controller may first determine the group to which the access source belongs. After determining that the group to which access source A belongs is group 1, the cache entries corresponding to group 1 can be determined as entry1 to entry3. Further, the content to be filled for access source A may be filled into one of entry1 to entry3 in the manner of one of the following embodiments.
  • Optionally, one or more cache entries may also be allocated to each access source in a group; for example, the cache entries corresponding to access source A are entry1 and entry2, and the cache entry corresponding to access source B is entry3, which is not limited herein.
  • Step S402: detect whether the cache entries corresponding to the first group are idle.
  • Whether a cache entry is free may be detected through a valid flag included in the cache entry. If the valid flag is set to valid, the cache entry holds valid content; if the valid flag is set to invalid, the cache entry is free and can store the content required by the access source.
  • The multiple cache entries may be detected in sequence according to their identifiers, or in a preset order, to see whether a free cache entry exists, which is not limited herein.
  • Step S403: if a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.
  • If multiple cache entries are idle, one may be selected according to priority. The priority of a cache entry may be determined based on its identifier, based on the frequency with which it is used, or by other means, which is not limited herein. If priority is determined by identifier, the cache entry with the largest identifier, the smallest identifier, or the identifier closest to the average may be selected as the first cache entry to store the content to be filled. Alternatively, one of the multiple idle cache entries may be selected at random, or the first cache entry may be selected by other means, which is not limited herein.
  • If none of the cache entries corresponding to the first group is idle, traversal detection of the cache entries in the first group may continue until a free cache entry appears; or any one of the cache entries corresponding to the first group may be selected and the content to be filled placed into it; or a cache entry may be selected by any of the methods described in the following embodiments, which is not limited herein.
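Steps S401 to S403 can be sketched as below, modeling each cache entry as a dict with a valid flag and taking the smallest identifier as the highest priority (one of the options mentioned above; all structures and names here are illustrative assumptions):

```python
def fill(entries, group_entries, source_group, source, content):
    group = source_group[source]                  # S401: group of the source
    free = [i for i in group_entries[group]
            if not entries[i]["valid"]]           # S402: check valid flags
    if not free:
        return None        # no idle entry: handled by the later embodiments
    target = min(free)     # priority by identifier: smallest id wins
    entries[target] = {"valid": True, "content": content}   # S403: fill
    return target

entries = {i: {"valid": False, "content": None} for i in range(1, 4)}
entries[1] = {"valid": True, "content": "old"}    # entry1 already occupied
group_entries = {1: {1, 2, 3}}
source_group = {"A": 1}
chosen = fill(entries, group_entries, source_group, "A", "new")
```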
  • By the above method, the first group to which the access source belongs may be determined, and the content required by the access source is preferentially filled into an idle cache entry corresponding to the first group.
  • FIG. 6 is a schematic flowchart of another content filling method according to an embodiment of the present application. As shown in FIG. 6, the method includes at least the following steps.
  • Step S601: when content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.
  • Step S602: detect whether the cache entries corresponding to the first group are idle.
  • Step S603: if a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.
  • Step S604: if none of the cache entries corresponding to the first group is idle, detect whether cache entries corresponding to other groups are idle.
  • Cache entries corresponding to other groups may be detected in a preset order. For example, when none of the cache entries corresponding to the first group is idle, the cache entries corresponding to the other groups may be detected in sequence, or the cache entries adjacent to the cache entries corresponding to the first group may be detected in turn, which is not limited herein. By detecting cache entries corresponding to other groups, cache space can be better utilized.
  • Step S605: if a second cache entry among the cache entries corresponding to a second group of the other groups is idle, fill the content required by the access source into the second cache entry.
  • Optionally, the method further includes: if it is detected that the second cache entry corresponding to the second group is idle, determining whether the access source has the right to access the cache entries corresponding to the second group. If it has the right, the controller fills the content required by the access source into the second cache entry; if it does not, the controller can continue to detect whether the cache entries of other groups are free, or fill the content of the access source into one of the cache entries of the first group to replace the original content there, and so on.
  • If the access source has the right to access groups other than its own group, the controller can fill the content required by the access source into the cache entries of those other groups.
  • Optionally, access permission may be set for the cache entries corresponding to each group, thereby restricting access to those cache entries by access sources that do not belong to the group. For example, if an access source belongs to the first group and does not have the right to access the second group, the controller may fill the content required by the access source into a cache entry corresponding to the first group, but may not fill it into a cache entry corresponding to the second group.
  • For example, if none of the cache entries corresponding to the first group is idle and the access source has the right to access a third group, the controller may fill the content required by the access source into a cache entry corresponding to the third group. In this case, the content required by the access source can also be understood as cross-group content cached in the cache entry corresponding to the third group.
  • the permission settings of an access source may be determined based on factors such as the type of the access source, the type of content the access source usually accesses, or the size of the content the access source usually accesses, which is not limited herein.
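The permission-checked spill path of FIG. 6 can be sketched as follows (the structures, names, and per-source permission table are assumptions for illustration, not the patent's implementation):

```python
def fill_other_group(entries, group_entries, permissions,
                     source, own_group, content):
    for group in sorted(group_entries):
        if group == own_group:
            continue                          # S604 looks at other groups
        if group not in permissions.get(source, set()):
            continue                          # no right to this group
        for i in sorted(group_entries[group]):
            if not entries[i]["valid"]:       # an idle second cache entry
                entries[i] = {"valid": True, "content": content}
                return i
    return None

# Entries 1-3 and 5 are occupied; 4 and 6-8 are free.
entries = {i: {"valid": i in {1, 2, 3, 5}, "content": None}
           for i in range(1, 9)}
group_entries = {1: {1, 2, 3}, 2: {4, 5}, 3: {5, 6, 7, 8}}
permissions = {"A": {3}}    # A may spill into group 3 but not group 2
spot = fill_other_group(entries, group_entries, permissions, "A", 1, "x")
```

Entry4 is free but belongs to group 2, which source A has no right to access, so the content lands in entry6 of group 3 instead.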
  • Optionally, if the controller accesses cache entries in a serial access manner, then after filling the content required by the access source into the second cache entry, the controller may continue to detect whether the cache entries corresponding to the first group are idle. If an idle cache entry appears among the cache entries corresponding to the first group, the content filled into the second cache entry may be transferred to the idle cache entry for caching, and the access source corresponding to the content may be notified, so that when the access source next needs the content, the controller searches for it in the cache entries corresponding to the first group, improving access efficiency.
  • Serial access means that the controller accesses the cache entries in the cache sequentially, that is, only one cache entry is accessed at a time, or only the cache entries corresponding to one group are accessed at a time.
  • For example, when content required by access source A needs to be filled into a cache entry, the controller first determines that the group to which access source A belongs is group 1, and further determines that the cache entries corresponding to group 1 are entry1 to entry3. It then detects whether entry1 to entry3 are idle. If, as shown in FIG. 7, none of entry1 to entry3 is idle, it can detect in sequence whether adjacent cache entries are idle, such as entry4 to entry8, or detect in turn whether the cache entries corresponding to adjacent groups are idle, for example whether the cache entries corresponding to group 2 and group 3 are idle, which is not limited herein.
  • When it is detected that entry5 is idle, it may be determined whether access source A has the right to access entry5. If access source A has the right, the content required by access source A may be filled into entry5; if not, other cache entries may be further detected for idleness. For example, if entry6 is idle and access source A has the right to access entry6, the content of access source A can be filled into entry6. Optionally, access source A may be notified that its content has been filled into entry6, so that the next time access source A requests the content, the controller can access entry6 in addition to entry1 to entry3, without needing to access the entire cache, improving access efficiency.
  • Optionally, entry1 to entry3 may be further detected for idleness; when at least one cache entry among entry1 to entry3 is in an idle state, the content filled into entry6 may be transferred to entry1 to entry3 for caching, to improve access efficiency.
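The transfer step can be sketched as follows (illustrative only; entries are modeled as dicts with a valid flag, and the function name is an assumption):

```python
def migrate_back(entries, home_idxs, parked_idx):
    """Move content parked outside its group back to a freed home entry."""
    for i in sorted(home_idxs):
        if not entries[i]["valid"]:               # a home entry freed up
            entries[i] = dict(entries[parked_idx])  # move the content home
            entries[parked_idx] = {"valid": False, "content": None}
            return i
    return None                                    # home group still full

entries = {
    1: {"valid": True, "content": "p"},
    2: {"valid": False, "content": None},     # entry2 has become idle
    3: {"valid": True, "content": "q"},
    6: {"valid": True, "content": "A-data"},  # parked outside the group
}
moved_to = migrate_back(entries, {1, 2, 3}, 6)
```

After the move, later lookups for this content stay within the home group's entries, matching the access-efficiency argument above.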
  • In this way, when none of the cache entries corresponding to the group to which an access source belongs is idle, the cache entries corresponding to other groups may be used to cache the content required by the access source, so that the space in the cache is used flexibly.
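  The fallback order described above (the source's own group first, then neighboring entries or groups, subject to per-source permissions) can be sketched as follows. This is a minimal illustrative model under stated assumptions, not the patented implementation; the list-of-pairs layout, the `allowed` permission set, and the function name `fill` are all assumptions.

```python
# Hedged sketch of the FIG. 6/7 behavior: try the access source's own
# group first, then neighboring entries, honoring a per-source
# permission set. All data structures here are illustrative assumptions.
def fill(content, own_entries, neighbor_entries, allowed):
    # own_entries / neighbor_entries: lists of (entry_id, busy) pairs;
    # allowed: set of entry_ids this access source may use outside its group.
    for eid, busy in own_entries:
        if not busy:
            return eid              # fill into the source's own group
    for eid, busy in neighbor_entries:
        if not busy and eid in allowed:
            return eid              # fall back to a permitted neighbor entry
    return None                     # nothing idle anywhere

# FIG. 7 scenario: entry1 to entry3 busy; entry5 idle but not permitted;
# entry6 idle and permitted, so the fill lands in entry6.
own = [(1, True), (2, True), (3, True)]
nbr = [(5, False), (6, False)]
assert fill("A", own, nbr, allowed={6}) == 6
```

  In the FIG. 7 walkthrough this reproduces the described outcome: entry5 is skipped for lack of permission and the content lands in entry6.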
  • FIG. 8 is a schematic flowchart diagram of still another content filling method according to an embodiment of the present application. As shown in FIG. 8, the method includes at least the following steps.
  • Step S801: When the content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.
  • Step S802: Detect whether the cache entries corresponding to the first group are idle.
  • Step S803: If a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.
  • Step S804: If none of the cache entries corresponding to the first group is idle, detect whether the content cached in the cache entries corresponding to the first group is cross-group content, where the access source of the cross-group content does not belong to the first group.
  • In this embodiment, cross-group content means content, cached in the cache entries corresponding to the current group, that was requested by an access source of another group, where another group is any group other than the group to which that access source belongs. In other words, if the content required by an access source was not filled into a cache entry corresponding to the group to which the access source belongs, but into a cache entry of another group, that content can be understood as cross-group content. For example, suppose a cache entry corresponding to the first group caches content required by access source C, and access source C belongs to the second group. Because the content required by access source C is cached in a cache entry corresponding to the first group rather than in one corresponding to the second group, that content can be understood as cross-group content.
  • Exemplarily, when none of the cache entries corresponding to the group of an access source outside the first group is idle, the controller fills the content required by that access source into a cache entry corresponding to the first group, which causes cross-group content to be cached among the cache entries corresponding to the first group.
  • Step S805: If a third cache entry among the cache entries corresponding to the first group caches the cross-group content, fill the content required by the access source into the third cache entry.
  • Optionally, the method further includes: when the controller accesses cache entries in the serial access mode and detects that a cache entry corresponding to the first group caches cross-group content, it may further determine the access source of that cross-group content, and detect whether there is an idle cache entry among the cache entries corresponding to the group to which that access source belongs. If there is an idle cache entry, the cross-group content may first be filled into it, after which the content required by the access source is filled into the third cache entry.
  • As shown in FIG. 9, when the content required by access source A needs to be filled into a cache entry, the controller first determines that the group to which access source A belongs is group 1, and further determines that the cache entries corresponding to group 1 are entry1 to entry3. The controller detects whether entry1 to entry3 are idle. If none of entry1 to entry3 is idle, it further determines whether cross-group content is cached in entry1 to entry3. If cross-group content is detected in entry2, that is, the access source of that content does not belong to group 1 (for example, it belongs to group 2), the content required by access source A may be filled into entry2 to replace the cross-group content.
  • Optionally, when the controller accesses cache entries in the serial access mode, after detecting that entry2 caches cross-group content, the controller may further determine the group to which the access source of the cross-group content belongs. Assuming that access source is access source C, the controller can further determine whether any cache entry corresponding to group 2, to which access source C belongs, is idle. If entry5 is idle at this time, the cross-group content can first be transferred to entry5, and then the content required by access source A can be filled into entry2.
  • Further, if the cache address of access source C's cross-group content was recorded and access source C was notified when the content was cached in entry2, then after the cross-group content is transferred to entry5, the new cache address can be recorded and access source C or other access sources can be notified, so that access source C, or any access source that needs the cross-group content, can find the content in the cache.
  • FIG. 10 is a schematic flowchart diagram of still another content filling method according to an embodiment of the present application.
  • FIG. 10 shows an implementation that combines the methods illustrated in FIG. 6 and FIG. 8. As shown in FIG. 10, the method includes at least the following steps.
  • Step S1001: When the content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.
  • Step S1002: Detect whether the cache entries corresponding to the first group are idle.
  • Step S1003: If a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.
  • Step S1004: If none of the cache entries corresponding to the first group is idle, detect whether the content cached in the cache entries corresponding to the first group is cross-group content, where the access source of the cross-group content does not belong to the first group.
  • Step S1005: If a third cache entry among the cache entries corresponding to the first group caches the cross-group content, fill the content required by the access source into the third cache entry.
  • Step S1006: If no cross-group content is cached in the cache entries corresponding to the first group, detect whether the cache entries corresponding to other groups are idle.
  • Step S1007: If a second cache entry among the cache entries corresponding to the other groups is idle, fill the content required by the access source into the second cache entry.
  • Step S1008: If none of the cache entries corresponding to the other groups is idle, select any one of the cache entries corresponding to the first group as a fourth cache entry, and fill the content required by the access source into the fourth cache entry.
  • As shown in FIG. 9, when the content required by access source A needs to be filled into a cache entry, the controller may first determine that the group to which access source A belongs is group 1, and further determine that the cache entries corresponding to group 1 are entry1 to entry3. The controller detects whether entry1 to entry3 are idle. If, as shown in FIG. 9, none of entry1 to entry3 is idle, it further determines whether cross-group content is cached in entry1 to entry3; if it detects that entry2 caches cross-group content, the content required by access source A may be filled into entry2.
  • Suppose no cross-group content is cached in entry1 to entry3. The controller may then sequentially detect whether the adjacent cache entries are idle, for example, whether entry4 to entry8 are idle; or sequentially detect whether the cache entries corresponding to the adjacent groups are idle, for example, whether the cache entries corresponding to group 2 and group 3 are idle. This is not limited herein. When entry5 is detected to be idle, the content required by access source A may be filled into entry5.
  • Suppose instead that none of entry1 to entry8 is idle and no cross-group content is cached in entry1 to entry3. In this case, one implementation is to fill the content required by access source A into any one of entry1 to entry3, for example, by using a random algorithm to select a cache entry. Another implementation is to determine the priorities of entry1 to entry3 and, according to the priorities, select the cache entry with the highest priority or the one with the lowest priority, which is not limited herein. The priority of a cache entry may be determined according to the identifier of the cache entry, the frequency at which the cache entry is used, the frequency at which the content in the cache entry is replaced, and the like. When the access sources of a group each correspond to one or more cache entries, the priority of a cache entry may be determined according to the priority of its access source. The manner of determining priority is not limited herein.
  • In the above manner, the cache space in the cache can be fully utilized, and the content required by the access source can be filled into the cache.
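  The FIG. 10 decision order just walked through can be condensed into one sketch: a free entry in the source's own group, then a cross-group victim within the own group, then a free entry elsewhere, and finally an arbitrary entry of the own group. The function name, the pair lists, and the `is_cross_group` predicate are assumptions for illustration only.

```python
# Condensed sketch of the FIG. 10 decision order. Everything here is
# an illustrative assumption, not the patented implementation.
import random

def choose_entry(own, others, is_cross_group):
    # own / others: lists of (entry_id, busy); is_cross_group(entry_id)
    # says whether the entry currently caches another group's content.
    for eid, busy in own:
        if not busy:
            return eid                      # S1003: free entry in own group
    for eid, _ in own:
        if is_cross_group(eid):
            return eid                      # S1005: replace cross-group victim
    for eid, busy in others:
        if not busy:
            return eid                      # S1007: free entry elsewhere
    return random.choice([eid for eid, _ in own])  # S1008: any own-group entry

own = [(1, True), (2, True), (3, True)]
others = [(4, True), (5, True)]
# entry2 holds cross-group content, so it is chosen before other groups.
assert choose_entry(own, others, lambda e: e == 2) == 2
```

  With no free entries and no cross-group victim, the final branch falls back to a random own-group entry, matching the "random algorithm" option in the text.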
  • FIG. 11 is a schematic flowchart diagram of still another content filling method according to an embodiment of the present application.
  • FIG. 11 shows an implementation that combines the methods illustrated in FIG. 6 and FIG. 8. As shown in FIG. 11, the method includes at least the following steps.
  • Step S1101 When it is required to fill the content required by the access source to the cache entry, determine the first group to which the access source belongs.
  • Step S1102 Detect whether the cache entry corresponding to the first group is idle.
  • Step S1103 If the first cache entry in the cache entry corresponding to the first group is idle, filling the content required by the access source to the first cache entry.
  • Step S1104: If none of the cache entries corresponding to the first group is idle, detect whether the cache entries corresponding to other groups are idle.
  • Step S1105: If a second cache entry among the cache entries corresponding to a second group of the other groups is idle, fill the content required by the access source into the second cache entry.
  • Step S1106: If none of the cache entries corresponding to the other groups is idle, detect whether cross-group content is cached in the cache entries corresponding to the first group, where the access source of the cross-group content does not belong to the first group.
  • Step S1107: If a third cache entry among the cache entries corresponding to the first group caches the cross-group content, fill the content required by the access source into the third cache entry.
  • Step S1108: If no cross-group content is cached in the cache entries corresponding to the first group, select any one of the cache entries corresponding to the first group as a fourth cache entry, and fill the content required by the access source into the fourth cache entry.
  • As shown in FIG. 9, when the content required by access source A needs to be filled into a cache entry, the controller may first determine that the group to which access source A belongs is group 1, and further determine that the cache entries corresponding to group 1 are entry1 to entry3. The controller detects whether entry1 to entry3 are idle. If, as shown in FIG. 9, none of entry1 to entry3 is idle, it may sequentially detect whether the adjacent cache entries are idle, for example, whether entry4 to entry8 are idle; or sequentially detect whether the cache entries corresponding to the adjacent groups are idle, for example, whether the cache entries corresponding to group 2 and group 3 are idle. This is not limited herein.
  • When entry5 is detected to be idle, it may optionally be determined whether access source A has permission to access entry5. If it does, the content required by access source A may be filled into entry5; if not, the controller may further detect whether other cache entries are idle. For example, if entry6 is idle and access source A has permission to access entry6, the content required by access source A may be filled into entry6. Suppose none of entry4 to entry8 is idle; it may then be further determined whether cross-group content is cached in entry1 to entry3. If it is detected that entry2 caches cross-group content, the content required by access source A may be filled into entry2.
  • Suppose none of entry1 to entry8 is idle and no cross-group content is cached in entry1 to entry3. In this case, one of entry1 to entry3 can be selected in the manner described above, and the content required by the access source can be filled into the selected cache entry.
  • In the above manner, the cache space in the cache can be fully utilized, and the content required by the access source can be filled into the cache.
  • FIG. 12 is a schematic structural diagram of a cache memory according to an embodiment of the present application.
  • the cache 120 may include a controller 121 and a storage unit 123.
  • the storage unit 123 includes a plurality of cache entries entry1 to entryK, and K is a positive integer.
  • The multiple cache entries may be divided into at least one group, where the division may be implemented in hardware, that is, the cache entries of one group in the storage unit are physically independent from the cache entries of other groups; or it may be implemented in software. This is not limited herein.
  • FIG. 12 exemplarily shows one grouping manner; it should be understood that other grouping manners may also exist, which is not limited herein. Each entry group corresponds to one access-source group, and each access-source group includes one or more access sources. As shown in FIG. 12, entry1 to entryk1 form group 1, entryk2 to entryk3 form group 2, and so on until entrykx to entryK form group J, where k1, k2, k3 through kx, and J are positive integers.
  • The controller 121 may include functional units. For example, as shown in FIG. 12, the controller 121 may include a grouping unit 1211, a determining unit 1213, a detecting unit 1215, and a filling unit 1217.
  • The grouping unit 1211 is configured to group the multiple cache entries, where each entry group corresponds to one access-source group, and the group includes at least one access source;
  • the determining unit 1213 is configured to: when the content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs;
  • the detecting unit 1215 is configured to detect whether the cache entries corresponding to the first group are idle; and
  • the filling unit 1217 is configured to: if a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.
  • The above functional units may be implemented by an application-specific integrated circuit (ASIC), an integrated logic circuit, and/or another device that can provide the above functions; alternatively, the above functional units may also be implemented by software. This is not limited herein.
  • A person of ordinary skill in the art can understand that all or part of the procedures of the above method embodiments may be completed by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium; when the program is executed, the procedures of the method embodiments described above may be included. The foregoing storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

A content filling method and a cache memory. The method includes: when content required by an access source needs to be filled into a cache entry, determining a first group to which the access source belongs (S401); detecting whether the cache entries corresponding to the first group are idle (S402); and if a first cache entry among the cache entries corresponding to the first group is idle, filling the content required by the access source into the first cache entry (S403). The method can improve processor performance.

Description

Content Filling Method and Memory

Technical Field

This application relates to the field of computer technologies, and in particular, to a content filling method and a memory.

Background

Multiple access sources in a processor may access the same cache memory, referred to herein simply as a cache. When an access source accesses the cache and finds the required content, it is a cache hit; when it does not, it is a cache miss. On a cache miss, the content required by the access source must be obtained from another memory and filled into the cache, replacing existing content that may be required by other access sources. If the replaced content needs to be accessed frequently, it has to be refilled into the cache. As a result, a piece of content may be repeatedly replaced and refilled, so that the content fills of multiple access sources trample on one another, which increases access latency and degrades processor performance.
Summary

Embodiments of this application provide a content filling method and a cache memory, which can improve processor performance.

According to a first aspect, an embodiment of this application provides a content filling method. The method includes: when content required by an access source needs to be filled into a cache entry, determining a first group to which the access source belongs; detecting whether the cache entries corresponding to the first group are idle; and if a first cache entry among the cache entries corresponding to the first group is idle, filling the content required by the access source into the first cache entry. By dividing access sources into groups and filling according to the cache entries corresponding to each group, the probability that the content fills of different access sources trample on one another can be reduced, thereby lowering access latency and improving processor performance.

Optionally, the group to which an access source belongs is determined according to the type of the access source or the identifier of the access source; alternatively, the group to which an access source belongs is determined according to a hash algorithm.

With reference to the first aspect, the method may further include: if none of the cache entries corresponding to the first group is idle, detecting whether the cache entries corresponding to other groups are idle; and if a second cache entry among the cache entries corresponding to a second group of the other groups is idle, filling the content required by the access source into the second cache entry. In this way, cache space can be used flexibly.

With reference to the first aspect, the method may further include: if none of the cache entries corresponding to the first group is idle, detecting whether cross-group content exists among the content cached in the cache entries corresponding to the first group, where the access source of the cross-group content does not belong to the first group; and if the content cached in a third cache entry among the cache entries corresponding to the first group is the cross-group content, filling the content required by the access source into the third cache entry. In this way, cache space can be used flexibly.

With reference to the first aspect, the method may further include: if no cross-group content is cached in the cache entries corresponding to the first group, selecting any one of the cache entries corresponding to the first group as a fourth cache entry, and filling the content required by the access source into the fourth cache entry. In this way, cache space can be used flexibly.

With reference to the first aspect, the method may further include: if multiple cache entries among the cache entries corresponding to the first group are idle, determining the cache entry with the highest priority among the multiple cache entries as the first cache entry. Optionally, the priority of the cache entries corresponding to the first group is determined according to the identifiers of the cache entries. In this way, cache space can be used flexibly.

According to a second aspect, an embodiment of this application provides a cache memory. The cache memory includes a controller and multiple cache entries, where the controller is configured to perform any one of the methods of the first aspect.

According to a third aspect, an embodiment of this application provides a readable non-volatile storage medium storing computer instructions, where the computer instructions are used to perform any one of the methods of the first aspect.

In the embodiments of this application, when content required by an access source needs to be filled into a cache entry, the first group to which the access source belongs can be determined, and whether the cache entries corresponding to the first group are idle can be detected; if a first cache entry among them is idle, the content required by the access source can be filled into the first cache entry. By dividing access sources into groups and filling according to the cache entries corresponding to each group, the probability that the content fills of different access sources trample on one another can be reduced, thereby lowering access latency and improving processor performance.
Brief Description of Drawings

To describe the technical solutions in the embodiments of this application or in the background more clearly, the following describes the accompanying drawings used in the embodiments or the background.

FIG. 1 is a schematic architectural diagram of a computer system according to an embodiment of this application;

FIG. 2 is a schematic structural diagram of a cache memory according to an embodiment of this application;

FIG. 3 is a schematic structural diagram of a cache entry according to an embodiment of this application;

FIG. 4 is a schematic flowchart of a content filling method according to an embodiment of this application;

FIG. 5 is a schematic application diagram of content filling according to an embodiment of this application;

FIG. 6 is a schematic flowchart of another content filling method according to an embodiment of this application;

FIG. 7 is a schematic application diagram of another content filling according to an embodiment of this application;

FIG. 8 is a schematic flowchart of still another content filling method according to an embodiment of this application;

FIG. 9 is a schematic application diagram of still another content filling according to an embodiment of this application;

FIG. 10 is a schematic flowchart of still another content filling method according to an embodiment of this application;

FIG. 11 is a schematic flowchart of still another content filling method according to an embodiment of this application;

FIG. 12 is a schematic structural diagram of a cache memory according to an embodiment of this application.
Detailed Description

The terms used in the implementation part of this application are merely intended to explain specific embodiments of this application, and are not intended to limit this application.

To facilitate understanding of the technical solutions of this application, the application scenarios involved are introduced first.

Referring to FIG. 1, FIG. 1 shows a computer system according to an embodiment of this application. As shown in FIG. 1, the computer system includes a processor 10 and a memory 30.

The processor 10 includes processor cores 11 to 1N and a cache 1M. Here, the cache 1M is configured outside the processor cores, so the cache 1M may be an out-of-core cache, and the processor cores 11 to 1N may serve as access sources of the cache 1M; in this case an access source accesses the content in the cache 1M as triggered by the programs running on it. Here, access may be understood as reading or invoking content. As shown in FIG. 1, each processor core may include one or more access sources and a cache. Taking the processor core 11 as an example, the processor core 11 includes access sources 111 to 11x and a cache 110. The cache 110 is configured inside the processor core, so it may be an in-core cache; likewise, the caches 120 to 1N0 may also be in-core caches. The access sources 111 to 11x may be processes, threads, virtual machines, or the like running on the processor core 11, which is not limited herein. M, N, x, y, and z are all positive integers; M is any positive integer other than 1 to N; x, y, and z may be the same or different, which is not limited herein.

It should be noted that the processor 10 shown in FIG. 1 may be implemented by one or more processor chips, so the processor cores 11 to 1N included in the processor 10 may come from different processor chips, which is not limited herein. In addition, the cache 1M may be implemented by one or more out-of-core cache chips, which is not limited herein.

The processor 10 is connected to the memory 30. The memory 30 may be located on the same chip as the processor, or may be disposed outside the chip on which the processor is located, which is not limited herein.

With reference to FIG. 1, FIG. 2 shows a schematic structural diagram of a cache. As shown in FIG. 2, the cache 20 includes a controller 201 and a storage unit 203.

Exemplarily, the cache 20 may be an in-core cache. In this case, the cache 20 is configured inside a processor core, and the access sources 21 to 2P represent the access sources within that processor core. For example, the cache 20 may be any one of the caches 110 to 1N0 shown in FIG. 1; taking the cache 20 as the cache 110 shown in FIG. 1 as an example, the access sources 21 to 2P are the access sources 111 to 11x shown in FIG. 1.

Alternatively, the cache 20 may be an out-of-core cache. In this case, the cache 20 is configured outside the processor cores, and the access sources 21 to 2P represent the processor cores that access the cache 20. For example, the cache 20 may be the cache 1M shown in FIG. 1, in which case the access sources 21 to 2P are one or more of the processor cores 11 to 1N shown in FIG. 1.

It should be noted that the cache 20 may also be a cache of another level or type, which is not limited herein.

Exemplarily, the controller 201 included in the cache 20 may be implemented by an application integrated circuit, an integrated logic circuit, a chip, or another device capable of implementing a control function, which is not limited herein. The storage unit 203 may include one or more cache entries, entry1 to entryQ. Specifically, when a piece of content is backfilled into the cache, the content is placed in one cache entry. Backfilling means that the content is copied from the memory into the cache. A cache entry may be understood as a unit storage cell; as shown in FIG. 3, a cache entry may include a piece of content and a tag corresponding to the content, where the tag is part or all of the memory address at which the content is stored in the memory. The content described in this application may include any one of the following: an instruction, data, or a page table entry (PTE). P and Q are positive integers.

Based on the cache structure shown in FIG. 2, in a conventional implementation, assume that the access source 21 needs to access a piece of content at a memory address in the memory, for example, to read or write content at that address. The access source 21 may first send an access request to the controller 201 in the cache 20, where the access request is used to request access to the content and may carry the memory address to be accessed. The controller 201 may determine, according to the tags in the cache entries, whether that memory address is present. If the controller 201 determines from a tag that a cache entry holds that memory address, the cache entry caches the content corresponding to that memory address, which can be understood as a cache hit. If the controller 201 determines from the tags that no cache entry holds that memory address, no cache entry caches the corresponding content, which can be understood as a cache miss. On a cache miss, the controller 201 may fetch the content from the corresponding memory, for example the memory 30 shown in FIG. 1, according to the memory address, fill it into one cache entry in the storage unit 203, and also fill the memory address (tag) corresponding to the content into that cache entry, for example into cache entry entry1. When all cache entries in the storage unit 203 are filled with content and content still needs to be filled into the storage unit 203, the content may be filled into a cache entry according to a current replacement algorithm, such as Least Recently Used (LRU), Most Recently Used (MRU), Random, or First In First Out (FIFO). For example, if the content is filled into cache entry entry1, the original content in entry1 is replaced. If the original content still needs to be accessed, it must be refilled into a cache entry; under the same replacement policy, the original content is very likely to be refilled into entry1, which causes the content of entry1 to be replaced repeatedly, affects the access efficiency of the access sources, and in turn affects processor performance.

With reference to the above system and cache structure, the technical solutions provided in this application are described below.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of a content filling method according to an embodiment of this application. As shown in FIG. 4, the method includes at least the following steps.

Step S401: When content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.

Exemplarily, one case in which the content required by an access source needs to be filled into a cache entry may be as follows: when the access source requests the controller to access a piece of content, the controller may first search the cache entries of the storage unit for the content; if the content is not found, the controller may fetch it from the memory and fill it into the storage unit. This process may also be understood as content backfilling.

In this case, the group to which the access source that issued the access request belongs may be determined first. The membership relationship between access sources and groups is preset, and may include any one of the following:

(1) Access sources are divided into one or more groups according to relationships among them. For example, the access sources may be divided according to one or a combination of: the type of the access source, the size of the content the access source usually accesses, the type of the content the access source usually accesses, how frequently the access source accesses the cache, and the identifier of the access source, which is not limited herein. Access source types may include processes, threads, and the like. For example, if the access sources are divided by type, access sources of the same type may be placed in one group; if they are divided by identifier, N access sources with adjacent identifiers may be placed in one group, where N may be a preset positive integer; if they are divided by the size of the content they usually access, a fixed size of access content may be set for a group, and the access sources included in the group are determined according to that fixed size, so that the total size of the content the access sources usually access is less than or equal to the fixed size. In the embodiments of this application, the first type of content includes instructions, data, page table entries, and the like, while the second type of content is a specific type under one of the first types; for example, instructions include read instructions, write instructions, processing instructions, and so on, which can be understood as second types of content. Here, dividing access sources by the type of content they usually access means dividing them by the second type of the content they usually access. If access sources are divided by how frequently they access the cache, frequently accessing sources and infrequently accessing sources may be placed in the same group, which prevents too many frequently accessing sources from trampling on one another's content when accessing the cache. Of course, the access sources may also be divided by a combination of the above manners, which is not limited herein.

(2) If the number of access sources accessing the cache is small, each access source may be made a group of its own. Each group may correspond to one or more cache entries.

(3) A mapping between access sources and groups is determined by a preset algorithm (such as a hash algorithm), thereby determining the membership of access sources in groups. For example, if access source A and access source B are each mapped to the first group by the preset algorithm, it can be determined that access source A and access source B belong to the first group.

Thus, the group to which an access source belongs can be determined according to the preset membership relationship between access sources and groups.

Alternatively, the controller may determine the group to which an access source belongs according to a group identifier carried by the access source. This is not limited herein.
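As a minimal illustration of option (3) above, a hash-based mapping might look like the following sketch. The modulus `NUM_GROUPS`, the integer source identifiers, and the function name are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical sketch: mapping an access source to a group with a hash,
# one of the preset assignment options described above. NUM_GROUPS and
# the source identifiers are illustrative assumptions.
NUM_GROUPS = 3

def group_of(source_id: int) -> int:
    # A simple deterministic hash: the same source always maps to the
    # same group, so the lookup needs no stored membership table.
    return source_id % NUM_GROUPS

# Sources 0 and 3 land in the same group; source 1 lands in another.
assert group_of(0) == group_of(3)
assert group_of(1) != group_of(0)
```

Because the mapping is deterministic, the controller can recompute a source's group on every fill instead of storing a membership table, which is one motivation for the hash-based option.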
The multiple cache entries in the storage unit may be divided into entry groups, where each entry group includes one or more cache entries. That is, one group of access sources corresponds to one entry group, so each group corresponds to one or more cache entries. The grouping of cache entries may be implemented in hardware, that is, the cache entries of one group in the storage unit are physically independent from the cache entries of other groups; or the grouping may be implemented in software, and the controller may invoke the grouping to determine the cache entries corresponding to a group. For example, the number of cache entries corresponding to a group may be determined in any of the following manners: according to the number of access sources in the group; or according to the size of the content usually accessed by the access sources in the group, for example, by collecting statistics on the size of the content the access sources of each group usually access and determining the number of cache entries accordingly. Of course, the number of cache entries corresponding to a group may also be determined in other manners, which is not limited herein. In addition, the cache entries corresponding to different groups may intersect, or may be completely independent with no intersection, which is not limited herein.

Optionally, the correspondence between groups and cache entries may be prestored in a correspondence table, and the controller determines the correspondence by consulting the table; alternatively, the correspondence may be implemented by a logic circuit in the controller, which is not limited herein.

The correspondence between access sources and cache entries in the cache is illustrated below with reference to FIG. 5. It should be noted that FIG. 5 only exemplarily shows access sources, the groups they belong to, and their correspondence with entries; other implementations are of course possible, which is not limited herein.

In FIG. 5, access sources A to E all access the cache shown in FIG. 5. The cache includes at least cache entries entry1 to entry8. Access source A and access source B belong to group 1, access source C belongs to group 2, and access sources D and E belong to group 3. The access sources may be grouped in the manners described above or in other manners, which is not limited herein. Group 1 corresponds to entry1 to entry3 in the cache, that is, access source A or B may preferentially access entry1 to entry3; group 2 corresponds to entry4 to entry5, that is, access source C may preferentially access entry4 and entry5; group 3 corresponds to entry5 to entry8, that is, access source D or E may preferentially access entry5 to entry8. Here, entry5 may be accessed by access sources C, D, and E, that is, by access sources from two groups; in other words, the cache entries corresponding to group 2 and group 3 intersect, and the intersection includes entry5.

In one implementation, assume that the content required by access source A is not in the cache. The controller may fetch the content from the memory and, as content to be filled, fill it into one of entry1 to entry8. Specifically, the controller may first determine the group to which the access source belongs; after determining that access source A belongs to group 1, it may determine that the cache entries corresponding to group 1 are entry1 to entry3. The content to be filled may then be placed into one of entry1 to entry3 by any of the implementations in the following embodiments.

Optionally, after the cache entries corresponding to a group are determined, one or more of those cache entries may further be allocated to each access source in the group. For example, in FIG. 5, the cache entries corresponding to access source A are entry1 to entry2, and the cache entry corresponding to access source B is entry3, and so on. This is not limited herein.

Step S402: Detect whether the cache entries corresponding to the first group are idle.

Exemplarily, whether a cache entry is idle may be detected by checking the valid flag bit included in the cache entry. If the valid flag is set to valid, the cache entry caches valid content; if the valid flag is set to invalid, the cache entry is idle and can store the content required by the access source.
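The valid-flag check described in step S402 can be sketched as follows. The `CacheEntry` fields and the scan function are illustrative assumptions; a real cache would keep these bits in hardware.

```python
# Minimal sketch of the idle check described above: an entry is idle
# when its valid flag is clear. The CacheEntry fields are assumptions.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    valid: bool = False   # valid flag: True means the entry holds live content
    tag: int = 0          # memory-address tag of the cached content
    content: bytes = b""

def first_idle(entries):
    # Scan the group's entries in identifier order and return the first
    # idle one, or None when the whole group is occupied.
    for e in entries:
        if not e.valid:
            return e
    return None

group1 = [CacheEntry(valid=True), CacheEntry(valid=False)]
assert first_idle(group1) is group1[1]
```

Scanning in identifier order matches one of the detection orders mentioned below; a preset order would simply iterate a permuted list instead.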
Exemplarily, if the first group corresponds to multiple cache entries, the multiple cache entries may be detected one by one in the order of their identifiers, or in a preset order, to see whether any idle cache entry exists, which is not limited herein.
Step S403: If a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.

Exemplarily, if multiple cache entries are idle, a cache entry may be selected according to the priorities of the cache entries. The priority of a cache entry may be determined based on its identifier, based on the frequency at which it is used, or in other manners, which is not limited herein. If the priority is determined based on the identifiers of the cache entries, the cache entry whose identifier is the largest, the smallest, or the closest to the mean of the identifiers may be selected as the first cache entry to store the content to be filled. Alternatively, if multiple cache entries are idle, one of them may be selected at random, or the first cache entry may be selected in another manner, which is not limited herein.
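One of the identifier-based priority rules just mentioned (pick the free entry with the smallest identifier) can be sketched as below; treating the smallest identifier as the highest priority is only one of the options the text lists (largest, smallest, or closest to the mean are all allowed), and the function name is an assumption.

```python
# Sketch of one identifier-based priority rule: among the free entries,
# the smallest identifier wins. This is one illustrative choice among
# the several the text allows.
def pick_by_priority(free_ids):
    return min(free_ids) if free_ids else None

assert pick_by_priority([7, 2, 5]) == 2   # entry2 has the highest priority here
assert pick_by_priority([]) is None       # no free entry in the group
```

Swapping `min` for `max` would implement the largest-identifier variant with no other change.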
Optionally, if there is no idle cache entry among the cache entries corresponding to the first group, the controller may continue traversing and detecting the cache entries of the first group until an idle cache entry appears; or it may select any one of the cache entries corresponding to the first group and fill the content to be filled into it; or it may select a cache entry in any of the manners described in the following embodiments, which is not limited herein.

In this embodiment of this application, when content required by an access source needs to be filled into a cache entry, the first group to which the access source belongs can be determined, and the content required by the access source is preferentially filled into an idle cache entry corresponding to the first group. By grouping the access sources, the content required by different access sources can be effectively prevented from trampling on one another, which reduces access latency and improves processor performance.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of another content filling method according to an embodiment of this application. As shown in FIG. 6, the method includes at least the following steps.

Step S601: When content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.

Step S602: Detect whether the cache entries corresponding to the first group are idle.

Step S603: If a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.

For the implementation of steps S601 to S603, refer to the descriptions of the corresponding steps in the above embodiment; details are not described again herein.

Step S604: If none of the cache entries corresponding to the first group is idle, detect whether the cache entries corresponding to other groups are idle.

Exemplarily, the cache entries corresponding to other groups may be detected one by one in a preset order. For example, when the cache entries corresponding to the first group are not idle, the cache entries corresponding to other groups may be detected in turn, or the cache entries adjacent to those of the first group may be detected in turn, which is not limited herein. By detecting the cache entries corresponding to other groups, the cache space can be better utilized.

Step S605: If a second cache entry among the cache entries corresponding to a second group of the other groups is idle, fill the content required by the access source into the second cache entry.

Optionally, the method further includes: if the second cache entry corresponding to the second group is detected to be idle, determining whether the access source has permission to access the cache entries corresponding to the second group. If it has permission, the controller fills the content required by the access source into the second cache entry; if not, the controller may continue to detect whether the cache entries of other groups are idle, or fill the content required by the access source into one of the cache entries of the first group to replace the original content of that cache entry, and so on.

An access source having permission to access groups other than its own means that the controller can fill the content required by the access source into those other groups. Specifically, after the correspondence between groups and cache entries is determined, access permissions may be set for the cache entries corresponding to each group, thereby restricting access to those cache entries by access sources that do not belong to the group. For example, if an access source belongs to the first group and does not have permission to access the second group, the controller may fill the content required by the access source into the cache entries corresponding to the first group, but cannot fill it into the cache entries corresponding to the second group. For another example, if an access source belongs to the first group and has permission to access a third group, the controller may, when none of the cache entries corresponding to the first group is idle, fill the content required by the access source into a cache entry corresponding to the third group. In this case, the content required by the access source can also be understood as cross-group content cached in the cache entries corresponding to the third group.

Optionally, the permission settings of an access source may be determined based on factors such as the type of the access source, the type of content it usually accesses, or the size of content it usually accesses, which is not limited herein.

Optionally, the method further includes: when the controller accesses cache entries in a serial access manner, after the content required by the access source is filled into the second cache entry, the controller may continue to detect whether the cache entries corresponding to the first group are idle. If an idle cache entry appears among the cache entries of the first group, the content filled into the second cache entry may be transferred to the idle cache entry for caching, and the access source corresponding to the content may be notified, so that when the access source needs a piece of content, the controller looks it up in the cache entries corresponding to the first group, improving access efficiency. The serial access manner means that the controller accesses the cache entries in the cache sequentially, that is, only one cache entry is accessed at a time, or only the cache entries corresponding to one group are accessed at a time.

The method in the embodiment shown in FIG. 6 is exemplarily described below with reference to FIG. 7.

As shown in FIG. 7, when the content required by access source A needs to be filled into a cache entry, the controller first determines that the group to which access source A belongs is group 1, and further determines that the cache entries corresponding to group 1 are entry1 to entry3. The controller detects whether entry1 to entry3 are idle. If, as shown in FIG. 7, none of entry1 to entry3 is idle, the adjacent cache entries may be detected in turn, for example, entry4 to entry8; or the cache entries corresponding to the adjacent groups may be detected in turn, for example, the cache entries corresponding to group 2 and group 3, which is not limited herein. When entry5 is detected to be idle, it may optionally be determined whether access source A has permission to access entry5. If so, the content required by access source A may be filled into entry5; if not, the controller may further detect whether other cache entries are idle. For example, if entry6 is detected to be idle and access source A has permission to access entry6, the content required by access source A may be filled into entry6, which is not specifically limited herein. Optionally, the fact that the content required by access source A was filled into entry6 may be recorded, so that the next time access source A requests the content, the controller can access entry6 in addition to entry1 to entry3; this spares the controller from accessing the whole cache and improves access efficiency. Optionally, when the controller accesses cache entries in the serial access manner, after the content required by access source A is filled into entry6, the controller may further detect whether entry1 to entry3 are idle; once at least one of entry1 to entry3 becomes idle, the content filled into entry6 may be transferred to entry1 to entry3 for caching, to improve access efficiency.

In this way, when none of the cache entries corresponding to the group to which an access source belongs is idle, the cache entries corresponding to other groups can be used to cache the content required by the access source, so that the space in the cache is used flexibly.
Referring to FIG. 8, FIG. 8 is a schematic flowchart of still another content filling method according to an embodiment of this application. As shown in FIG. 8, the method includes at least the following steps.

Step S801: When content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.

Step S802: Detect whether the cache entries corresponding to the first group are idle.

Step S803: If a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.

For the implementation of steps S801 to S803, refer to the descriptions of the corresponding steps in the above embodiment; details are not described again herein.

Step S804: If none of the cache entries corresponding to the first group is idle, detect whether the content cached in the cache entries corresponding to the first group is cross-group content, where the access source of the cross-group content does not belong to the first group.

In this embodiment of the present invention, cross-group content means content, cached in the cache entries corresponding to the current group, that was requested by an access source of another group, where another group is any group other than the group to which that access source belongs. In other words, if the content required by an access source was not filled into a cache entry corresponding to the group to which the access source belongs but into another group, that content can be understood as cross-group content. For example, suppose a cache entry corresponding to the first group caches content required by access source C, and access source C belongs to the second group. Because the content required by access source C is cached in a cache entry corresponding to the first group rather than in one corresponding to the second group, that content can be understood as cross-group content.

Exemplarily, when none of the cache entries corresponding to the group of an access source outside the first group is idle, the control unit fills the content required by that access source into a cache entry corresponding to the first group, which causes cross-group content to appear among the content cached in the cache entries corresponding to the first group.

Step S805: If a third cache entry among the cache entries corresponding to the first group caches the cross-group content, fill the content required by the access source into the third cache entry.

Optionally, the method further includes: when the controller accesses cache entries in the serial access manner and detects that a cache entry corresponding to the first group caches cross-group content, the controller may further determine the access source of the cross-group content, and detect whether there is an idle cache entry among the cache entries corresponding to the group to which that access source currently belongs. If there is an idle cache entry, the cross-group content may first be filled into it, and then the content required by the access source is filled into the third cache entry.

The method in the embodiment shown in FIG. 8 is exemplarily described below with reference to FIG. 9.

As shown in FIG. 9, when the content required by access source A needs to be filled into a cache entry, the controller first determines that the group to which access source A belongs is group 1, and further determines that the cache entries corresponding to group 1 are entry1 to entry3. The controller detects whether entry1 to entry3 are idle. If none of entry1 to entry3 is idle, it further determines whether cross-group content is cached in entry1 to entry3. If cross-group content is detected in entry2, that is, the access source of that content does not belong to group 1 (for example, it belongs to group 2), the content required by access source A may be filled into entry2 to replace the cross-group content.

Optionally, when the controller accesses cache entries in the serial access manner, after detecting that entry2 caches cross-group content, the controller may further determine the group to which the access source of the cross-group content belongs. Assuming that access source is access source C, the controller may further determine whether any cache entry corresponding to group 2, to which access source C belongs, is idle. If entry5 is idle at this time, the cross-group content may first be transferred to entry5, and then the content required by access source A is filled into entry2. Further, if the cache address of access source C's cross-group content was recorded and access source C was notified when the content was cached in entry2, then after the cross-group content is transferred to entry5, the new cache address may be recorded and access source C or other access sources notified, so that access source C, or any access source that needs the cross-group content, can find the content in the cache.
Referring to FIG. 10, FIG. 10 is a schematic flowchart of still another content filling method according to an embodiment of this application. FIG. 10 shows an implementation that combines the methods illustrated in FIG. 6 and FIG. 8. As shown in FIG. 10, the method includes at least the following steps.

Step S1001: When content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.

Step S1002: Detect whether the cache entries corresponding to the first group are idle.

Step S1003: If a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.

For the implementation of steps S1001 to S1003, refer to the descriptions of the corresponding steps in the above embodiment; details are not described again herein.

Step S1004: If none of the cache entries corresponding to the first group is idle, detect whether the content cached in the cache entries corresponding to the first group is cross-group content, where the access source of the cross-group content does not belong to the first group.

Step S1005: If a third cache entry among the cache entries corresponding to the first group caches the cross-group content, fill the content required by the access source into the third cache entry.

Step S1006: If no cross-group content is cached in the cache entries corresponding to the first group, detect whether the cache entries corresponding to other groups are idle.

Step S1007: If a second cache entry among the cache entries corresponding to the other groups is idle, fill the content required by the access source into the second cache entry.

Step S1008: If none of the cache entries corresponding to the other groups is idle, select any one of the cache entries corresponding to the first group as a fourth cache entry, and fill the content required by the access source into the fourth cache entry.

For detailed descriptions of the above steps, refer to the corresponding descriptions in the above embodiments; details are not described again herein.

The method shown in FIG. 10 is exemplarily described below with reference to FIG. 9.

As shown in FIG. 9, when the content required by access source A needs to be filled into a cache entry, the controller may first determine that the group to which access source A belongs is group 1, and further determine that the cache entries corresponding to group 1 are entry1 to entry3. The controller detects whether entry1 to entry3 are idle. If, as shown in FIG. 9, none of entry1 to entry3 is idle, it further determines whether cross-group content is cached in entry1 to entry3; if it detects that entry2 caches cross-group content, the content required by access source A may be filled into entry2. Suppose no cross-group content is cached in entry1 to entry3; the adjacent cache entries may then be detected in turn, for example, entry4 to entry8, or the cache entries corresponding to the adjacent groups may be detected in turn, for example, those of group 2 and group 3, which is not limited herein. When entry5 is detected to be idle, the content required by access source A may be filled into entry5.

Suppose none of entry1 to entry8 is idle and no cross-group content is cached in entry1 to entry3. In this case, one implementation is to fill the content required by access source A into any one of entry1 to entry3, for example, by using a random algorithm to select a cache entry. Another implementation is to determine the priorities of entry1 to entry3 and, according to the priorities, select the cache entry with the highest priority or the one with the lowest priority, which is not limited herein. The priority of a cache entry may be determined according to its identifier, the frequency at which it is used, the frequency at which its content is replaced, and the like. When the access sources of a group each correspond to one or more cache entries, the priority of a cache entry may be determined according to the priority of its access source. The manner of determining priority is not limited herein.

In the above manner, the cache space in the cache can be fully utilized, and the content required by the access source can be filled into the cache.
Referring to FIG. 11, FIG. 11 is a schematic flowchart of still another content filling method according to an embodiment of this application. FIG. 11 shows an implementation that combines the methods illustrated in FIG. 6 and FIG. 8. As shown in FIG. 11, the method includes at least the following steps.

Step S1101: When content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs.

Step S1102: Detect whether the cache entries corresponding to the first group are idle.

Step S1103: If a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.

Step S1104: If none of the cache entries corresponding to the first group is idle, detect whether the cache entries corresponding to other groups are idle.

Step S1105: If a second cache entry among the cache entries corresponding to a second group of the other groups is idle, fill the content required by the access source into the second cache entry.

Step S1106: If none of the cache entries corresponding to the other groups is idle, detect whether cross-group content is cached in the cache entries corresponding to the first group, where the access source of the cross-group content does not belong to the first group.

Step S1107: If a third cache entry among the cache entries corresponding to the first group caches the cross-group content, fill the content required by the access source into the third cache entry.

Step S1108: If no cross-group content is cached in the cache entries corresponding to the first group, select any one of the cache entries corresponding to the first group as a fourth cache entry, and fill the content required by the access source into the fourth cache entry.

For detailed descriptions of the above steps, refer to the corresponding descriptions in the above embodiments; details are not described again herein.

The method shown in FIG. 11 is exemplarily described below with reference to FIG. 9.

As shown in FIG. 9, when the content required by access source A needs to be filled into a cache entry, the controller may first determine that the group to which access source A belongs is group 1, and further determine that the cache entries corresponding to group 1 are entry1 to entry3. The controller detects whether entry1 to entry3 are idle. If, as shown in FIG. 9, none of entry1 to entry3 is idle, the adjacent cache entries may be detected in turn, for example, entry4 to entry8, or the cache entries corresponding to the adjacent groups may be detected in turn, for example, those of group 2 and group 3, which is not limited herein. When entry5 is detected to be idle, it may optionally be determined whether access source A has permission to access entry5. If so, the content required by access source A may be filled into entry5; if not, the controller may further detect whether other cache entries are idle. For example, if entry6 is detected to be idle and access source A has permission to access entry6, the content required by access source A may be filled into entry6, which is not specifically limited herein. Suppose none of entry4 to entry8 is idle; it may then be further determined whether cross-group content is cached in entry1 to entry3. If it is detected that entry2 caches cross-group content, the content required by access source A may be filled into entry2.

Suppose none of entry1 to entry8 is idle and no cross-group content is cached in entry1 to entry3. In this case, one of entry1 to entry3 may be selected in the manner described above, and the content required by the access source filled into the selected cache entry.

In the above manner, the cache space in the cache can be fully utilized, and the content required by the access source can be filled into the cache.
It can be understood that, when the controller accesses cache entries in a parallel access manner, it may perform any one of the methods provided in the embodiments of this application.

The following describes apparatus embodiments used to implement the above method embodiments.

Referring to FIG. 12, FIG. 12 is a schematic structural diagram of a cache memory according to an embodiment of this application. As shown in FIG. 12, the cache 120 may include a controller 121 and a storage unit 123.

The storage unit 123 includes multiple cache entries entry1 to entryK, where K is a positive integer. The multiple cache entries may be divided into at least one group, where the division may be implemented in hardware, that is, the cache entries of one group in the storage unit are physically independent from the cache entries of other groups; or it may be implemented in software, which is not limited herein. FIG. 12 exemplarily shows one grouping manner; it should be understood that other grouping manners may exist, which is not limited herein. Each entry group corresponds to one access-source group, and each access-source group includes one or more access sources. As shown in FIG. 12, entry1 to entryk1 form group 1, entryk2 to entryk3 form group 2, and so on until entrykx to entryK form group J, where k1, k2, k3 through kx, and J are positive integers.

The controller 121 may include functional units. For example, as shown in FIG. 12, the controller 121 may include a grouping unit 1211, a determining unit 1213, a detecting unit 1215, and a filling unit 1217.

The grouping unit 1211 is configured to group the multiple cache entries, where each entry group corresponds to one access-source group, and the group includes at least one access source;

the determining unit 1213 is configured to: when content required by an access source needs to be filled into a cache entry, determine the first group to which the access source belongs;

the detecting unit 1215 is configured to detect whether the cache entries corresponding to the first group are idle; and

the filling unit 1217 is configured to: if a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.

The above functional units may be an application-specific integrated circuit (ASIC), an integrated logic circuit, and/or another device that can provide the above functions; alternatively, the above functional units may also be implemented by software, which is not limited herein.

Of course, the controller may further include other functional units to implement one or more steps of the above methods, which is not limited herein.

A person of ordinary skill in the art can understand that all or part of the procedures of the above method embodiments may be completed by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium; when executed, the program may include the procedures of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (15)

  1. A content filling method, comprising:
    when content required by an access source needs to be filled into a cache entry, determining a first group to which the access source belongs;
    detecting whether cache entries corresponding to the first group are idle; and
    if a first cache entry among the cache entries corresponding to the first group is idle, filling the content required by the access source into the first cache entry.
  2. The method according to claim 1, further comprising:
    if none of the cache entries corresponding to the first group is idle, detecting whether cache entries corresponding to other groups are idle; and
    if a second cache entry among the cache entries corresponding to a second group of the other groups is idle, filling the content required by the access source into the second cache entry.
  3. The method according to claim 1 or 2, further comprising:
    if none of the cache entries corresponding to the first group is idle, detecting whether cross-group content exists among the content cached in the cache entries corresponding to the first group, wherein an access source of the cross-group content does not belong to the first group; and
    if content cached in a third cache entry among the cache entries corresponding to the first group is the cross-group content, filling the content required by the access source into the third cache entry.
  4. The method according to claim 3, further comprising:
    if no cross-group content is cached in the cache entries corresponding to the first group, selecting any one of the cache entries corresponding to the first group as a fourth cache entry, and filling the content required by the access source into the fourth cache entry.
  5. The method according to any one of claims 1 to 4, further comprising:
    if multiple cache entries among the cache entries corresponding to the first group are idle, determining a cache entry with a highest priority among the multiple cache entries as the first cache entry.
  6. The method according to claim 5, wherein priorities of the cache entries corresponding to the first group are determined according to identifiers of the cache entries.
  7. The method according to any one of claims 1 to 6, wherein the group to which an access source belongs is determined according to a type of the access source or an identifier of the access source; or the group to which an access source belongs is determined according to a hash algorithm.
  8. A cache memory, comprising a controller and multiple cache entries;
    wherein the controller is configured to:
    group the multiple cache entries, wherein each entry group corresponds to one group, and the group comprises at least one access source;
    when content required by an access source needs to be filled into a cache entry, determine a first group to which the access source belongs;
    detect whether cache entries corresponding to the first group are idle; and
    if a first cache entry among the cache entries corresponding to the first group is idle, fill the content required by the access source into the first cache entry.
  9. The cache memory according to claim 8, wherein the controller is further configured to:
    if none of the cache entries corresponding to the first group is idle, detect whether cache entries corresponding to other groups are idle; and
    if a second cache entry among the cache entries corresponding to a second group of the other groups is idle, fill the content required by the access source into the second cache entry.
  10. The cache memory according to claim 8 or 9, wherein the controller is further configured to:
    if none of the cache entries corresponding to the first group is idle, detect whether cross-group content exists among the content cached in the cache entries corresponding to the first group, wherein an access source of the cross-group content does not belong to the first group; and
    if content cached in a third cache entry among the cache entries corresponding to the first group is the cross-group content, fill the content required by the access source into the third cache entry.
  11. The cache memory according to claim 10, wherein the controller is further configured to:
    if no cross-group content is cached in the cache entries corresponding to the first group, select any one of the cache entries corresponding to the first group as a fourth cache entry, and fill the content required by the access source into the fourth cache entry.
  12. The cache memory according to any one of claims 8 to 11, wherein the controller is further configured to:
    if multiple cache entries among the cache entries corresponding to the first group are idle, determine a cache entry with a highest priority among the multiple cache entries as the first cache entry.
  13. The cache memory according to claim 12, wherein priorities of the cache entries corresponding to the first group are determined according to identifiers of the cache entries.
  14. The cache memory according to any one of claims 8 to 13, wherein the group to which an access source belongs is determined according to a type of the access source or an identifier of the access source; or the group to which an access source belongs is determined according to a hash algorithm.
  15. A readable non-volatile storage medium storing computer instructions, wherein the computer instructions are used to perform the method according to any one of claims 1 to 7.
PCT/CN2018/105043 2017-09-14 2018-09-11 Content filling method and memory WO2019052442A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710829610.6 2017-09-14
CN201710829610.6A CN109508302B (zh) 2017-09-14 2017-09-14 Content filling method and memory

Publications (1)

Publication Number Publication Date
WO2019052442A1 true WO2019052442A1 (zh) 2019-03-21

Family

ID=65722419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/105043 WO2019052442A1 (zh) 2017-09-14 2018-09-11 一种内容填充方法和存储器

Country Status (2)

Country Link
CN (1) CN109508302B (zh)
WO (1) WO2019052442A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120198121A1 (en) * 2011-01-28 2012-08-02 International Business Machines Corporation Method and apparatus for minimizing cache conflict misses
CN104484288A (zh) * 2014-12-30 2015-04-01 Inspur Electronic Information Industry Co., Ltd. Method and device for replacing directory entries
US20160154734A1 (en) * 2014-11-28 2016-06-02 Samsung Electronics Co., Ltd. Cache memory device and electronic system including the same
CN106201919A (zh) * 2015-06-01 2016-12-07 ARM Limited Cache coherency
CN106537361A (zh) * 2014-07-17 2017-03-22 Qualcomm Incorporated Method and apparatus for flexibly partitioning a cache into component caches by sets and ways

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2271201B (en) * 1992-10-01 1995-12-13 Digital Equipment Int Low-overhead,non-coherent cache refreshment mechanism
US5956744A (en) * 1995-09-08 1999-09-21 Texas Instruments Incorporated Memory configuration cache with multilevel hierarchy least recently used cache entry replacement
US20060143401A1 (en) * 2004-12-27 2006-06-29 Jacob Doweck Method and apparatus for prefetching based on cache fill buffer hits
AU2008225151B2 (en) * 2007-03-12 2012-06-28 Citrix Systems, Inc. Systems and methods for cache operations
US8627448B2 (en) * 2010-11-02 2014-01-07 Jose Renato Santos Selective invalidation of packet filtering results


Also Published As

Publication number Publication date
CN109508302B (zh) 2023-04-18
CN109508302A (zh) 2019-03-22

Similar Documents

Publication Publication Date Title
US8949544B2 (en) Bypassing a cache when handling memory requests
US8745334B2 (en) Sectored cache replacement algorithm for reducing memory writebacks
US10152428B1 (en) Virtual memory service levels
US9098417B2 (en) Partitioning caches for sub-entities in computing devices
US10929308B2 (en) Performing maintenance operations
US7380065B2 (en) Performance of a cache by detecting cache lines that have been reused
US20100325374A1 (en) Dynamically configuring memory interleaving for locality and performance isolation
US8185692B2 (en) Unified cache structure that facilitates accessing translation table entries
US8402248B2 (en) Explicitly regioned memory organization in a network element
US8583874B2 (en) Method and apparatus for caching prefetched data
WO2019127104A1 (zh) Resource adjustment method in a cache, data access method, and apparatus
KR20060006794A (ko) Apparatus, system and method for cache allocation, and article comprising a machine-accessible medium
US11093410B2 (en) Cache management method, storage system and computer program product
KR101893966B1 (ko) Memory management method and apparatus, and memory controller
US10831673B2 (en) Memory address translation
US10853262B2 (en) Memory address translation using stored key entries
WO2019052442A1 (zh) Content filling method and memory
JP2020531950A (ja) Method and system for caching based on service level agreements
JP2008511882A (ja) Virtual address cache and method for sharing data using a unique task identifier
US10579519B2 (en) Interleaved access of memory
WO2021008552A1 (zh) Data reading method and apparatus, and computer-readable storage medium
US8028128B2 (en) Method for increasing cache directory associativity classes in a system with a register space memory
US11899642B2 (en) System and method using hash table with a set of frequently-accessed buckets and a set of less frequently-accessed buckets
US10866904B2 (en) Data storage for multiple data types
WO2020041583A1 (en) Method, apparatus, and system for storing memory encryption realm key ids

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18856206

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18856206

Country of ref document: EP

Kind code of ref document: A1