WO2018076684A1 - Resource allocation method and high-speed cache memory - Google Patents

Resource allocation method and high-speed cache memory

Info

Publication number
WO2018076684A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
processor
capacity
register
statistical
Prior art date
Application number
PCT/CN2017/086027
Other languages
English (en)
Chinese (zh)
Inventor
薛长花
孙志文
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司
Publication of WO2018076684A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Definitions

  • The present invention relates to the field of multiprocessors, and in particular, to a resource allocation method and a cache memory (Cache).
  • The area cost is a crucial factor, and how to guarantee processor performance while reducing the area cost has become a key issue that urgently needs to be solved.
  • multi-core shared cache is the most basic way to improve processor access performance.
  • Each processor core handles different tasks, and the tasks each core processes vary over time. In this way, each core's usage requirements for the Cache resource are different.
  • In the related art, the Cache capacity used by each processor is statically allocated, without considering that each processor's demand for Cache access changes dynamically. As a result, the access performance of processors that issue many Cache access requests is not improved, while the Cache resources allocated to processors that access the Cache infrequently are wasted, which is not conducive to balancing the area cost and access performance of a multi-core system.
  • In view of this, embodiments of the present invention are expected to provide a resource allocation method and a Cache, so that the performance of multiple processors is guaranteed while the area cost is reduced, improving the user experience.
  • An embodiment of the present invention provides a resource allocation method, applied to a Cache shared by multiple processors, where the Cache includes: a Cache controller and a Cache register; the Cache register includes: a statistical register corresponding to each processor and a lock register corresponding to each processor. The method includes: each statistical register counts the Cache capacity accessed by its corresponding processor within a preset time, obtains the Cache access capacity of each processor, and sends the Cache access capacity of each processor to the Cache controller; the Cache controller determines the Cache allocation capacity of each processor according to the Cache access capacity of each processor; and the Cache controller writes the Cache allocation capacity of each processor into the lock register corresponding to each processor.
  • In an embodiment, the step in which the Cache controller determines the Cache allocation capacity of each processor according to its Cache access capacity includes: the Cache controller determines the Cache allocation capacity of each processor in positive correlation with the size of each processor's Cache access capacity.
  • In an embodiment, the step in which the Cache controller determines the Cache allocation capacity of each processor in positive correlation with its Cache access capacity includes: the Cache controller determines the Cache allocation capacity of each processor according to a proportional relationship based on the size of each processor's Cache access capacity.
  • In an embodiment, the Cache register further includes: a counting register; correspondingly, the step in which each statistical register counts the Cache capacity accessed by its corresponding processor within a preset time and obtains the Cache access capacity of each processor includes: the Cache controller controls each statistical register to start counting the Cache capacity accessed by its corresponding processor, and starts the counting of the counting register; when determining that the counting of the counting register has ended, the Cache controller controls each statistical register to stop counting, obtaining the Cache access capacity of each processor.
  • the bit width of each of the statistical registers is positively correlated with the amount of software code running in the processor corresponding to each of the statistical registers.
  • An embodiment of the present invention further provides a Cache, where the Cache includes: a Cache controller and a Cache register; the Cache register includes: a statistical register corresponding to each processor and a lock register corresponding to each processor; wherein each statistical register is configured to count the Cache capacity accessed by its corresponding processor within a preset time, obtain the Cache access capacity of each processor, and send the Cache access capacity of each processor to the Cache controller; the Cache controller is configured to determine the Cache allocation capacity of each processor according to the Cache access capacity of each processor, and to write the Cache allocation capacity of each processor into the lock register corresponding to each processor.
  • In an embodiment, the Cache controller is configured to determine the Cache allocation capacity of each processor in positive correlation with the size of each processor's Cache access capacity.
  • In an embodiment, the Cache controller is further configured to determine the Cache allocation capacity of each processor according to a proportional relationship based on the size of each processor's Cache access capacity.
  • In an embodiment, the Cache register further includes: a counting register; correspondingly, the Cache controller is configured to control each statistical register to start counting the Cache capacity accessed by its corresponding processor and to start the counting of the counting register; and, when determining that the counting of the counting register has ended, to control each statistical register to stop counting the Cache capacity accessed by its corresponding processor, obtaining the Cache access capacity of each processor.
  • the bit width of each of the statistical registers is positively correlated with the amount of software code running in the processor corresponding to each of the statistical registers.
  • The resource allocation method and the Cache provided by the embodiments of the present invention are applied to a Cache shared by multiple processors, where the Cache includes: a Cache controller and a Cache register; the Cache register includes: a statistical register corresponding to each processor and a lock register corresponding to each processor. First, each statistical register counts the Cache capacity accessed by its corresponding processor within a preset time, thereby obtaining the Cache access capacity of each processor, and sends the Cache access capacity of each processor to the Cache controller; then, the Cache controller determines the Cache allocation capacity of each processor according to the Cache access capacity of each processor; finally, the Cache controller writes the Cache allocation capacity of each processor into the lock register corresponding to that processor. In this way, the Cache resources allocated to each processor are re-allocated according to each processor's actual needs, achieving the purpose of dynamically managing the multi-core shared Cache resource, thereby ensuring multi-processor access performance while reducing the area cost and improving the user experience.
  • FIG. 1 is a schematic flowchart of a resource allocation method according to an embodiment of the present invention.
  • FIG. 2 is an optional structural diagram of a multiprocessor and a shared Cache according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a Cache according to an embodiment of the present invention.
  • Embodiments of the present invention provide a resource allocation method, which is applied to a Cache shared by multiple processors.
  • Here, each processor corresponds to an identifier (ID) used to identify that processor; and, before the Cache is used, a Cache capacity has been statically configured for each processor.
  • Here, the Cache can be a set-associative structure.
  • The Cache capacity can include multiple ways, each way including a fixed number of lines. For example, when the system includes 4 processors, the Cache capacity can be 4 ways, and each way can include 10 lines (a data-structure sketch follows);
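  • To make the layout concrete, the following C sketch models a Cache organized this way. It is an illustration only, not part of the patent disclosure: the structure and field names, and the 64-byte line payload, are assumptions for the example; the 4-way, 10-lines-per-way sizes come from the text.

```c
#include <stdint.h>

#define NUM_WAYS       4   /* example from the text: 4 ways in total  */
#define LINES_PER_WAY 10   /* example from the text: 10 lines per way */

/* One Cache line (the 64-byte payload size is an assumption). */
typedef struct {
    uint32_t tag;
    uint8_t  valid;
    uint8_t  data[64];
} cache_line_t;

/* A set-associative Cache whose ways can be allocated to processors:
 * owner[w] records which processor way w is currently locked to. */
typedef struct {
    cache_line_t line[NUM_WAYS][LINES_PER_WAY];
    uint8_t      owner[NUM_WAYS];
} cache_model_t;
```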
  • the Cache includes: a Cache controller and a Cache register; the Cache register includes: a statistic register corresponding to each processor and a lock register corresponding to each processor;
  • FIG. 1 is a schematic flowchart of a resource allocation method according to an embodiment of the present invention. As shown in FIG. 1, the method includes:
  • S101: Each statistical register counts the Cache capacity accessed by its corresponding processor within a preset time, obtains the Cache access capacity of each processor, and sends the Cache access capacity of each processor to the Cache controller;
  • It should be noted that a Cache capacity has been statically configured for each processor before the Cache is used; therefore, when each processor starts running, it first uses the Cache capacity that has been statically configured for it. The statically configured Cache capacity may be evenly distributed among the processors, or may be allocated according to each processor's frequency of use.
  • Each of the above statistical registers is a configurable multi-bit-width register, where the bit width of each statistical register is positively correlated with the amount of software code running in its corresponding processor; that is, the bit width of each statistical register can be determined according to the actual application of its processor, avoiding statistical failures (such as counter overflow) caused by an inappropriate register width.
  • Here, the preset time may be preset in the Cache controller by means of code, for example, 1000 cycles, where the cycle length is related to the frequency of the processor; the Cache access capacity of each processor may be expressed in units of lines, for example, 20 lines, 5 lines, and so on.
  • Optionally, S101 may include: the Cache controller controls each statistical register to start counting the Cache capacity accessed by its corresponding processor, and starts the counting of the counting register; when determining that the counting of the counting register has ended, the Cache controller controls each statistical register to stop counting the Cache capacity accessed by its corresponding processor, obtaining the Cache access capacity of each processor.
  • In practical applications, when the Cache controller determines that one or more processors have been running program code for a time greater than a preset time threshold, the Cache controller generates a trigger signal for each statistical register and for the counting register, triggering each statistical register and the counting register to start working; that is, each statistical register starts counting the Cache capacity accessed by its corresponding processor, and the counting register starts counting. The trigger signal may be an enable signal, a rising-edge trigger signal, or a falling-edge trigger signal, which is not specifically limited in the embodiments of the present invention.
  • The above counting register is also a configurable multi-bit-width register, and its bit width is related to the preset time: when the preset time is long, the bit width of the counting register is large; when the preset time is short, the bit width of the counting register is small. That is, the bit width of the counting register can be chosen according to the preset time. For example, the value of the counting register may be configured to 5000 in decimal, and the counting register is decremented by 1 in each processor cycle; the counting ends when the value reaches 0 (see the sketch below).
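  • As an illustration only (not part of the disclosure), the following C sketch models this countdown-driven statistics window in software. The register layout, the 4-processor configuration, and the one-line-per-access simplification are assumptions made for the example.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PROCS 4

/* Hypothetical software model of the statistical and counting registers. */
typedef struct {
    uint32_t stat[NUM_PROCS]; /* statistical registers: lines accessed per processor */
    uint32_t count;           /* counting register: decremented once per cycle       */
    int      enabled;         /* set by the Cache controller's trigger signal        */
} cache_regs_t;

/* Trigger from the controller: open the statistics window. */
void start_window(cache_regs_t *r, uint32_t preset_cycles) {
    for (int i = 0; i < NUM_PROCS; i++) r->stat[i] = 0;
    r->count   = preset_cycles; /* e.g. 5000, as in the text */
    r->enabled = 1;
}

/* Called once per cycle; proc_access[i] is 1 if processor i accessed one
 * Cache line this cycle. Returns 1 when the window closes. */
int tick(cache_regs_t *r, const int proc_access[NUM_PROCS]) {
    if (!r->enabled) return 0;
    for (int i = 0; i < NUM_PROCS; i++)
        r->stat[i] += (uint32_t)proc_access[i];
    if (--r->count == 0) { /* counting ends: controller reads the stat registers */
        r->enabled = 0;
        return 1;
    }
    return 0;
}

int main(void) {
    cache_regs_t r;
    const int pattern[NUM_PROCS] = {1, 0, 1, 1}; /* toy access pattern */
    start_window(&r, 5000);
    while (!tick(&r, pattern)) { /* run until the counting register reaches zero */ }
    for (int i = 0; i < NUM_PROCS; i++)
        printf("processor %d accessed %u lines\n", i, r.stat[i]);
    return 0;
}
```

  • In hardware these would be physical registers updated by the Cache's access pipeline; the model only illustrates the start/stop protocol between the counting register and the statistical registers.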
  • When the counting of the counting register ends, the Cache controller controls each statistical register to stop counting, obtaining the Cache access capacity of each processor; the Cache access capacity of each processor can then be sent to the Cache controller, so that the Cache controller knows the actual Cache capacity accessed by each processor within the preset time.
  • S102: The Cache controller determines, according to the Cache access capacity of each processor, the Cache allocation capacity of each processor.
  • the Cache controller receives the Cache access capacity of each processor. After knowing the actual Cache access capacity of each processor within a preset time, the Cache capacity can be re-allocated for each processor according to actual conditions;
  • Optionally, S102 may include: the Cache controller determines the Cache allocation capacity of each processor in positive correlation with the size of each processor's Cache access capacity.
  • Here, the locality principle of Cache accesses includes spatial locality and temporal locality. Spatial locality: addresses the processor will access in the near future are likely to be near the currently accessed address. Temporal locality: if an address is accessed, it is likely to be accessed again in the near future. According to this locality principle, a processor that currently accesses the Cache frequently is likely to keep accessing it frequently, so the Cache allocation capacity of each processor is determined in positive correlation with the size of its Cache access capacity.
  • In practical applications, the Cache controller sorts the processors by the size of their Cache access capacity: for a processor with a large Cache access capacity, the determined Cache allocation capacity is large; for a processor with a small Cache access capacity, the determined Cache allocation capacity is small;
  • Optionally, the step in which the Cache controller determines the Cache allocation capacity of each processor in positive correlation with its Cache access capacity may include: the Cache controller determines the Cache allocation capacity of each processor according to a proportional relationship based on the size of each processor's Cache access capacity.
  • For example, if the Cache access capacities of two processors are 20 lines and 5 lines respectively, and the total Cache capacity is 4 ways, then, according to the proportional relationship, the Cache allocation capacities determined for the two processors are 3 ways and 1 way (the exact shares are 20/25 × 4 = 3.2 ways and 5/25 × 4 = 0.8 ways, rounded to whole ways that sum to 4; see the sketch below);
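  • A minimal C sketch of this proportional allocation, reproducing the 20-line/5-line example. It is illustrative only: the largest-remainder rounding rule is an assumption, since the text gives only the 3-way/1-way result.

```c
#include <stdio.h>

#define NUM_PROCS  2
#define TOTAL_WAYS 4

/* Distribute TOTAL_WAYS Cache ways in proportion to each processor's
 * measured access capacity (in lines), rounding by largest remainder. */
void allocate_ways(const unsigned access[NUM_PROCS], unsigned ways[NUM_PROCS]) {
    unsigned total = 0, assigned = 0, rem[NUM_PROCS];
    for (int i = 0; i < NUM_PROCS; i++) total += access[i];
    if (total == 0) total = 1;                    /* avoid division by zero */
    for (int i = 0; i < NUM_PROCS; i++) {
        ways[i] = access[i] * TOTAL_WAYS / total; /* floor of the share     */
        rem[i]  = access[i] * TOTAL_WAYS % total; /* fractional remainder   */
        assigned += ways[i];
    }
    while (assigned < TOTAL_WAYS) { /* hand leftover ways to largest remainders */
        int best = 0;
        for (int i = 1; i < NUM_PROCS; i++)
            if (rem[i] > rem[best]) best = i;
        ways[best]++;
        rem[best] = 0;
        assigned++;
    }
}

int main(void) {
    unsigned access[NUM_PROCS] = {20, 5}, ways[NUM_PROCS];
    allocate_ways(access, ways);
    printf("ways: %u %u\n", ways[0], ways[1]); /* prints "ways: 3 1" */
    return 0;
}
```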
  • S103: The Cache controller writes the Cache allocation capacity of each processor into the lock register corresponding to each processor.
  • After determining the Cache allocation capacity of each processor, the Cache controller configures the lock register corresponding to each processor. For example, when the Cache allocation capacities of the two processors are determined to be 3 ways and 1 way, the corresponding lock registers are written with binary 10 and 00 respectively, thereby dynamically adjusting the Cache capacity of each processor (a sketch of one possible encoding follows).
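  • The text does not define the lock-register encoding, but the example (3 ways written as binary 10, 1 way as binary 00) is consistent with storing the allocated way count minus one. The following sketch is written under that assumption, which should be treated as an inference rather than the patent's definition.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical lock-register encoding inferred from the example above:
 * store (allocated ways - 1), so 3 ways -> 0b10 and 1 way -> 0b00.
 * The patent itself does not spell out this mapping. */
uint32_t lock_value(unsigned ways) {
    assert(ways >= 1 && ways <= 4); /* 4-way Cache, as in the example */
    return ways - 1u;
}
```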
  • FIG. 2 is an optional structural diagram of a multiprocessor and a shared Cache according to an embodiment of the present invention; as shown in FIG. 2, it includes: n+1 processors and a Cache 20;
  • the Cache 20 includes a Cache controller 201, a Cache memory 202, and a Cache register 203;
  • the Cache register 203 includes a counting register, n+1 lock registers corresponding one-to-one to the n+1 processors, and n+1 statistical registers corresponding one-to-one to the n+1 processors;
  • Based on the structure shown in FIG. 2, the above resource allocation method includes:
  • Step A: When the Cache controller 201 determines that one or more processors have been running program code for a time greater than a preset time threshold, it generates an enable signal as the valid trigger for the n+1 statistical registers and the counting register;
  • Step B: Each statistical register starts counting the Cache capacity accessed by its corresponding processor, and the counting register starts counting;
  • Step C: When the counting of the counting register ends, the Cache controller 201 controls each statistical register to stop counting the Cache capacity accessed by its corresponding processor, obtaining the Cache access capacity of the corresponding processor;
  • Step D: The Cache controller 201 determines the Cache allocation capacity of each processor according to the proportional relationship of the processors' Cache access capacities;
  • Step E: The Cache controller 201 writes the Cache allocation capacity of each processor into the corresponding lock register.
  • It can be understood that a larger number of ways can be allocated at once to a processor whose statistical register records a large Cache access capacity, so that the performance of a processor that frequently accesses the Cache is directly improved, while a smaller number of ways can be allocated to the processors that access the Cache less frequently. In subsequent accesses, the Cache access capacity of each processor is again counted according to the above embodiment: if the frequently accessing processor still accesses the Cache frequently in the following time period, the larger number of ways continues to be allocated to it; but if the locality of the program is not as good and the accesses of the currently active processor decrease, the Cache capacity allocated to it is reduced in turn. In this way, the Cache capacity of each processor can be flexibly and dynamically adjusted step by step, ensuring the performance of multiple processors.
  • In this way, the Cache requirement of each processor is dynamically counted, and the Cache capacity of each processor is dynamically adjusted as those requirements change over different time periods, thereby avoiding an increase in on-chip memory size merely to improve performance, and dynamically improving processor access performance while saving cost.
  • The resource allocation method provided by the embodiments of the present invention is applied to a Cache shared by multiple processors, where the Cache includes: a Cache controller and a Cache register; the Cache register includes: a statistical register corresponding to each processor and a lock register corresponding to each processor. First, each statistical register counts the Cache capacity accessed by its corresponding processor within a preset time, thereby obtaining the Cache access capacity of each processor, and sends the Cache access capacity of each processor to the Cache controller; then, the Cache controller determines the Cache allocation capacity of each processor according to the Cache access capacity of each processor; finally, the Cache controller writes the Cache allocation capacity of each processor into the lock register corresponding to that processor. In this way, the Cache resources allocated to each processor are re-allocated according to each processor's actual needs, achieving the purpose of dynamically managing the multi-core shared Cache resource, thereby ensuring multi-processor access performance while reducing the area cost and improving the user experience.
  • FIG. 3 is a schematic structural diagram of a Cache according to an embodiment of the present invention.
  • As shown in FIG. 3, the Cache includes: a Cache controller 31 and a Cache register 32; the Cache register 32 includes: a statistical register 321 corresponding to each processor and a lock register 322 corresponding to each processor; wherein
  • Each statistical register 321 is configured to count the Cache capacity accessed by its corresponding processor within a preset time, obtain the Cache access capacity of each processor, and send the Cache access capacity of each processor to the Cache controller 31; the Cache controller 31 is configured to determine the Cache allocation capacity of each processor according to the Cache access capacity of each processor, and to write the Cache allocation capacity of each processor into the lock register 322 corresponding to each processor.
  • Each of the statistical registers 321 is a configurable multi-bit-width register, and the bit width of each statistical register 321 is positively correlated with the amount of software code running in the processor corresponding to each statistical register 321.
  • the Cache controller 31 receives the Cache access capacity of each processor. After knowing the actual Cache access capacity of each processor within a preset time, the Cache capacity can be re-allocated for each processor according to the actual situation. In an optional embodiment, the Cache controller 31 is configured to determine the Cache allocation capacity of each processor according to a positive correlation according to the size of the Cache access capacity of each processor.
  • In an embodiment, the Cache controller 31 is configured to determine the Cache allocation capacity of each processor according to a proportional relationship based on the size of each processor's Cache access capacity.
  • the preset time may also be set by hardware.
  • In an embodiment, the Cache controller 31 is configured to control each statistical register 321 to start counting the Cache capacity accessed by its corresponding processor and to start the counting of the counting register; when determining that the counting of the counting register has ended, the Cache controller 31 controls each statistical register 321 to stop counting the Cache capacity accessed by its corresponding processor, obtaining the Cache access capacity of each processor.
  • The resource allocation method provided by the embodiments of the present invention is applied to a Cache shared by multiple processors, where the Cache includes: a Cache controller and a Cache register; the Cache register includes: a statistical register corresponding to each processor and a lock register corresponding to each processor. First, each statistical register counts the Cache capacity accessed by its corresponding processor within a preset time, thereby obtaining the Cache access capacity of each processor, and sends the Cache access capacity of each processor to the Cache controller; then, the Cache controller determines the Cache allocation capacity of each processor according to the Cache access capacity of each processor; finally, the Cache controller writes the Cache allocation capacity of each processor into the lock register corresponding to that processor, so that the Cache resources allocated to each processor are re-allocated according to each processor's actual needs, achieving the purpose of dynamically managing the multi-core shared Cache resource, thereby ensuring multi-processor access performance while reducing the area cost and improving the user experience.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a logical function division, and there may be other division manners in actual implementation, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • The units described above as separate components may or may not be physically separated; components displayed as units may or may not be physical units, and may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit;
  • the unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • The foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the foregoing method embodiments; the foregoing storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
  • the above-described integrated unit of the present invention may be stored in a computer readable storage medium if it is implemented in the form of a software function module and sold or used as a standalone product.
  • The technical solutions of the embodiments of the present invention may, in essence, be embodied in the form of a software product stored in a storage medium and including a plurality of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Disclosed is a resource allocation method. The method is applicable to a cache shared by multiple processors. The cache comprises: a cache controller and a cache register. The cache register comprises: a statistical register corresponding to each processor and a lock register corresponding to each processor. The method comprises the following steps: each statistical register compiles statistics on the cache capacity accessed within a preset time by the processor corresponding to that statistical register, acquires the cache access capacity of each processor, and transmits the cache access capacity of each processor to the cache controller; the cache controller determines, on the basis of the cache access capacity of each processor, the cache capacity allocated to each processor; and the cache controller writes the cache capacity allocated to each processor into the lock register corresponding to each processor. The present invention also relates to a cache.
PCT/CN2017/086027 2016-10-31 2017-05-25 Resource allocation method and high-speed cache memory WO2018076684A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610931953.9 2016-10-31
CN201610931953.9A CN108021437A (zh) 2016-10-31 Resource allocation method and cache memory (Cache)

Publications (1)

Publication Number Publication Date
WO2018076684A1

Family

ID=62023079

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/086027 WO2018076684A1 Resource allocation method and high-speed cache memory

Country Status (2)

Country Link
CN (1) CN108021437A
WO (1) WO2018076684A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398786A (zh) * 2008-09-28 2009-04-01 东南大学 Method for implementing a software-controllable Cache for embedded applications
CN101883046A (zh) * 2010-06-21 2010-11-10 杭州开鼎科技有限公司 Data cache architecture applied to an EPON terminal system
US20120151144A1 (en) * 2010-12-08 2012-06-14 William Judge Yohn Method and system for determining a cache memory configuration for testing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08147218A (ja) * 1994-11-24 1996-06-07 Fujitsu Ltd Cache control device
CN101989236B (zh) * 2010-11-04 2012-05-09 浙江大学 Method for implementing an instruction cache lock
JP6260303B2 (ja) * 2014-01-29 2018-01-17 富士通株式会社 Arithmetic processing device and method for controlling arithmetic processing device

Also Published As

Publication number Publication date
CN108021437A (zh) 2018-05-11

Similar Documents

Publication Publication Date Title
US9626295B2 (en) Systems and methods for scheduling tasks in a heterogeneous processor cluster architecture using cache demand monitoring
US8079031B2 (en) Method, apparatus, and a system for dynamically configuring a prefetcher based on a thread specific latency metric
US10355966B2 (en) Managing variations among nodes in parallel system frameworks
KR101572079B1 (ko) 시스템 관리 모드의 프로세서에 상태 스토리지를 제공하기 위한 장치, 방법 및 시스템
JP5073673B2 (ja) マルチスレッド・プロセッサにおける性能の優先順位付け
US20070156971A1 (en) Monitor implementation in a multicore processor with inclusive LLC
US20120297216A1 (en) Dynamically selecting active polling or timed waits
US20130124810A1 (en) Increasing memory capacity in power-constrained systems
US20160203083A1 (en) Systems and methods for providing dynamic cache extension in a multi-cluster heterogeneous processor architecture
US10331499B2 (en) Method, apparatus, and chip for implementing mutually-exclusive operation of multiple threads
US20130246781A1 (en) Multi-core system energy consumption optimization
TW201015318A (en) Performance based cache management
US9836396B2 (en) Method for managing a last level cache and apparatus utilizing the same
US20130054896A1 (en) System memory controller having a cache
CN109308220B (zh) 共享资源分配方法及装置
Bostancı et al. DR-STRaNGe: end-to-end system design for DRAM-based true random number generators
WO2014206078A1 (fr) Procédé, dispositif et système d'accès à une mémoire
US9069621B2 (en) Submitting operations to a shared resource based on busy-to-success ratios
Gupta et al. Timecube: A manycore embedded processor with interference-agnostic progress tracking
US10248331B2 (en) Delayed read indication
WO2018076684A1 (fr) Procédé d'attribution de ressources et mémoire cache à grande vitesse
CN108009121B (zh) 面向应用的动态多核配置方法
Selfa et al. A hardware approach to fairly balance the inter-thread interference in shared caches
US11552892B2 (en) Dynamic control of latency tolerance reporting values
US7603522B1 (en) Blocking aggressive neighbors in a cache subsystem

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17864470

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17864470

Country of ref document: EP

Kind code of ref document: A1