CN102708190A - Directory cache method for node control chip in cache coherent non-uniform memory access (CC-NUMA) system
- Publication number
- CN102708190A CN102708190A CN2012101492273A CN201210149227A CN102708190A CN 102708190 A CN102708190 A CN 102708190A CN 2012101492273 A CN2012101492273 A CN 2012101492273A CN 201210149227 A CN201210149227 A CN 201210149227A CN 102708190 A CN102708190 A CN 102708190A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention provides a directory cache method for a node control chip in a cache coherent non-uniform memory access (CC-NUMA) system. A directory cache module is designed to implement and optimize access control for memory. Research and design of computer architectures often exploit the locality of application memory accesses: the tendency of recently accessed data to be accessed again soon is called temporal locality. Based on this property, a cache is introduced into the directory-based CC-NUMA system to buffer directory entries, and a least recently used (LRU) replacement algorithm is employed, which markedly reduces the access pressure on the directory and relieves the bottleneck effect of memory access.
Description
Technical field
The present invention relates to the field of computer technology, and specifically to a directory cache (Cache) method for the node control chip in a CC-NUMA (Cache Coherent Non-Uniform Memory Access) system.
Background technology
With the increasingly broad and deep application of high-performance computing, ever higher demands are placed on the architecture and implementation of high-performance computers; the CC-NUMA structure is one of the important architectures. Building a large-scale CC-NUMA system is constrained by many factors, among which the cache coherence protocol is the key factor limiting system scalability. To address this problem, besides designing an effective protocol, research into a scalable directory structure and an efficient directory storage and access mechanism can ensure the efficient realization of a directory-based cache coherence protocol in the system.
The directory structure is another key factor affecting the scalability of a CC-NUMA system. Some systems adopt a fully mapped directory structure; even if such a system uses a scalable coherence protocol, its directory structure cannot scale in implementation.
Summary of the invention
The purpose of the present invention is to provide a directory cache method for the node control chip in a CC-NUMA system.
The purpose of the invention is achieved in the following manner: a cache module is introduced into the node control chip of the CC-NUMA system to implement and optimize the cache coherence protocol. This not only reduces the access pressure on memory but also improves the processing efficiency of the node controller and reduces the overhead of cache coherence protocol handling. The details are as follows:
A directory cache module is designed to implement and optimize access control for memory. Research and design of computer architectures often exploit the locality of application memory accesses: recently accessed data tend to be accessed again in the near future, a property known as temporal locality. Based on this property, a cache is introduced into the directory-based CC-NUMA system to buffer directory entries, and a least recently used (LRU) replacement algorithm is adopted, reducing the pressure of directory accesses and relieving the bottleneck effect of memory access. The directory cache buffers recently and frequently used directory entries; its purpose is to reduce the access latency of the directory, shorten the protocol-processing time of cache coherence (CC) messages, and improve the throughput with which the node controller handles messages. Every CC coherence message entering the node controller must access the directory cache to obtain the directory corresponding to its data, so that subsequent protocol processing can proceed. Because the directory cache has limited capacity and cannot hold all directory entries, when the directory entry needed by a CC message misses in the directory cache, external memory must be accessed to fetch the directory. Meanwhile, to improve the efficiency of concurrent accesses, a non-blocking working mode is adopted: a previous uncompleted access operation does not block the execution of subsequent accesses. For the mapping between the directory cache and the memory banks, 8-way set-associative mapping is adopted; from a practical standpoint, 8-way set associativity reduces the miss rate about as effectively as full associativity while better reducing system overhead. The implementation steps are as follows:
The directory cache module consists of 4 directory cache banks, 1 data bypass module, and 1 control and status register module, wherein:
1) The 4 directory cache banks are mutually independent and correspond to 4 memory addresses; each bank has an identical design. Each bank has a capacity of 128KB with 8-way set-associative mapping; each cache line is 64B (i.e. 512b, determined by the width of the memory controller interface), for a total of 256 sets. Each directory cache bank adopts the least recently used replacement algorithm to improve chip performance, and works in non-blocking mode: a previous uncompleted access operation does not block the execution of subsequent accesses;
2) To increase the fault tolerance of the system, the whole module also contains a directory data bypass module. In debug mode, data are transferred through the data path of the directory data bypass module. To simplify the implementation and reduce logic-resource usage, a blocking working mode is adopted: all access operations execute strictly in order, and a later operation cannot enter until the preceding operation has completed;
3) To increase controllability and observability, the module contains a control and status register (CSR) module, which holds user-set control information and the error status information of each directory cache bank;
On this basis, directory cache operations comprise two kinds: the first is reading the directory, the second is writing the directory. The detailed flow of these two operations is introduced below:
For a read-directory operation, depending on whether it hits, there are two cases:
1) On a hit, the corresponding 4 bytes (32b) of data are returned directly;
2) On a miss, a memory read request is issued to load the data of a cache line (512b) and create the directory entries of that cache line; then, according to the lowest-order bits of the request message's address, the corresponding 32-bit directory entry is determined and the data returned;
For a write-directory operation, depending on whether it hits and whether a replacement is needed, there are three cases:
1) On a hit, the 32-bit directory data in the request message are written directly, and are actually written to external memory only upon a later replacement;
2) On a miss where the set still has a free way, a read command is sent to the memory controller to load the 512-bit data of the cache line and create a cache directory entry; then, according to the lowest-order bits of the request message's address, the 32-bit directory entry in the message is written to the corresponding position of the cache line, and is actually written to external memory only upon a later replacement;
3) On a miss where the set is full, one way is selected according to LRU and evicted from the array, and the data of the evicted cache line are written to the corresponding external memory; then a read command is sent to the memory controller to load the 512-bit data of a cache line and create a cache directory entry; then, according to the lowest-order bits of the request message's address, the 32-bit directory entry in the message is written to the corresponding position of the cache line;
It should be pointed out that the above operations differ between the directory cache banks and the data bypass module. First, the data bypass module adopts a first-come-first-served (FCFS) policy: operations execute strictly in order, out-of-order execution is not allowed, and a later operation is processed only after the preceding operation has completed. Second, the data bypass module does not hold multiple directory cache lines; every read or write must fetch the corresponding cache line from memory.
The introduced cache module allows remote data to enter the processor cache, with the coherence of data among the caches maintained by hardware.
The node control chip connects to the local processors within the node, and connects to other node control chips through routers to form a large-scale system; its main functions are processor interface control, cache coherence control, and interconnection network interface control.
The introduced cache module is characterized by 8-way set-associative mapping, a least recently used (Least Recently Used, LRU) replacement algorithm, and a non-blocking pipelined working mode, improving chip performance.
The beneficial effects of the invention are as follows: it proposes an effective solution to the memory-access bottleneck in large-scale CC-NUMA systems; while significantly improving the efficiency and scalability of the system, it reduces implementation complexity as much as possible, which gives the invention high practical application value and makes it worthy of further technical research.
Embodiment
A directory cache module is designed and implemented to complete and optimize access control for memory. Research and design of computer architectures often exploit the locality of application memory accesses: recently accessed data tend to be accessed again in the near future, a property known as temporal locality. Based on this property, a cache is introduced into the directory-based CC-NUMA system to buffer directory entries, and a least recently used (Least Recently Used, LRU) replacement algorithm is adopted, which markedly reduces the pressure of directory accesses and relieves the bottleneck effect of memory access. The directory cache buffers recently and frequently used directory entries; its purpose is to reduce the access latency of the directory, shorten the protocol-processing time of CC messages, and improve the throughput with which the node controller handles messages. Every CC coherence message entering the node controller must access the directory cache to obtain the directory corresponding to its data, so that subsequent protocol processing can proceed. Because the directory cache has limited capacity and cannot hold all directory entries, when the directory entry needed by a CC message misses in the directory cache, external memory must be accessed to fetch the directory. Meanwhile, to improve the efficiency of concurrent accesses, we adopt a non-blocking (Non-Blocking) working mode: a previous uncompleted access operation does not block the execution of subsequent accesses. For the mapping between the directory cache and the memory banks, we have adopted 8-way set-associative mapping (4 groups); from a practical standpoint, 8-way set associativity reduces the miss rate about as effectively as full associativity, and reduces system overhead as much as possible.
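The temporal-locality argument above can be illustrated with a minimal software model of an LRU-replaced directory cache. This is a sketch only: the entry layout (`owner`, `sharers`) and all names are illustrative assumptions, not taken from the patent.

```python
from collections import OrderedDict

class DirectoryCacheModel:
    """Toy model of a directory cache with LRU replacement (illustrative)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # address -> cached directory entry
        self.hits = self.misses = 0

    def lookup(self, addr, fetch_from_memory):
        if addr in self.entries:
            self.hits += 1
            self.entries.move_to_end(addr)    # mark as most recently used
            return self.entries[addr]
        self.misses += 1
        entry = fetch_from_memory(addr)       # miss: access external memory
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[addr] = entry
        return entry

# Temporal locality: a repeated access to a recent address hits in the cache.
cache = DirectoryCacheModel(capacity=2)
fetch = lambda addr: {"owner": None, "sharers": set()}  # hypothetical entry format
cache.lookup(0x40, fetch)
cache.lookup(0x40, fetch)
print(cache.hits, cache.misses)  # → 1 1
```

Because every CC message consults this structure before protocol processing, each hit saves one external-memory round trip, which is exactly the access-pressure reduction the patent claims.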
Embodiment
The directory cache module in the present invention consists of 4 directory cache banks, 1 data bypass module, and 1 control and status register module. Wherein:
1) The 4 directory cache banks are mutually independent and correspond to 4 memory addresses; each bank has an identical design. Each bank has a capacity of 128KB with 8-way set-associative mapping; each cache line is 64B (i.e. 512b, determined by the width of the memory controller interface), for a total of 256 sets. Each directory cache bank adopts the least recently used (Least Recently Used, LRU) replacement algorithm to improve chip performance, and works in non-blocking (Non-Blocking) mode: a previous uncompleted access operation does not block the execution of subsequent accesses;
2) To increase the fault tolerance of the system, the whole module also contains a directory data bypass module. In debug mode, data can be transferred through the data path of the directory data bypass module. To simplify the implementation and reduce logic-resource usage, a blocking (Blocking) working mode is adopted: all access operations execute strictly in order, and before the preceding operation completes, a later operation cannot enter and must wait for the preceding operation to finish;
3) To increase controllability and observability, the module contains a control and status register (CSR) module, which mainly holds user-set control information and the error status information of each directory cache bank.
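The cache geometry stated in step 1) (128KB per bank, 8 ways, 64B lines, 256 sets) is internally consistent, as a short calculation shows. The address bit-field split at the end is a derived assumption for illustration; the patent does not specify it.

```python
CAPACITY = 128 * 1024   # 128KB per directory cache bank
LINE_SIZE = 64          # 64B = 512b cache line
WAYS = 8                # 8-way set associative

lines = CAPACITY // LINE_SIZE             # total cache lines per bank
sets = lines // WAYS                      # sets per bank; matches the stated 256
entries_per_line = (LINE_SIZE * 8) // 32  # 32-bit directory entries per line

# Hypothetical address split for set selection (not specified in the patent):
offset_bits = LINE_SIZE.bit_length() - 1  # bits selecting the byte within a line
index_bits = sets.bit_length() - 1        # bits selecting the set

print(lines, sets, entries_per_line, offset_bits, index_bits)
# → 2048 256 16 6 8
```

The 16 entries per line also explain why a read returns "the corresponding 4 bytes (32b)": the lowest-order address bits pick one of the 16 directory entries packed into a 512-bit line.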
On this basis, directory cache operations mainly comprise two kinds: one is reading the directory, the other is writing the directory. The detailed flow of these two operations is introduced below.
For a read-directory operation, depending on whether it hits, two cases are handled:
1) On a hit, the corresponding 4 bytes (32b) of data are returned directly;
2) On a miss, a memory read request is issued to load the data of a cache line (512b) and create the directory entries of that cache line; then, according to the lowest-order bits of the request message's address, the corresponding 32-bit directory entry is determined and the data returned.
For a write-directory operation, depending on whether it hits and factors such as whether a replacement is needed, three cases are handled:
1) On a hit, the 32-bit directory data in the request message are written directly, and are actually written to external memory only upon a later replacement;
2) On a miss where the set still has a free way, a read command is sent to the memory controller to load the data of the cache line (512b) and create a cache directory entry; then, according to the lowest-order bits of the request message's address, the 32-bit directory entry in the message is written to the corresponding position of the cache line, and is actually written to external memory only upon a later replacement;
3) On a miss where the set is full, one way is selected according to the least recently used (LRU) method and evicted from the array, and the data of the evicted cache line are written to the corresponding external memory; then a read command is sent to the memory controller to load the data of a cache line (512b) and create a cache directory entry; then, according to the lowest-order bits of the request message's address, the 32-bit directory entry in the message is written to the corresponding position of the cache line.
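The read- and write-directory flows above can be sketched as a model of one 8-way set. This is an illustrative assumption, not the hardware design: `mem_read`/`mem_write` stand in for the memory-controller interface, a line is modeled as 16 x 32-bit entries, and tag/index decoding is omitted.

```python
class DirCacheSet:
    """One 8-way set of the directory cache: write-back with LRU replacement."""
    WAYS = 8

    def __init__(self, mem_read, mem_write):
        self.lines = {}   # tag -> list of 16 x 32-bit directory entries (512b line)
        self.lru = []     # tags ordered from least to most recently used
        self.mem_read, self.mem_write = mem_read, mem_write

    def _touch(self, tag):
        if tag in self.lru:
            self.lru.remove(tag)
        self.lru.append(tag)           # most recently used goes last

    def _fill(self, tag):
        if len(self.lines) >= self.WAYS:            # set full: evict by LRU
            victim = self.lru.pop(0)
            self.mem_write(victim, self.lines.pop(victim))  # write line back
        self.lines[tag] = self.mem_read(tag)        # load 512b line from memory

    def read_dir(self, tag, slot):
        if tag not in self.lines:                   # miss: fetch the line first
            self._fill(tag)
        self._touch(tag)
        return self.lines[tag][slot]                # return one 32b entry

    def write_dir(self, tag, slot, entry):
        if tag not in self.lines:                   # miss: load, then update
            self._fill(tag)
        self._touch(tag)
        self.lines[tag][slot] = entry               # real write-back deferred

# Usage: a write hit only updates the cached line; external memory is
# updated when the line is later evicted, matching cases 1)-3) above.
writebacks = []
s = DirCacheSet(mem_read=lambda t: [0] * 16,
                mem_write=lambda t, line: writebacks.append(t))
s.write_dir(5, 3, 0xABC)
print(s.read_dir(5, 3))  # → 2748
```

The deferred write-back in `write_dir` is the key point of the three write cases: directory updates stay in the cache and reach external memory only when LRU eviction forces them out.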
It should be pointed out that the above operations differ between the directory cache banks and the data bypass module. First, the data bypass module adopts a first-come-first-served (FCFS) policy: operations execute strictly in order, out-of-order execution is not allowed, and a later operation is processed only after the preceding operation has completed. Second, the data bypass module does not hold multiple directory cache lines; every read or write must fetch the corresponding cache line from memory.
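The contrast drawn here, the bypass module's strict FCFS blocking order versus the banks' non-blocking mode, can be sketched with a toy latency model. The operation names and latencies are invented for illustration; only the ordering behavior reflects the text.

```python
def run_blocking(ops, latency):
    """Bypass module: FCFS, each operation completes before the next starts."""
    t, finish = 0, {}
    for op in ops:              # strict order, no overlap
        t += latency[op]
        finish[op] = t
    return finish

def run_non_blocking(ops, latency):
    """Cache banks: an outstanding slow access does not block later ones."""
    finish = {}
    for start, op in enumerate(ops):   # assume one issue per cycle
        finish[op] = start + latency[op]
    return finish

ops = ["A", "B", "C"]
latency = {"A": 10, "B": 1, "C": 1}    # A is a slow miss to external memory
print(run_blocking(ops, latency))      # → {'A': 10, 'B': 11, 'C': 12}
print(run_non_blocking(ops, latency))  # → {'A': 10, 'B': 2, 'C': 3}
```

Under blocking order the slow miss A delays B and C; under the non-blocking mode B and C complete long before A, which is why the banks use it for throughput while the debug-only bypass path accepts the simpler blocking design.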
Apart from the technical features described in this specification, the rest is technology known to those skilled in the art.
Claims (4)
1. A directory cache method for the node control chip in a CC-NUMA system, characterized in that a cache module is introduced into the node control chip of the CC-NUMA system to implement and optimize the cache coherence protocol; this not only reduces the access pressure on memory but also improves the processing efficiency of the node controller and reduces the overhead of cache coherence protocol handling; the details are as follows:
a directory cache module is designed to implement and optimize access control for memory: research and design of computer architectures often exploit the locality of application memory accesses, wherein recently accessed data tend to be accessed again in the near future, a property known as temporal locality; based on this property, a cache is introduced into the directory-based CC-NUMA system to buffer directory entries, and a least recently used replacement algorithm is adopted, reducing the pressure of directory accesses and relieving the bottleneck effect of memory access; the directory cache buffers recently and frequently used directory entries, with the purpose of reducing the access latency of the directory, shortening the protocol-processing time of CC messages, and improving the throughput with which the node controller handles messages; every CC coherence message entering the node controller must access the directory cache to obtain the directory corresponding to its data, so that subsequent protocol processing can proceed; because the directory cache has limited capacity and cannot hold all directory entries, when the directory entry needed by a CC message misses in the directory cache, external memory must be accessed to fetch the directory; meanwhile, to improve the efficiency of concurrent accesses, a non-blocking working mode is adopted: a previous uncompleted access operation does not block the execution of subsequent accesses; for the mapping between the directory cache and the memory banks, 8-way set-associative mapping is adopted, which from a practical standpoint reduces the miss rate about as effectively as full associativity while better reducing system overhead; the implementation steps are as follows:
the directory cache module consists of 4 directory cache banks, 1 data bypass module, and 1 control and status register module, wherein:
1) the 4 directory cache banks are mutually independent and correspond to 4 memory addresses, each bank having an identical design; each bank has a capacity of 128KB with 8-way set-associative mapping, and each cache line is 64B, i.e. 512b, determined by the width of the memory controller interface, for a total of 256 sets; each directory cache bank adopts the least recently used replacement algorithm to improve chip performance, and works in non-blocking mode: a previous uncompleted access operation does not block the execution of subsequent accesses;
2) to increase the fault tolerance of the system, the whole module also contains a directory data bypass module; in debug mode, data are transferred through the data path of the directory data bypass module; to simplify the implementation and reduce logic-resource usage, a blocking working mode is adopted: all access operations execute strictly in order, and a later operation cannot enter until the preceding operation has completed;
3) to increase controllability and observability, the module contains a control and status register (CSR) module, which holds user-set control information and the error status information of each directory cache bank;
on this basis, directory cache operations comprise two kinds: the first is reading the directory, the second is writing the directory; the detailed flow of these two operations is as follows:
for a read-directory operation, depending on whether it hits, there are two cases:
1) on a hit, the corresponding 4 bytes (32b) of data are returned directly;
2) on a miss, a memory read request is issued to load the data of a cache line and create the directory entries of that cache line; then, according to the lowest-order bits of the request message's address, the corresponding 32-bit directory entry is determined and the data returned;
for a write-directory operation, depending on whether it hits and whether a replacement is needed, there are three cases:
1) on a hit, the 32-bit directory data in the request message are written directly, and are actually written to external memory only upon a later replacement;
2) on a miss where the set still has a free way, a read command is sent to the memory controller to load the data of the cache line and create a cache directory entry; then, according to the lowest-order bits of the request message's address, the 32-bit directory entry in the message is written to the corresponding position of the cache line, and is actually written to external memory only upon a later replacement;
3) on a miss where the set is full, one way is selected according to LRU and evicted from the array, and the data of the evicted cache line are written to the corresponding external memory; then a read command is sent to the memory controller to load the data of a cache line and create a cache directory entry; then, according to the lowest-order bits of the request message's address, the 32-bit directory entry in the message is written to the corresponding position of the cache line;
it should be pointed out that the above operations differ between the directory cache banks and the data bypass module: first, the data bypass module adopts a first-come-first-served (FCFS) policy, executing strictly in order with no out-of-order execution, a later operation being processed only after the preceding operation has completed; second, the data bypass module does not hold multiple directory cache lines, and every read or write must fetch the corresponding cache line from memory.
2. The method according to claim 1, characterized in that the introduced cache module allows remote data to enter the processor cache, with the coherence of data among the caches maintained by hardware.
3. The method according to claim 1, characterized in that the node control chip connects to the local processors within the node, and connects to other node control chips through routers to form a large-scale system; its main functions are processor interface control, cache coherence control, and interconnection network interface control.
4. The method according to claim 1, characterized in that the introduced cache module adopts 8-way set-associative mapping, a least recently used replacement algorithm, and a non-blocking pipelined working mode, improving chip performance.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210149227.3A CN102708190B (en) | 2012-05-15 | 2012-05-15 | Directory cache method for node control chip in CC-NUMA system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102708190A true CN102708190A (en) | 2012-10-03 |
CN102708190B CN102708190B (en) | 2016-09-28 |
Family
ID=46900956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210149227.3A Active CN102708190B (en) | Directory cache method for node control chip in CC-NUMA system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102708190B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103729309A (en) * | 2014-01-15 | 2014-04-16 | 浪潮电子信息产业股份有限公司 | Method for cataloging Cache consistency |
CN104506362A (en) * | 2014-12-29 | 2015-04-08 | 浪潮电子信息产业股份有限公司 | Method for system state switching and monitoring on CC-NUMA (cache coherent-non uniform memory access architecture) multi-node server |
CN104965797A (en) * | 2015-05-22 | 2015-10-07 | 浪潮电子信息产业股份有限公司 | High-end fault-tolerant computer directory architecture implementation method |
CN104978283A (en) * | 2014-04-10 | 2015-10-14 | 华为技术有限公司 | Memory access control method and device |
CN105740168A (en) * | 2016-01-23 | 2016-07-06 | 中国人民解放军国防科学技术大学 | Fault-tolerant directory cache controller |
CN107634982A (en) * | 2017-07-27 | 2018-01-26 | 郑州云海信息技术有限公司 | A kind of multipath server interconnects chip remote agent's catalogue implementation method |
CN113703958A (en) * | 2021-07-15 | 2021-11-26 | 山东云海国创云计算装备产业创新中心有限公司 | Data access method, device, equipment and storage medium among multi-architecture processors |
GB2617302A (en) * | 2020-12-23 | 2023-10-04 | Ibm | Tuning query generation patterns |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050021913A1 (en) * | 2003-06-25 | 2005-01-27 | International Business Machines Corporation | Multiprocessor computer system having multiple coherency regions and software process migration between coherency regions without cache purges |
CN1664795A (en) * | 2005-03-30 | 2005-09-07 | 中国人民解放军国防科学技术大学 | Method for supporting multiple processor node internal organ data sharing by directory protocol |
CN102318275A (en) * | 2011-08-02 | 2012-01-11 | 华为技术有限公司 | Method, device, and system for processing messages based on CC-NUMA |
CN102346714A (en) * | 2011-10-09 | 2012-02-08 | 西安交通大学 | Consistency maintenance device for multi-kernel processor and consistency interaction method |
- 2012-05-15: application CN201210149227.3A filed; granted as patent CN102708190B (active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050021913A1 (en) * | 2003-06-25 | 2005-01-27 | International Business Machines Corporation | Multiprocessor computer system having multiple coherency regions and software process migration between coherency regions without cache purges |
CN1664795A (en) * | 2005-03-30 | 2005-09-07 | 中国人民解放军国防科学技术大学 | Method for supporting multiple processor node internal organ data sharing by directory protocol |
CN102318275A (en) * | 2011-08-02 | 2012-01-11 | 华为技术有限公司 | Method, device, and system for processing messages based on CC-NUMA |
CN102346714A (en) * | 2011-10-09 | 2012-02-08 | 西安交通大学 | Consistency maintenance device for multi-kernel processor and consistency interaction method |
Non-Patent Citations (1)
Title |
---|
Pan Guoteng et al., "Research on the scalability of directory-based cache coherence protocols", Computer Engineering & Science (《计算机工程与科学》) *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103729309A (en) * | 2014-01-15 | 2014-04-16 | 浪潮电子信息产业股份有限公司 | Method for cataloging Cache consistency |
CN103729309B (en) * | 2014-01-15 | 2017-06-30 | 浪潮电子信息产业股份有限公司 | A kind of catalogue Cache coherence methods |
CN104978283A (en) * | 2014-04-10 | 2015-10-14 | 华为技术有限公司 | Memory access control method and device |
CN104978283B (en) * | 2014-04-10 | 2018-06-05 | 华为技术有限公司 | A kind of memory access control method and device |
CN104506362A (en) * | 2014-12-29 | 2015-04-08 | 浪潮电子信息产业股份有限公司 | Method for system state switching and monitoring on CC-NUMA (cache coherent-non uniform memory access architecture) multi-node server |
CN104965797A (en) * | 2015-05-22 | 2015-10-07 | 浪潮电子信息产业股份有限公司 | High-end fault-tolerant computer directory architecture implementation method |
CN105740168A (en) * | 2016-01-23 | 2016-07-06 | 中国人民解放军国防科学技术大学 | Fault-tolerant directory cache controller |
CN105740168B (en) * | 2016-01-23 | 2018-07-13 | 中国人民解放军国防科学技术大学 | A kind of fault-tolerant directory caching controller |
CN107634982A (en) * | 2017-07-27 | 2018-01-26 | 郑州云海信息技术有限公司 | A kind of multipath server interconnects chip remote agent's catalogue implementation method |
GB2617302A (en) * | 2020-12-23 | 2023-10-04 | Ibm | Tuning query generation patterns |
CN113703958A (en) * | 2021-07-15 | 2021-11-26 | 山东云海国创云计算装备产业创新中心有限公司 | Data access method, device, equipment and storage medium among multi-architecture processors |
CN113703958B (en) * | 2021-07-15 | 2024-03-29 | 山东云海国创云计算装备产业创新中心有限公司 | Method, device, equipment and storage medium for accessing data among multi-architecture processors |
Also Published As
Publication number | Publication date |
---|---|
CN102708190B (en) | 2016-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102708190A (en) | Directory cache method for node control chip in cache coherent non-uniform memory access (CC-NUMA) system | |
US11803486B2 (en) | Write merging on stores with different privilege levels | |
KR102101622B1 (en) | Memory controlling device and computing device including the same | |
US8180981B2 (en) | Cache coherent support for flash in a memory hierarchy | |
JP5417879B2 (en) | Cache device | |
CN104765575B (en) | information storage processing method | |
TWI454909B (en) | Memory device, method and system to reduce the power consumption of a memory device | |
US7516275B2 (en) | Pseudo-LRU virtual counter for a locking cache | |
US9411728B2 (en) | Methods and apparatus for efficient communication between caches in hierarchical caching design | |
CN110018971B (en) | cache replacement technique | |
US20150177986A1 (en) | Information processing device | |
CN102541761B (en) | Read-only cache memory applying on embedded chips | |
CN102541510A (en) | Instruction cache system and its instruction acquiring method | |
CN207008602U (en) | A kind of storage array control device based on Nand Flash memorizer multichannel | |
US20170109277A1 (en) | Memory system | |
CN100520739C (en) | Rapid virtual-to-physical address converting device and its method | |
US10929291B2 (en) | Memory controlling device and computing device including the same | |
JP6679570B2 (en) | Data processing device | |
CN111124297B (en) | Performance improving method for stacked DRAM cache | |
CN100456232C (en) | Storage access and dispatching device aimed at stream processing | |
US8510493B2 (en) | Circuit to efficiently handle data movement within a cache controller or on-chip memory peripheral | |
CN111736900A (en) | Parallel double-channel cache design method and device | |
US20220011966A1 (en) | Reduced network load with combined put or get and receiver-managed offset | |
US11921634B2 (en) | Leveraging processing-in-memory (PIM) resources to expedite non-PIM instructions executed on a host | |
CN111694777B (en) | DMA transmission method based on PCIe interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |