WO2006109421A1 - Cache Memory - Google Patents
- Publication number
- WO2006109421A1 (PCT/JP2006/305389)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- cache memory
- cache
- data
- address
- memory
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0875—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0844—Multiple simultaneous or quasi-simultaneous cache accessing
- G06F12/0846—Cache with multiple tag or data arrays being simultaneously accessible
- G06F12/0848—Partitioned cache, e.g. separate instruction and operand caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0888—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using selective caching, e.g. bypass
Definitions
- the present invention relates to a cache memory for speeding up memory access of a processor.
- Patent Document 1 and Non-Patent Document 1 disclose a victim cache as a technique for reducing cache misses.
- FIG. 1 is a block diagram showing an example of a system having a victim cache in the prior art.
- the system shown in FIG. 1 includes a CPU 501, a cache memory 502, and a fully associative victim cache 503.
- the victim cache 503 has at least one entry that holds a tag address and line data.
- the victim cache 503 holds the line data most recently evicted from the cache memory 502.
- temporal locality: the property that recently accessed data is likely to be accessed again in the near future
- spatial locality: the property that data near recently accessed data is likely to be accessed
- Patent Document 1 U.S. Pat.No. 5261066
- Non-Patent Literature 1 Jouppi, N. P. [1990], "Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers," Proc. 17th Annual Int'l Symposium on Computer Architecture, pp. 364-373
- An object of the present invention is to provide a cache memory that improves memory access efficiency even for data whose spatial locality is strong but whose temporal locality is weak (in contrast to data whose temporal locality is strong, with accesses concentrated in a very short time and in a very small area), and thereby improves the usage efficiency of the entire cache memory.
- a cache memory of the present invention comprises a first cache memory; a second cache memory that operates in parallel with the first cache memory; determining means for, when both the first cache memory and the second cache memory miss, judging the truth of a condition on an attribute of the memory access target data that missed; and control means for storing the memory data in the second cache memory when the judgment is true, and in the first cache memory when the judgment is false.
- the attribute of the data to be accessed is an access address
- the determination means determines whether or not the access address is within a specific address range.
- by making the address range correspond to a memory area that stores data with strong spatial locality but weak temporal locality, such data is stored in the second cache memory.
- the first cache memory has a general-purpose configuration, while the second cache memory has a configuration that is particularly efficient for data with strong spatial locality but weak temporal locality. This improves the efficiency of memory access to such data.
- in addition, since data with weak temporal locality no longer replaces data in the first cache memory, the usage efficiency of the first cache memory can be improved. In this way, the usage efficiency of the entire cache memory can be improved.
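The division of labor described above can be sketched in software. The sketch below is purely illustrative: the address range and the return labels are our own examples, not values from the specification.

```python
# Sketch of the miss-time determination: an access address inside the
# configured range is refilled into the second (sub) cache, everything
# else into the first (main) cache. The range below is a made-up example.
SUB_RANGE_BASE = 0x90002000
SUB_RANGE_SIZE = 128 * 1024  # bytes

def refill_target(miss_address):
    """Return which cache receives the refilled line on a miss."""
    if SUB_RANGE_BASE <= miss_address < SUB_RANGE_BASE + SUB_RANGE_SIZE:
        return "sub"   # strong spatial, weak temporal locality
    return "main"      # general-purpose data
```

For example, an address at the start of the configured region selects the sub cache, while an address just past the end of the 128-kbyte region selects the main cache.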
- the capacity of the second cache memory may be smaller than that of the first cache memory.
- the first cache memory may be a set associative cache memory
- the second cache memory may be a fully associative cache memory.
- the first cache memory has N1 ways, each way of the first cache memory has N2 entries, and the second cache memory has M entries, where M is smaller than N1 and smaller than N2.
- the M may be 2.
- the address range may be held in holding means that can be accessed by the processor, so that the address range can be set programmably from the processor.
- according to the cache memory of the present invention, the cache is physically separated by address range, realizing non-uniform caching according to the address. Since memory data within the address range does not replace data in the first cache memory, the usage efficiency of the first cache memory can be improved, and hence the usage efficiency of the entire cache memory can be improved.
- the address range can be set in a programmable manner by the processor.
- FIG. 1 is a block diagram showing an example of a system having a victim cache in the prior art.
- FIG. 2 is a block diagram showing a configuration of a cache memory in the first embodiment.
- FIG. 3 is an explanatory diagram showing a structure of a data entry.
- FIG. 4 is an explanatory diagram showing a configuration of an address entry in the main cache.
- FIG. 5 is an explanatory diagram showing a configuration of an address entry in the sub-cache.
- FIG. 6 is an explanatory diagram showing the structure of an address entry in the address table.
- FIG. 7 is an explanatory diagram showing a program example for setting the table entry register.
- FIG. 8 is an explanatory diagram showing the relationship between the memory area in which the subcache attribute is set and the subcache 2.
- FIG. 9 is a diagram showing an example of a pattern of the number of memory accesses to each piece of data.
- FIG. 10 is a flowchart showing the operation of the cache memory under the control of the control unit 6 when the processor accesses the memory.
- FIG. 11 is a block diagram showing a configuration of a cache memory in the second embodiment.
- FIG. 2 is a block diagram showing a configuration of the cache system in the first embodiment.
- the cache memory 1000 includes a main cache 1, a sub cache 2, an address register 3, an address table 4, a comparator 5, and a control unit 6. Data within the address range set in the address table 4 is cached in the sub cache 2, and data outside the address range is cached in the main cache 1.
- the main cache 1 is a set associative cache memory, and includes 16 ways 00 to 15, a decoder 120, 16 comparators 150 to 165, and a bus I / F 170.
- Way 00 includes 16 entries, entry 0000 to entry 0015.
- Ways 01 to 15 are the same as way 00, and will not be described.
- the entry 0000 includes an address entry 0000a for holding a tag address and a data entry 0000d for holding line data. Entries 0001 to 0015 have the same configuration as entry 0000, so their description is omitted.
- the decoder 120 decodes a part of the memory access address held in the address register 3 (referred to as a set index) and selects one entry from each of the 16 ways 00 to 15.
- the 16 entries thus selected, one from each way, are called a set.
- each of the selected 16 entries outputs the tag address held in its address entry to the comparators 150 to 165.
- Comparator 150 is provided corresponding to way 00. It compares the valid tag address output from the entry selected by the decoder 120 with the tag output from the address register 3, and if they match, outputs a hit signal indicating that way 00 has been hit to the control unit 6. Whether or not the tag address is valid depends on the valid bit output from the entry; that is, the comparator 150 outputs the comparison result only when the valid bit is valid. The comparators 151 to 165 are the same as the comparator 150 except that they correspond to ways 01 to 15, so their description is omitted.
- the bus I/F 170 is an interface for inputting and outputting data between the data bus and the data entry of the hit entry in the set selected by the decoder 120.
- the sub cache 2 is a fully associative cache memory, and includes a way 20, a way 21, a comparator 250, a comparator 251, and a bus I/F 270.
- the way 20 has one entry 200.
- the entry 200 includes an address entry 200a for holding a tag address and a data entry 200d for holding line data.
- the description of way 21 is omitted because it has the same configuration.
- Comparator 250 is provided corresponding to way 20. It compares the valid tag address output from the address entry 200a with the address portion (tag and set index) output from the address register 3, and if they match, outputs a hit signal indicating that way 20 has been hit to the control unit 6. Whether or not the address portion is valid depends on the valid bit output from the entry; that is, the comparator 250 outputs the comparison result only when the valid bit is valid. The comparator 251 is the same as the comparator 250 except that it corresponds to way 21, so its description is omitted.
- the address register 3 holds a memory access address output from the processor.
- the address register 3 is 32 bits long, and the figure also shows the bit positions.
- the tag and set index (bits 31-7) in address register 3 specify 128 bytes of line data.
- the set index (bits 10-7) identifies one of the 16 sets.
- the subline address (SL: bits 6 and 5) identifies one of the four sublines.
- the byte address (byte_A) specifies one byte of data in the subline.
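The address fields above can be extracted with simple shifts and masks. The helper below is illustrative, not part of the specification; the tag, set-index, and subline widths follow the bit assignments given above, and the 5-bit byte field (bits 4 to 0) follows from the 32-byte subline implied by a 128-byte line split into four sublines.

```python
def split_address(addr):
    """Decompose a 32-bit access address into the fields described above:
    tag, set index, subline address, and byte address."""
    return {
        "tag":       (addr >> 11) & 0x1FFFFF,  # bits 31-11
        "set_index": (addr >> 7)  & 0xF,       # bits 10-7: one of 16 sets
        "subline":   (addr >> 5)  & 0x3,       # bits 6-5: one of 4 sublines
        "byte":      addr         & 0x1F,      # bits 4-0: byte in subline
    }
```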
- the address table 4 holds information indicating an address range and a subcache attribute indicating whether or not to use the subcache 2 in association with each other. This information indicating the address range is set by the processor and indicates the address range of data to be used by the subcache 2.
- when both the first cache memory and the second cache memory miss, the comparator 5 compares the information indicating the address range held in the address table 4 with the address portion output from the address register 3. The comparator 5 thereby determines whether or not the missed memory access address is within the above address range.
- FIG. 3 is an explanatory diagram showing the data structure of the data entry in the main cache 1 and the sub cache 2. Each data entry holds 128 bytes of line data. One line data is divided into four subline data 1 to 4. As a result, cache operations such as write-back and replacement can be performed not only in units of lines but also in units of sublines.
- FIG. 4 is an explanatory diagram showing the configuration of the address entry in the main cache 1.
- the tag address corresponds to the tag in address register 3.
- the valid bits V1 to V4 correspond to subline data 1 to 4, and indicate whether the corresponding subline data is valid or invalid.
- Dirty bits D1 to D4 correspond to sub-line data 1 to 4, and indicate whether or not there was a write to the corresponding sub-line data.
- the LRU bit L indicates the access order from the processor for the 16 entries in the set. To represent the access order from 1st to 16th exactly, at least 4 bits per entry would originally be necessary; here, however, a single LRU bit per entry represents the access order coarsely as 1 (new) or 0 (old). The replacement target is selected from among the entries whose L is 0 (old).
- the weak bit W indicates whether an entry in the set may be replaced immediately.
- the weak bit W is a bit for forcibly setting the entry's access order to the oldest.
- an entry whose weak bit is 1 (oldest) is selected for replacement regardless of its LRU bit.
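A minimal sketch of this victim-selection rule, assuming one L bit (1 = new, 0 = old) and one W bit per entry as described above; the dictionary-based entry layout is ours, not the patent's:

```python
def select_victim(entries):
    """Pick the index of the entry to replace within a set.

    entries: list of dicts with keys "L" (1=new, 0=old) and "W" (1=weak).
    A weak entry is treated as oldest and chosen regardless of its LRU
    bit; otherwise an entry whose L bit marks it as old is chosen.
    """
    for i, e in enumerate(entries):
        if e["W"] == 1:      # weak bit overrides the LRU order
            return i
    for i, e in enumerate(entries):
        if e["L"] == 0:      # old according to the 1-bit LRU
            return i
    return 0                 # all entries new: fall back to the first
```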
- FIG. 5 is an explanatory diagram showing the configuration of the address entry in the subcache 2.
- the tag address corresponds to bits 31 to 7 (tag and set index) of address register 3.
- the valid bits V1 to V4, dirty bits D1 to D4, and LRU bit L are the same as those in FIG. 4.
- FIG. 6 is an explanatory diagram showing the configuration of the table entry register included in the address table 4.
- Address table 4 has at least one table entry register.
- the table entry register TER1 in the figure holds the base address BA, the page size PS, the sub-cache attribute SC, and the valid bit V.
- Base address BA indicates the beginning of the address range.
- the subcache attribute SC indicates whether memory data corresponding to the address range is allocated to the subcache 2 or not.
- the valid bit V indicates whether the table entry register TER1 is valid.
- the table entry register can be written and read directly from the processor.
- FIG. 7 is an explanatory diagram showing a program example for setting the table entry register TER 1 in the address table 4.
- "equ" in the first and second lines is a pseudo-instruction for the assembler to define the value of the variable.
- the portion of each line following the comment symbol is a comment.
- the first line defines the address of the table entry register TER1 as the value of the variable ADR_TER1.
- the second line defines the data (0x90002205) to be set in the table entry register TER1 as the value of the variable DAT_TER1.
- the base address BA is 0x90002000
- the page size PS is 10 (128 kbytes)
- the subcache attribute SC is 1 (assigned to the subcache)
- the valid bit V is 1 (valid).
- the mov instruction on the sixth line is an instruction that transfers the value of the variable ADR_TER1 to the register r28.
- the mov instruction on the seventh line is an instruction that transfers the value of the variable DAT_TER1 to the register r29.
- the st instruction on the eighth line is an instruction to write the data in the register r29 using the contents of the register r28 as an address.
- as a result, the value of the variable DAT_TER1 is set in the table entry register TER1.
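The program example pairs PS = 10 with a 128-kbyte region. One encoding consistent with that pairing is size = 2^(PS + 7) bytes (2^17 = 128 kbytes); the text here does not spell the encoding out, so the formula below is purely an assumption for illustration, as is the helper name.

```python
def region_from_ter(base_address, page_size_field):
    """Hypothetical decode of a table entry register's address range,
    assuming size = 2 ** (page_size_field + 7) bytes (PS=10 -> 128 KB)."""
    size = 1 << (page_size_field + 7)
    return base_address, base_address + size

# With the example values BA = 0x90002000 and PS = 10, this yields the
# region [0x90002000, 0x90022000), i.e. 128 kbytes.
```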
- FIG. 8 is an explanatory diagram showing the relationship between the memory area in which the subcache attribute is set and the subcache 2.
- the figure shows the memory area allocated to the sub cache by the example program shown in FIG. 7. That is, memory data in the area of page size PS (128 kbytes) starting at the base address BA (address 0x90002000) is cached in the sub cache 2 instead of the main cache 1.
- This memory area is suitable for storing data with strong spatial locality such as array data A and B but weak temporal locality.
- memory data in areas other than this memory area is cached in the main cache 1.
- areas other than this memory area are clearly suitable for storing data whose temporal locality is strong, with accesses concentrated in a very short time, and whose spatial locality is confined to a small area (for example, no larger than a line or than the size of the sub cache 2).
- FIG. 9 is a diagram illustrating an example of a memory access count pattern for each data.
- This figure shows, for example, the number of times each data is accessed in the compression / decompression processing of moving images.
- as shown, data with a small number of accesses exists over a wide area, while accesses tend to concentrate on a small amount of data. Using this tendency, for example, the wide data area with few accesses can be assigned to the sub cache 2, and the frequently accessed data area that fits within the size of the main cache 1 can be assigned to the main cache 1.
- in this way, the main cache 1 and the sub cache 2 can be used efficiently.
- FIG. 10 is a flowchart showing the operation of the cache memory under the control of the control unit 6 when the processor accesses the memory.
- the control unit 6 determines whether the memory access hits, that is, whether the access address matches any tag address held in the main cache 1 or the sub cache 2 (S91). If it hits (S91: yes), the hit entry is read or written (S92).
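The hit/miss flow of FIG. 10, combined with the address-range determination, can be condensed into a few lines. The dictionary-based caches and the helper names below are illustrative stand-ins for the hardware structures, not the patented implementation:

```python
def access(addr, main_cache, sub_cache, in_sub_range):
    """One memory access: hit in either cache, or refill on a miss.

    main_cache / sub_cache: dicts keyed by line number (illustrative).
    in_sub_range: predicate standing in for the address-table lookup.
    """
    line = addr >> 7  # tag + set index (bits 31-7) identify a 128-byte line
    if line in main_cache or line in sub_cache:
        return "hit"                 # S91: yes -> read/write the entry (S92)
    # miss: the range determination decides which cache takes the new line
    target = sub_cache if in_sub_range(addr) else main_cache
    target[line] = "line data"       # simplified refill
    return "miss"
```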
- as described above, in the cache memory of the present embodiment, different cache memories are used depending on whether or not the memory data is within the address range set in the address table 4.
- the cache is thus physically separated by address range, and heterogeneous caching is realized according to the address. Since memory data within the address range does not replace data in the main cache 1, the usage efficiency of the main cache 1 can be improved.
- furthermore, by making the address range set in the address table 4 correspond to a memory area that stores data with strong spatial locality but weak temporal locality, such data is held in the sub cache 2 and other data is held in the main cache 1.
- as a result, the usage efficiency of the main cache 1 can be improved, and in this way, the usage efficiency of the entire cache memory can be improved.
- moreover, since the sub cache 2 has a much smaller capacity than the main cache 1, the usage efficiency of the entire cache memory can be improved with only a small amount of additional hardware and little added manufacturing cost.
- furthermore, since the address table 4 can be set programmably from the processor, the main cache 1 and the sub cache 2 can be used flexibly for each application or task.
- address table 4 may be assigned statically without being programmable.
- although the main cache 1 has been described as a 16-way set associative cache memory, the main cache 1 may be an n-way set associative cache memory (n other than 16).
- likewise, although an example in which each way has 16 entries has been described, each way may have m entries (m other than 16).
- the number of entries in the sub cache 2 may be other than two. To suppress the increase in hardware scale and hardware cost, however, the number of entries in the sub cache 2 is preferably about one to several.
- although the main cache 1 has been described as a set associative cache memory, it may be a fully associative cache memory or a direct-mapped cache memory.
- the sub-cache 2 may be a direct mapped cache memory or a set-associative cache memory.
- the victim cache shown in FIG. 1 may be attached to the main cache 1. The victim cache shown in FIG. 1 may also be attached to the sub cache 2.
- FIG. 11 is a block diagram showing a configuration of the cache memory according to the second embodiment.
- the cache memory 2000 shown in FIG. 11 differs from the cache memory shown in FIG. 2 in that it further includes a sub cache 2a, an address table 4a, and a comparator 5a, and includes a control unit 6a instead of the control unit 6. In the following, the description of points that are the same as in FIG. 2 is omitted, and the differences are mainly described.
- the sub-cache 2 a is a fully associative cache memory like the sub-cache 2.
- the address table 4a holds an address range indicating a memory area to be allocated to the sub-cache 2a.
- the comparator 5a determines whether or not the tag of the address register 3 is included in the address range held in the address table 4a.
- the control unit 6a controls the sub-cache 2a in addition to the function of the control unit 6.
- the subcache 2a may be operated simultaneously with the subcache 2 or may be operated alternatively depending on the application or task.
- a plurality of sub-caches can be operated simultaneously or alternatively, and can be used flexibly according to an application or task.
- the utilization efficiency of the cache memory 2000 can be improved.
- the present invention is suitable for a cache memory for high-speed memory access.
- the present invention is suitable for an on-chip cache memory, an off-chip cache memory, a data cache memory, an instruction cache memory, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006522577A JPWO2006109421A1 (ja) | 2005-04-08 | 2006-03-17 | キャッシュメモリ |
US11/910,831 US7970998B2 (en) | 2005-04-08 | 2006-03-17 | Parallel caches operating in exclusive address ranges |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2005112840 | 2005-04-08 | ||
JP2005-112840 | 2005-04-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006109421A1 true WO2006109421A1 (ja) | 2006-10-19 |
Family
ID=37086712
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2006/305389 WO2006109421A1 (ja) | 2005-04-08 | 2006-03-17 | キャッシュメモリ |
Country Status (5)
Country | Link |
---|---|
US (1) | US7970998B2 (ja) |
JP (1) | JPWO2006109421A1 (ja) |
CN (1) | CN101156139A (ja) |
TW (1) | TW200639636A (ja) |
WO (1) | WO2006109421A1 (ja) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4767361B2 (ja) * | 2008-03-31 | 2011-09-07 | パナソニック株式会社 | キャッシュメモリ装置、キャッシュメモリシステム、プロセッサシステム |
US8312219B2 (en) * | 2009-03-02 | 2012-11-13 | International Business Machines Corporation | Hybrid caching techniques and garbage collection using hybrid caching techniques |
KR20140066392A (ko) * | 2012-11-23 | 2014-06-02 | 삼성전자주식회사 | 캐시 메모리 및 캐시 메모리를 포함하는 어플리케이션 프로세서의 데이터 관리 방법 |
US11741020B2 (en) * | 2019-05-24 | 2023-08-29 | Texas Instruments Incorporated | Methods and apparatus to facilitate fully pipelined read-modify-write support in level 1 data cache using store queue and data forwarding |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0281241A (ja) * | 1988-09-19 | 1990-03-22 | Matsushita Electric Ind Co Ltd | データ処理装置 |
JPH02236651A (ja) * | 1989-02-08 | 1990-09-19 | Hitachi Ltd | メモリバッファ |
JPH04303248A (ja) * | 1991-01-15 | 1992-10-27 | Philips Gloeilampenfab:Nv | マルチバッファデータキャッシュを具えているコンピュータシステム |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5257359A (en) * | 1989-02-08 | 1993-10-26 | Hitachi Microsystems, Inc. | Instruction cache buffer with program-flow control |
US5317718A (en) * | 1990-03-27 | 1994-05-31 | Digital Equipment Corporation | Data processing system and method with prefetch buffers |
US5261066A (en) * | 1990-03-27 | 1993-11-09 | Digital Equipment Corporation | Data processing system and method with small fully-associative cache and prefetch buffers |
JPH06348593A (ja) | 1993-06-03 | 1994-12-22 | Sumitomo Electric Ind Ltd | データ転送制御装置 |
US5870599A (en) * | 1994-03-01 | 1999-02-09 | Intel Corporation | Computer system employing streaming buffer for instruction preetching |
JP3068451B2 (ja) | 1996-01-26 | 2000-07-24 | 日本電気通信システム株式会社 | 電子計算機 |
JPH10207773A (ja) | 1997-01-27 | 1998-08-07 | Nec Corp | バス接続装置 |
US6138213A (en) * | 1997-06-27 | 2000-10-24 | Advanced Micro Devices, Inc. | Cache including a prefetch way for storing prefetch cache lines and configured to move a prefetched cache line to a non-prefetch way upon access to the prefetched cache line |
JP2000148584A (ja) | 1998-11-10 | 2000-05-30 | Matsushita Electric Ind Co Ltd | プリフェッチ方法および装置 |
US6418525B1 (en) * | 1999-01-29 | 2002-07-09 | International Business Machines Corporation | Method and apparatus for reducing latency in set-associative caches using set prediction |
JP2001256107A (ja) | 2000-03-10 | 2001-09-21 | Matsushita Electric Ind Co Ltd | データ処理装置 |
KR100459708B1 (ko) | 2002-03-29 | 2004-12-04 | 삼성전자주식회사 | 자동 레이저 출력 제어 기능을 가지는 레이저 다이오드 드라이버 |
JP4024247B2 (ja) | 2002-09-30 | 2007-12-19 | 株式会社ルネサステクノロジ | 半導体データプロセッサ |
US20050044320A1 (en) * | 2003-08-19 | 2005-02-24 | Sun Microsystems, Inc. | Cache bank interface unit |
KR100562906B1 (ko) * | 2003-10-08 | 2006-03-21 | 삼성전자주식회사 | 시리얼 플래시 메모리에서의 xip를 위한 우선순위기반의 플래시 메모리 제어 장치 및 이를 이용한 메모리관리 방법, 이에 따른 플래시 메모리 칩 |
US7502887B2 (en) * | 2003-11-12 | 2009-03-10 | Panasonic Corporation | N-way set associative cache memory and control method thereof |
-
2006
- 2006-03-17 US US11/910,831 patent/US7970998B2/en active Active
- 2006-03-17 JP JP2006522577A patent/JPWO2006109421A1/ja active Pending
- 2006-03-17 CN CNA2006800113978A patent/CN101156139A/zh active Pending
- 2006-03-17 WO PCT/JP2006/305389 patent/WO2006109421A1/ja active Application Filing
- 2006-03-23 TW TW095110088A patent/TW200639636A/zh unknown
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0281241A (ja) * | 1988-09-19 | 1990-03-22 | Matsushita Electric Ind Co Ltd | データ処理装置 |
JPH02236651A (ja) * | 1989-02-08 | 1990-09-19 | Hitachi Ltd | メモリバッファ |
JPH04303248A (ja) * | 1991-01-15 | 1992-10-27 | Philips Gloeilampenfab:Nv | マルチバッファデータキャッシュを具えているコンピュータシステム |
Also Published As
Publication number | Publication date |
---|---|
TW200639636A (en) | 2006-11-16 |
CN101156139A (zh) | 2008-04-02 |
US7970998B2 (en) | 2011-06-28 |
US20090077318A1 (en) | 2009-03-19 |
JPWO2006109421A1 (ja) | 2008-10-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7913041B2 (en) | Cache reconfiguration based on analyzing one or more characteristics of run-time performance data or software hint | |
JP4098347B2 (ja) | キャッシュメモリおよびその制御方法 | |
USRE45078E1 (en) | Highly efficient design of storage array utilizing multiple pointers to indicate valid and invalid lines for use in first and second cache spaces and memory subsystems | |
US8176255B2 (en) | Allocating space in dedicated cache ways | |
US9311246B2 (en) | Cache memory system | |
US4811209A (en) | Cache memory with multiple valid bits for each data indication the validity within different contents | |
JP6009589B2 (ja) | マルチレベルのキャッシュ階層におけるキャストアウトを低減するための装置および方法 | |
US8140759B2 (en) | Specifying an access hint for prefetching partial cache block data in a cache hierarchy | |
JP5087676B2 (ja) | 階層型キャッシュタグアーキテクチャ | |
JPH09190382A (ja) | コンピュータメモリシステムの競合キャッシュ | |
JPWO2010035426A1 (ja) | バッファメモリ装置、メモリシステム及びデータ転送方法 | |
WO2010032435A1 (ja) | キャッシュメモリ、メモリシステム、データコピー方法及びデータ書き換え方法 | |
US20100011165A1 (en) | Cache management systems and methods | |
EP2866148B1 (en) | Storage system having tag storage device with multiple tag entries associated with same data storage line for data recycling and related tag storage device | |
US20210042120A1 (en) | Data prefetching auxiliary circuit, data prefetching method, and microprocessor | |
JP4009306B2 (ja) | キャッシュメモリおよびその制御方法 | |
WO2006109421A1 (ja) | キャッシュメモリ | |
US7302530B2 (en) | Method of updating cache state information where stores only read the cache state information upon entering the queue | |
US7325101B1 (en) | Techniques for reducing off-chip cache memory accesses | |
JP6249120B1 (ja) | プロセッサ | |
Lee et al. | Application-adaptive intelligent cache memory system | |
JP5224959B2 (ja) | キャッシュシステム | |
US20120102271A1 (en) | Cache memory system and cache memory control method | |
Dandamudi | Cache Memory | |
JP2010191754A (ja) | キャッシュ記憶装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 200680011397.8; Country of ref document: CN |
WWE | Wipo information: entry into national phase | Ref document number: 2006522577; Country of ref document: JP |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase | Ref document number: 11910831; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
NENP | Non-entry into the national phase | Ref country code: RU |
122 | Ep: pct application non-entry in european phase | Ref document number: 06729379; Country of ref document: EP; Kind code of ref document: A1 |