WO2013098919A1 - Data processing apparatus - Google Patents
Data processing apparatus
- Publication number
- WO2013098919A1 (PCT/JP2011/080078)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- cache
- data
- tag
- address information
- ways
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0864—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using pseudo-associative means, e.g. set-associative or hashing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1008—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
- G06F11/1064—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices in cache or content addressable memories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
- G06F12/121—Replacement control using replacement algorithms
- G06F12/126—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning
- G06F12/127—Replacement control using replacement algorithms with special data handling, e.g. priority of data or instructions, handling errors or pinning using additional replacement algorithms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1028—Power efficiency
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/60—Details of cache memory
- G06F2212/6032—Way prediction in set-associative cache
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present invention relates to a data processing apparatus including a set associative type cache memory, for example, a technique effective when applied to a microcomputer.
- a cache memory is arranged between a large-capacity work memory and the CPU to enable high-speed access of the CPU to operand data and instructions.
- a set associative cache memory (hereinafter also simply referred to as a cache memory) is used as a circuit having a relatively small circuit scale and capable of achieving a relatively high cache hit rate.
- the set associative cache memory has a structure capable of storing a plurality of data with different tags in the same cache entry. When there are a plurality of pairs of tag ways and data ways and the number of cache entries of these ways is 256, for example, the cache entry of each way is selected by the index address information of the lower 8 bits of the address information.
- in each tag way, the multiple bits of tag address information above the index address information are stored as a cache tag; in each data way, each cache entry specified by the index address information holds the data of the address specified by the index address information and the stored tag address information.
- in a read operation, all tag ways are read using the index address information of the access address, and when a read cache tag matches the tag address information of the access address information, the corresponding cache entry of the data way paired with that tag way becomes the operation target of the data read or data write.
- the contents of the set associative cache memory described here are widely known.
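The conventional lookup described above can be sketched in Python (a hedged illustration, not the patent's circuitry; the 4-bit offset and 8-bit index field widths follow the 256-entry example above):

```python
# Sketch of a conventional set associative lookup: the indexed entry of
# every tag way is read in parallel and compared against the tag address.
# Field widths (4-bit offset, 8-bit index) follow the 256-entry example.

OFFSET_BITS = 4
INDEX_BITS = 8

def split_address(addr: int):
    """Split an access address into (tag, index, offset) fields."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

def lookup(tag_ways, addr):
    """Read the indexed entry of every way; hit if any stored tag matches."""
    tag, index, _ = split_address(addr)
    for way, tag_way in enumerate(tag_ways):
        if tag_way[index] == tag:   # stored cache tag (None if invalid)
            return way              # cache hit in this way
    return None                     # cache miss
```

Note that every way is read on every access; the scheme described below avoids exactly this parallel operation.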
- Patent Document 1 tries to realize low power consumption by enabling switching between a direct map format and a set associative format. Patent Document 2 determines the number of way divisions, accesses the ways in order, and thereby eliminates parallel indexing of a plurality of ways.
- Patent Document 3, which uses a parity check, eliminates the need for a parity check circuit for each memory array by performing the parity check on the access address data, thereby reducing the number of parity check circuits and trying to achieve low power consumption.
- however, Patent Document 3 only confirms whether there is an error in the access request address itself; no reduction can be expected in the circuitry that detects bit inversion errors in the held data. Patent Document 1 does not consider a parity check for undesired bit inversion errors. Patent Document 2 considers error detection regarding the transfer timing of data read from the data array, and does not consider detection of undesired bit inversion errors in the data held by the ways.
- An object of the present invention is to achieve low power consumption without sacrificing cache entry selection operation speed in terms of way selection in a set associative cache memory.
- Another object of the present invention is to reduce the scale of a circuit that detects a bit inversion error of a way in a set associative cache memory.
- a subset of ways is selected from a plurality of ways according to the value of selection data generated based on tag address information that is part of the address information, and the cache tag is read from the selected ways. Further, when performing a cache fill, the cache memory performs the cache fill on a cache entry selected from the subset of ways corresponding to the value of the selection data.
- parity data for the tag address information is used as the selection data for way selection: based on the value of the parity data, the ways from which a cache tag is read are selected, and the way of the cache entry on which a cache fill is performed is likewise selected.
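The summarized scheme can be sketched as follows (a hedged illustration, not the patent's circuit: 1-bit parity over the tag address restricts both the cache tag read and the fill victim to one half of the ways; the 4-bit offset / 8-bit index widths and even/odd way grouping follow the examples given elsewhere in the text):

```python
# Parity-restricted lookup and fill: the read target and the fill
# target always stay in the same parity-selected group of ways.

OFFSET_BITS, INDEX_BITS = 4, 8

def tag_of(addr: int) -> int:
    return addr >> (OFFSET_BITS + INDEX_BITS)

def index_of(addr: int) -> int:
    return (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)

def parity_of(value: int) -> int:
    return bin(value).count("1") & 1

def lookup_or_fill(tag_ways, addr, lru_pick=0):
    """Read tags only from the parity-selected half of the ways; on a
    miss, fill a victim chosen from the same half."""
    tag, index = tag_of(addr), index_of(addr)
    p = parity_of(tag)
    group = [w for w in range(len(tag_ways)) if w % 2 == p]
    for w in group:                        # only half the ways are read
        if tag_ways[w][index] == tag:
            return ("hit", w)
    victim = group[lru_pick % len(group)]  # fill stays inside the group
    tag_ways[victim][index] = tag
    return ("fill", victim)
```

Because a fill is only ever placed in the group matching the parity of its tag, a later read with an error-free address always looks in the correct group.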
- the scale of the circuit that detects the bit reversal error of the way in the set associative cache memory can be reduced.
- FIG. 1 is a block diagram illustrating a microcomputer as an embodiment of a data processing apparatus.
- FIG. 2 is a block diagram illustrating a schematic configuration of a memory mat when the function of the set associative cache memory is realized.
- FIG. 3 is a block diagram illustrating a basic configuration of the parity function for the tag way of the cache memory.
- FIG. 4 is a block diagram representatively showing a configuration focusing on a read operation system for a cache entry.
- FIG. 5 is a flowchart illustrating a cache operation flow of a read operation system for a cache entry.
- FIG. 6 is a block diagram illustrating a configuration focusing on a fill operation system combined with the configuration of the read operation system of FIG. 4.
- FIG. 7 is a flowchart illustrating a cache operation flow of a fill operation system for a cache entry.
- FIG. 8 is a block diagram illustrating a configuration focusing on a read operation system for a cache entry when the parity check function can be selectively turned on / off.
- FIG. 9 is a block diagram illustrating a configuration focusing on a fill operation system for a cache entry when the parity check function can be selectively turned on / off.
- FIG. 10 is a block diagram showing an example in which a configuration using a plurality of bits of parity data is applied to the configuration focusing on the read operation system for the cache entry of FIG.
- FIG. 11 is a block diagram illustrating a configuration focusing on the read operation system for a cache entry when two ways are stored for each memory block.
- FIG. 12 is a block diagram illustrating a configuration focusing on a fill operation system for a cache entry when two ways are stored for each memory block.
- FIG. 13 is an explanatory diagram exemplifying a configuration of a tag entry in which a virtual CPU number (Virtual CPU ID) for performing the processing of each thread is set when processing is performed in multithread.
- FIG. 14 is a block diagram illustrating a configuration focusing on a read operation system for a cache entry when the tag entry of FIG. 13 is used.
- FIG. 15 is a block diagram illustrating a configuration focusing on a fill operation system for a cache entry when the tag entry of FIG. 13 is used.
- FIG. 16 is a block diagram showing a specific example of the memory blocks 16, 16A, 15, 15A.
- FIG. 17 is an explanatory diagram illustrating the arrangement of tag entries stored in the memory cell array of TagWAY # 0 configured by the memory block 16 as an example of the tag way.
- FIG. 18 is an explanatory diagram illustrating the arrangement of data entries stored in the memory cell array of DataWAY # 0 configured by the memory block 16 as an example of the data way.
- FIG. 19 is an explanatory diagram illustrating the arrangement of tag entries stored in the memory cell arrays of TagWAY # 0 and TagWAY # 2 configured by the memory block 16A as another example of the tagway.
- FIG. 20 is an explanatory diagram illustrating the arrangement of data entries stored in the memory cell arrays of DataWAY # 0 and DataWAY # 2 configured by the memory block 16A as another example of the data way.
- FIG. 21 is an explanatory diagram illustrating the arrangement of LRU entries stored in the memory cell arrays of the LRU arrays 15 and 15A.
- FIG. 22 is an explanatory diagram showing the main aspects of the index operation mode for the tagway as a summary.
- a data processing apparatus (1) includes a set associative cache memory (3) that stores a plurality of cache entries in a plurality of ways.
- when the cache memory reads a cache tag, some ways are selected from the plurality of ways (12, 13) according to the value of selection data (PRTdat) generated based on tag address information (TAGadrs) that is part of the address information, and the cache tag is read from the selected ways using the index address in the address information.
- the cache memory performs the cache fill on the cache entry (14) selected from some ways according to the value of the selection data.
- the parity data that limits the range of ways to be used is also used for replacement of the cache entry. Therefore, assuming there is no error in the address information supplied to the cache memory, the read target of a cache tag is always a way corresponding to the value of the parity data for that address information, so there is no need to store a parity bit with each cache entry. It is therefore possible to suppress the malfunction in which a bit-inverted cache tag is erroneously processed as a normal cache tag, without requiring a circuit that generates comparison parity data from the read cache tag.
- even if bit inversion occurs, the way where the bit-inverted cache tag is located no longer corresponds to the value of the parity data, so it is not selected as a read target.
- the number of parity generation circuits can be halved while improving the reliability of bit inversion of the cache tag.
- further lower power consumption can be achieved in that the number of parity generation circuits can be halved.
- the cache memory compares each cache tag read from the ways with the tag address and determines whether the comparison results are all mismatches, exactly one match, or multiple matches. If multiple matches are determined, a cache error signal (41) is generated.
- if the tag address does not match due to bit inversion of the cache tag, this can be handled in the same way as a normal cache miss. Even if multi-bit inversion of a cache tag apparently produces a cache entry having the same cache tag as the regular one, such an abnormality can be dealt with by detecting the multiple matches.
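The three-way determination described here can be sketched as follows (a hypothetical helper modeling the tag comparison and multi-hit detection; the names are illustrative, not the patent's):

```python
def classify_tag_match(read_tags, tag_address):
    """Classify the way comparisons: all mismatch (normal miss), exactly
    one match (that entry becomes the operation target), or multiple
    matches (an apparent duplicate tag, e.g. from multi-bit inversion,
    raises the cache error signal)."""
    hits = [w for w, t in enumerate(read_tags) if t == tag_address]
    if not hits:
        return ("miss", None)
    if len(hits) == 1:
        return ("hit", hits[0])
    return ("cache_error", hits)   # corresponds to cache error signal (41)
```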
- Item 3 further includes an interrupt controller (9) for inputting the cache error signal as an exception factor or an interrupt factor.
- the cache memory sets the cache entry whose cache tag yielded the single match as the data operation target.
- the way includes a tag way (12) for storing the cache tag corresponding to the index address, and a data way (13) for storing data corresponding to the index address.
- the cache entry includes the cache tag and data corresponding thereto, and each of the plurality of tag ways is constituted by a memory block (16) in which activation or deactivation is selected for each tag way.
- the cache memory selects the subset of tag ways by activating memory blocks using the selection data.
- low power consumption can be realized by selectively deactivating a plurality of memory blocks in tagway units.
- the selection data is 1-bit parity data PRTdat for all bits of the tag address information which is a part of the address information
- the parity data of the first logical value is used for selecting half of the plurality of memory blocks, and the parity data of the second logical value is used for selecting the remaining half.
- the selection data may be multiple bits of parity data (PRTdat [1:0]) comprising a parity bit for each of a plurality of divided portions of the tag address information that is part of the address information, and the value of the parity data determines which tag way is selected from among the plurality of tag ways.
- the first mode is set when priority is given to low power consumption
- the second mode is set when priority is given to a high cache hit rate.
- this flexibility can be obtained with only a minimal change compared with a circuit configuration capable of only the first mode.
- the way includes a tag way for storing the cache tag corresponding to the index address, and a data way for storing data corresponding to the index address.
- the cache entry includes the cache tag and corresponding data.
- the plurality of tag ways are aggregated into one memory block (16A) for each predetermined number of ways, and the tag ways configured in the same memory block are selected by mutually different selection data values.
- when the cache memory reads a cache tag, it reads the cache tag using the selection data and the index address information.
- the sense amplifiers and buffer amplifiers included in one memory block can be shared, so one set suffices per memory block.
- the amount of leakage current, such as sub-threshold leakage in the inactive state, decreases as the total number of memory blocks decreases, which can contribute to further lowering power consumption.
- the selection data is 1-bit parity data (PRTdat) for all bits of the tag address information which is a part of the address information.
- the parity data of the first logical value is used for selecting one tag way in each of the memory blocks.
- the parity data of the second logical value is used for selecting the other tag way in each of the memory blocks.
- the selection data may be multiple bits of parity data (PRTdat [1:0]) comprising a parity bit for each of a plurality of divided portions of the tag address information that is part of the address information, and the value of the parity data determines the tag way to be selected from each memory block.
- an LRU data array (15, 15A) stores LRU data (LRU [1:0], LRU [2:0]) used when the cache memory determines a cache entry to be cache filled.
- the LRU data array has, for each index address, an area for storing multiple bits of history data indicating the usage history of the cache entries of the ways selected by the selection data.
- the cache memory selects a cache entry for performing cache fill based on the history data read from the LRU data array using the index address information and the corresponding selection data.
- since the selection data is used together with the history data to select the cache entry for the cache fill, the number of bits of the history data can be reduced by the number of bits of the selection data, which contributes to circuit scale reduction and power consumption reduction.
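As an illustration of this bit saving, consider a hypothetical 4-way configuration: the 1-bit parity halves the candidate group, so a single stored history bit per index suffices to pick the victim within the group (instead of the bits needed to order all four ways):

```python
def select_fill_way(lru_bit: int, parity: int) -> int:
    """Hypothetical 4-way example: parity 0 restricts the fill to the
    even-numbered ways {0, 2}, parity 1 to the odd-numbered ways {1, 3};
    one history bit then picks the way within the selected group."""
    return 2 * lru_bit + parity
```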
- each of the plurality of ways is configured by a memory block (16) whose activation or deactivation is selected for each way.
- when the cache memory reads a cache tag, it selects some ways by activating the corresponding memory blocks using the selection data.
- when the cache memory performs a cache fill, among the cache entries indicated by the index address in each memory block, the cache entry to be filled is selected using the history data (LRU [1:0]) read from the LRU data array (15) based on the index address information, together with the selection data (PRTdat) generated based on the tag address.
- the plurality of ways are configured to be aggregated into one memory block (16A) for each predetermined plurality, and the plurality of ways configured in the same memory block are selected by different selection data.
- when the cache memory reads a cache tag, which way in each activated memory block is selected is specified by the selection data, and which cache tag in the specified way is read is specified by the index address information in the address information.
- when the cache memory performs a cache fill, which memory block is selected is designated by the history data (LRU [2:0]) read from the LRU data array (15A) according to the index address information, which way is selected from the designated memory block is designated by the selection data (PRTdat), and which cache entry in the designated way is targeted for the cache fill is designated by the index address (IDXadrs).
- the value of the history data to be stored may be determined according to the value of the selection data, and which cache entry of which memory block is selected for a given value of the indexed history data may be determined by the value of the selection data and the index address information at that time.
- a data processing apparatus (1) includes a set associative cache memory (3) that stores a plurality of cache entries in a plurality of ways.
- when the cache memory reads a cache tag, some ways are selected from the plurality of ways according to the value of parity data (PRTdat) generated based on tag address information (TAGadrs) that is part of the address information, and the cache tag is read from the selected ways.
- each read cache tag is compared with the tag address, and it is determined whether the comparison results are all mismatches, exactly one match, or multiple matches; when multiple matches are determined, a cache error signal (41) is generated.
- when the cache memory performs a cache fill, it performs the cache fill on a cache entry selected from some ways according to the value of the selection data.
- the correspondence between the way holding the cache tag to be read in the cache operation and the way selected by the selection data is easily maintained, contributing to low power consumption.
- if the tag address does not match due to bit inversion of the cache tag, this can be handled in the same way as a normal cache miss. Even if multi-bit inversion of a cache tag apparently produces a cache entry having the same cache tag as the regular one, such an abnormality can be dealt with by detecting the multiple matches.
- a data processing apparatus (1) includes a set associative cache memory (3) for storing a plurality of cache entries in a plurality of ways.
- when the cache memory reads a cache tag, some ways are selected from the plurality of ways according to the value of parity data (PRTdat) generated based on tag address information (TAGadrs) that is part of the address information, and the cache tag is read from the selected ways using the index address information (IDXadrs).
- each read cache tag is compared with the tag address, and it is determined whether the comparison results are all mismatches, exactly one match, or multiple matches; when multiple matches are determined, a cache error signal (41) is generated.
- LRU data (LRU [1:0], LRU [2:0]) is held in an LRU data array (15, 15A).
- the LRU data array has, for each index address, an area for storing multiple bits of history data indicating the usage history of the cache entries of the ways selected by the parity data.
- the cache memory selects a cache entry for performing cache fill based on the history data read from the LRU data array using the index address information and the corresponding selection data.
- the correspondence between the way holding the cache tag to be read in the cache operation and the way selected by the selection data is easily maintained, contributing to low power consumption.
- if the tag address does not match due to bit inversion of the cache tag, this can be handled in the same way as a normal cache miss. Even if multi-bit inversion of a cache tag apparently produces a cache entry having the same cache tag as the regular one, such an abnormality can be dealt with by detecting the multiple matches.
- the number of bits of the history data can be reduced by the number of bits of the selection data, which contributes to reducing the circuit scale of the LRU data array and its power consumption.
- a data processing apparatus (1) includes a set associative cache memory (3) that uses a plurality of ways to store a plurality of cache entries.
- when the cache memory operates a way based on address information, it selects the cache entry to be operated on from a subset of ways corresponding to selection data (PRTdat) generated based on tag address information (TAGadrs) that is part of the address information.
- since the cache entry to be operated on for a cache tag read or a cache fill is selected from a subset of ways corresponding to the value of the selection data, all ways need not be operated in parallel, which contributes to low power consumption. Since the cache entry to be operated on is selected using the selection data, the correspondence between the way from which the cache tag is read and the way to be cache filled can easily be maintained.
- a data processing apparatus (1) includes a set associative cache memory (3) that uses a plurality of ways to store a plurality of cache entries.
- the cache memory reads the cache tag to be compared with the address tag from some ways corresponding to selection data generated based on tag address information (TAGadrs) that is part of the address information, and likewise selects the cache entry to be cache filled from those ways.
- TAGadrs tag address information
- since the cache entry to be operated on for a cache tag read or a cache fill is selected from some ways according to the value of the selection data, all ways need not be operated in parallel, which contributes to low power consumption. Since the cache entry to be operated on is selected using the selection data, the correspondence between the way from which the cache tag is read and the way to be cache filled can easily be maintained.
- a data processing apparatus (1) includes a set associative cache memory (3) that uses a plurality of ways to store a plurality of cache entries.
- when the cache memory reads a cache tag, which of the plurality of ways is selected is instructed according to the value of selection data (PRTdat) generated based on tag address information (TAGadrs) that is part of the address information, and which cache tag is read from the instructed way is instructed by the index address (IDXadrs) in the address information.
- the cache entry to be cache filled is selected according to the combination of the usage history (LRU [1:0], LRU [2:0]), referenced in index address units for the cache entries of all ways, and the value of the selection data (PRTdat).
- since the cache entry to be operated on for a cache tag read or a cache fill is selected from some ways according to the value of the selection data, all ways need not be operated in parallel, which contributes to low power consumption. Since the cache entry to be operated on is selected using the selection data, the correspondence between the way from which the cache tag is read and the way to be cache filled can easily be maintained. Further, since the selection data is used together with the history data to select the cache entry for the cache fill, the number of bits of the history data can be reduced by the number of bits of the selection data, contributing to circuit scale reduction and power consumption reduction.
- FIG. 1 illustrates a microcomputer (MCU) 1 as an embodiment of a data processing apparatus.
- the microcomputer 1 shown in the figure is not particularly limited, but is formed on a single semiconductor substrate such as single crystal silicon by using a CMOS integrated circuit manufacturing technique.
- the microcomputer 1 has, although not particularly limited, a CPU (central processing unit) 2 that fetches and decodes instructions, fetches the necessary operand data according to the decoding result, and performs arithmetic processing.
- the program execution form of the CPU 2 is applicable not only to single-thread but also to multi-thread execution; either the case where a single program is processed by multiple threads or the case where a plurality of programs are processed as multiple threads overall may apply.
- connected to the internal bus 4 are a representatively shown interrupt controller (INTC) 9, a RAM 5 composed of a static random access memory (SRAM), a direct memory access controller (DMAC) 6, an electrically rewritable flash memory (FLASH) 7, and another peripheral circuit (PRPHRL) 8.
- the peripheral circuit 8 includes a timer and an input / output port to the outside of the microcomputer 1.
- the RAM 5 and the FLASH 7 are used for storing programs and data, which are cached by the cache memory 3, for example.
- although not particularly limited, the cache memory 3 includes a cache control circuit (CACHCNT) 10 and a memory mat (MRYMAT) 11, and realizes the function of a set associative cache memory.
- FIG. 2 illustrates a schematic configuration of the memory mat 11 when the function of the set associative cache memory is realized.
- the set associative cache memory has a plurality of ways, and each way has a tag way (TagWAY) 12 and a data way (DataWAY) 13, and a plurality of cache entries (CachENTRY) 14 are formed in each way.
- the tag way (TagWAY) 12 has a cache tag, a valid bit, a lock bit, and the like for each cache entry 14.
- the data way (DataWAY) 13 holds cache data (a program executed by the CPU and operand data used by the CPU) corresponding to the cache tag for each cache entry 14.
- an example is a 4-way set associative cache memory in which one data way (DataWAY) 13 holds 16 bytes of cache data and the number of cache entries (CachENTRY) 14 is 256.
- in the access address, the least significant 4 bits are offset address information for selecting 32 bits (4 bytes) from the cache data of one data way (DataWAY) 13, the next upper 8 bits are index address information for selecting one cache entry (CachENTRY) 14, and the bits above them are tag address information.
- the tag address information corresponding to the cache data of a cache entry 14 is used as its cache tag.
- the LRU data array (LRUARY) 15 holds history data used as an index for selecting a cache entry (CachENTRY) 14 to be replaced.
- the history data is information for specifying a cache entry having the same index address that has not been used recently.
- the LRU data array (LRUARY) 15 has 256 pieces of history data, and the history data is accessed by index address information.
- 20 is a write path for the cache entry 14 and 21 is a read path for the cache entry.
- Reference numeral 22 denotes a selection path for selecting a way for the tag way 12 and the data way 13 and for indexing the way to be selected.
- 23 is a history data write path to the LRU array 15, 24 is a history data read path, and 25 is a history data selection path.
- the cache memory 3 has a parity function for the tag ways; the addition of this function is designed with circuit-scale simplification and low power consumption in mind.
- FIG. 3 illustrates a basic configuration of the parity function for the tag way of the cache memory.
- attention is paid to n tag ways TagWAY # 0 to TagWAY # n-1 in the n-way set associative cache memory.
- ACCadrs is access address information generated by the CPU.
- TAGadrs is tag address information included in the access address information ACCadrs
- IDXadrs is index address information included in the access address information ACCadrs.
- PRTYG: parity generation circuit
- TAGCMP: tag comparison circuit
- MLTHIT: multi-hit detection circuit
- the parity generation circuit 30 generates, for example, one parity bit over all bits of the tag address information TAGadrs as the parity data PRTdat. Whether even or odd parity is used is specified by an even/odd selection signal (ODSEL) 42. Specifically, when the even/odd selection signal 42 has logical value 1, the parity generation circuit takes the exclusive OR (EXOR) of all bits of the tag address information TAGadrs: parity data PRTdat of logical value 0 is output when the bits contain an even number of logical 1s, and parity data PRTdat of logical value 1 is output when they contain an odd number of logical 1s.
- when the even/odd selection signal 42 has logical value 0, the parity generation circuit takes the exclusive NOR (EXNOR) of all bits of the tag address information TAGadrs: parity data PRTdat of logical value 1 is output when the bits contain an even number of logical 1s, and parity data PRTdat of logical value 0 is output when they contain an odd number of logical 1s.
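The EXOR/EXNOR behavior described above can be sketched as follows (a hedged software model of the parity generation circuit 30, not RTL):

```python
def generate_parity(tagadrs: int, odsel: int) -> int:
    """Model of PRTdat generation: ODSEL=1 reduces all tag-address bits
    by XOR (output 0 for an even number of 1s, 1 for an odd number);
    ODSEL=0 outputs the complement (EXNOR: 1 for even, 0 for odd)."""
    p = 0
    while tagadrs:
        p ^= tagadrs & 1   # fold each bit into the running XOR
        tagadrs >>= 1
    return p if odsel == 1 else p ^ 1
```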
- Parity data PRTdat is supplied to the tagway 12.
- a logical value 1 of the parity data PRTdat instructs selection of the group of odd-numbered tag ways TagWAY # 1, TagWAY # 3, ..., and a logical value 0 instructs selection of the group of even-numbered tag ways TagWAY # 0, TagWAY # 2, .... Therefore, when a cache fill is performed, it is performed on a cache entry selected from the tag ways of the one group corresponding to the value of the parity data PRTdat.
- among the tag ways TagWAY # 0 to TagWAY # n-1, the tag ways of the one group corresponding to the value of the parity data PRTdat are activated, and the cache tags are read from the selected ways using the index address IDXadrs.
- one tagway TagWAY # 0 is used when the parity data PRTdat has a logical value 0
- the other tagway TagWAY # 1 is used when the parity data PRTdat has a logical value 1.
- the parity data PRTdat that limits the range of ways to be used is also used for replacement of cache entries by cache fill. If it is assumed that there is no error in the access address information ACCadrs supplied to the cache memory 3, the cache tag read target is always a way corresponding to the value of the parity data PRTdat of the tag address information TAGadrs, so it is not necessary to store a parity bit in each cache entry (CacheENTRY) 14. Therefore, a malfunction in which a bit-inverted cache tag is erroneously processed as a normal cache tag can be suppressed without requiring a circuit for generating comparison parity data from the read cache tag.
- a way holding a cache tag with bit inversion no longer corresponds to the original value of the parity data PRTdat, and is therefore not targeted for tag comparison.
- the reliability against bit inversion of the cache tag is improved while the number of parity generation circuits can be halved, and the halved number of parity generation circuits also contributes to lower power consumption.
- the tags are illustrated as CTAG # 0 to CTAG # n / 2.
- the output cache tags CTAG # 0 to CTAG # n/2 are compared with the tag address TAGadrs by the tag comparison circuit 31, and for each corresponding tag way a signal indicating the match or mismatch of the comparison result (hit way discrimination signal: HITWAY) 40 is output.
- the hit way discrimination signal 40 is supplied to the multi-hit detection circuit 32 and, although not shown in FIG. 3, is also used for selecting the cache entry indexed in the data way related to the hit. Based on the hit way discrimination signal 40, the multi-hit detection circuit 32 discriminates whether the comparison results are all mismatches, exactly one match, or plural matches, and generates a cache error signal (CERR) 41. A discrimination result of all mismatches is a cache miss, and a discrimination result of exactly one match is a cache hit.
- by generating the hit way discrimination signal 40 and the cache error signal 41, a mismatch of the comparison result with the tag address caused by bit inversion of the cache tag can be dealt with in the same way as a normal cache miss. Further, even if multi-bit inversion of a cache tag apparently produces a cache entry having the same cache tag as the regular one, such an abnormality can be dealt with by discriminating the plural matches. For example, if the cache error signal 41 is given to the interrupt controller 9 as an exception factor or an interrupt factor, then when a cache error considered to be caused by bit inversion occurs, the CPU 2 can flexibly perform processing corresponding to the situation through interrupt processing or exception processing.
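- the classification performed by the multi-hit detection circuit 32 can be sketched as follows (a behavioral model, not the circuit itself; the function name and list-based representation of the per-way hit signals are assumptions):

```python
def classify_hits(hitway: list) -> str:
    """Sketch of the multi-hit detection (MLTHIT) behavior.

    hitway is the per-way comparison result (1 = tag matched).
    All mismatches -> cache miss; exactly one match -> cache hit;
    plural matches (possible only through multi-bit tag inversion)
    -> cache error, corresponding to asserting CERR 41.
    """
    matches = sum(hitway)
    if matches == 0:
        return "miss"
    if matches == 1:
        return "hit"
    return "error"
```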
- FIG. 4 representatively shows a configuration focusing on a read operation system for a cache entry.
- the tag ways TagWAY # 0 to TagWAY # 3 store the cache tags corresponding to the index address IDXadrs, and the data ways DataWAY # 0 to DataWAY # 3 store the data corresponding to the index address IDXadrs. Each is constituted by a memory block 16 whose activation or deactivation is selected for each tag way and data way.
- cen0 to cen3 are enable signals (block enable signals) of the memory blocks 16 for the respective ways. When the corresponding block enable signal cen0 to cen3 is activated (for example, high level), the internal circuit of the memory block 16 is enabled: address decoding is performed in response to the input address signal, and the sense amplifier operation for the read data is enabled.
- when the parity data PRTdat has the logical value 0 (low level), the one way group including the tag ways TagWAY # 0, TagWAY # 1 and the data ways DataWAY # 0, DataWAY # 1 is activated.
- the way of the activated way group is indexed by the index address information IDXadrs, and the cache entry is selected.
- Reference numeral 52 conceptually shows a selection circuit that decodes the index address information IDXadrs and selects a corresponding cache line. In other figures, each way is described as including its function.
- the cache tags output from the tag ways in which a cache entry is selected are compared with the tag address information TAGadrs by the discrimination circuits 53 and 54, and according to the comparison results, hit way discrimination signals 40a and 40b, hit signals 55a and 55b, and multi-way hit signals 56a and 56b are generated.
- the discrimination circuits 53 and 54 have the functions of the tag comparison circuit 31 and the multi-hit detection circuit 32 described with reference to FIG.
- the hit way discrimination signals 40a and 40b cause the selectors 57a and 57b to select the output data of the data way corresponding to the tag way related to the match.
- the hit signals 55a and 55b are signals indicating whether or not one cache tag matches the tag address in each of the discrimination circuits 53 and 54.
- the multi-way hit signals 56a and 56b are signals indicating whether or not both cache tags match the tag address in the discrimination circuits 53 and 54, respectively.
- the selector 58 selects the output of the discrimination circuit 53 on the tag way side activated by the parity data PRTdat when its value is 0, and selects the output of the discrimination circuit 54 on the tag way side activated by the parity data PRTdat when its value is 1.
- the hit signal 55a or 55b selected by the selector 58 is ANDed with the read request signal RDreq and used as the cache hit signal CHIT.
- the multiple way hit signal 56 a or 56 b selected by the selector 58 is used as the cache error signal 41.
- when the parity data PRTdat has the value 0, the selector 59 selects the cache data selected and supplied on the side of the data ways DataWAY # 0 and DataWAY # 1 activated thereby, and when the parity data PRTdat has the value 1, it selects the cache data selected and supplied on the side of the data ways DataWAY # 2 and DataWAY # 3 activated thereby.
- the data selected by the selector 59 is output to the CPU 2 as the cache data CDAT related to the cache hit.
- the flip-flop (FF) 60 latches the input signal according to the memory cycle of the way, and secures a desired operation timing.
- the configurations other than TagWAY # 0 to TagWAY # 3 and DataWAY # 0 to DataWAY # 3 are included in the cache control circuit 10, for example.
- FIG. 4 illustrates data read as the data operation on the cache entry related to a hit, and the case where the data operation is a write is omitted; it should be understood that a write path in the direction opposite to the read path of the cache data CDAT is provided.
- FIG. 5 illustrates a read operation type cache operation flow for a cache entry.
- the parity data PRTdat of the tag address information TAGadrs is generated from the access address information ACCadrs (S1). If the value of the generated parity data PRTdat is 0 (S2), the cache tags of the tag ways TagWAY # 0 and TagWAY # 1 are read (S3), and it is determined from the read result whether neither cache tag is a hit (S4). If the tag way TagWAY # 0 is a hit (S5), the corresponding cache entry of the data way DataWAY # 0 is set as the read or write operation target (S6). If the tag way TagWAY # 1 is a hit (S7), the corresponding cache entry of the data way DataWAY # 1 is set as the read or write operation target (S8). If neither tag way TagWAY # 0 nor tag way TagWAY # 1 is a hit, a cache miss is determined.
- if the value of the parity data PRTdat is 1 in step S2, the cache tags of the tag ways TagWAY # 2 and TagWAY # 3 are read (S9), and it is determined from the read result whether neither cache tag is a hit (S10). If the tag way TagWAY # 2 is a hit (S11), the corresponding cache entry of the data way DataWAY # 2 is set as the read or write operation target (S12). If the tag way TagWAY # 3 is a hit (S13), the corresponding cache entry of the data way DataWAY # 3 is set as the read or write operation target (S14). If neither tag way TagWAY # 2 nor tag way TagWAY # 3 is a hit, a cache miss is determined.
- if a hit occurs in both tag ways in step S4 or step S10, a cache error is assumed.
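- the read operation flow of FIG. 5 (steps S1 through S14) can be sketched as follows. This is a behavioral model under assumptions: the tag ways are represented as dictionaries mapping an index to the stored tag, and the function name and return values are hypothetical.

```python
def cache_read(tag_adrs, idx, tagways, parity):
    """Sketch of the read flow of FIG. 5 (S1-S14).

    parity selects way group {0, 1} (value 0) or {2, 3} (value 1).
    Returns ('hit', way) on a single match, 'miss' on no match,
    and 'error' on a double hit (cache error).
    """
    group = (0, 1) if parity == 0 else (2, 3)          # S2: choose way group
    hits = [w for w in group if tagways[w].get(idx) == tag_adrs]  # S3/S9
    if len(hits) == 0:
        return "miss"                                  # S4/S10: neither way hit
    if len(hits) == 2:
        return "error"                                 # double hit -> cache error
    return ("hit", hits[0])                            # S5-S8 / S11-S14
```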
- FIG. 6 illustrates a configuration focusing on the fill operation system combined with the configuration of the read operation system of FIG.
- the LRU data array 15 stores LRU data LRUdat used as an index for specifying a cache entry to be cache filled by a pseudo LRU when the cache memory 3 determines a cache entry to be cache filled.
- the LRU data array 15 holds, for each index address IDXadrs, 2-bit history data LRU [1:0] indicating the usage history of the ways selected according to the value of the parity data PRTdat (the individual bits are represented as LRU [0] and LRU [1]).
- the history data LRU [1: 0] is initialized to a logical value 0 by purging the cache entry.
- the history data LRU [0] indicates which valid cache entry, that of way # 0 (tag way TagWAY # 0 and data way DataWAY # 0) or that of way # 1 (tag way TagWAY # 1 and data way DataWAY # 1), is given priority for replacement.
- the history data LRU [1] indicates which valid cache entry, that of way # 2 (tag way TagWAY # 2 and data way DataWAY # 2) or that of way # 3 (tag way TagWAY # 3 and data way DataWAY # 3), is given priority for replacement.
- LRU [0] stored in the LRU data array 15 corresponding to the index address information is inverted to the logical value 1 when the cache entry of the way # 1 corresponding to the index address information is used.
- LRU [1] is inverted to logical value 1 when the cache entry of way # 3 corresponding to the index address information is used.
- for each bit of the history data LRU [0] and LRU [1], for example, a logical value 0 makes the way with the smaller way number the replacement target, and a logical value 1 makes the way with the larger way number the replacement target. Each time a replacement is performed, the corresponding history data bit is inverted and updated.
- for example, for LRU [0], if the initial value is the logical value 0, way # 0 is used for the first fill after initialization, and LRU [0] is inverted to the logical value 1. In the next replacement, way # 1, which has not been used recently, becomes the replacement target, and LRU [0] is inverted back to the logical value 0. Thereafter, LRU [0] is toggled in the same manner.
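- the per-pair pseudo-LRU choice described above can be sketched as follows (a behavioral model; the function name and encoding of the history bit as an integer are assumptions):

```python
def pick_fill_way(lru_bit: int, parity: int) -> int:
    """Sketch of the pseudo-LRU way choice.

    With parity 0 the candidates are ways #0/#1 governed by LRU[0];
    with parity 1 they are ways #2/#3 governed by LRU[1].
    Bit value 0 -> replace the lower-numbered way, 1 -> the higher one;
    the caller inverts the bit after each replacement.
    """
    base = 0 if parity == 0 else 2
    return base + lru_bit

# usage: toggle after a fill
lru = 0
way = pick_fill_way(lru, 0)   # way #0 is filled first after initialization
lru ^= 1                      # next time way #1 is preferred
```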
- Reference numeral 70 conceptually shows a selection circuit that decodes the index address information IDXadrs and selects the corresponding history data LRU [1: 0].
- in the other figures, the LRU data array 15 is described as including this function.
- the way selection circuit 71 is a way selection circuit for cache fill.
- the way selection circuit 71 selects the way used for the cache fill based on the history data LRU [1: 0] read from the LRU data array 15 using the index address information IDXadrs, the parity data PRTdat, the valid bit V, and the like.
- a block selection signal is generated for this purpose.
- the block selection signal corresponds to block enable signals cen0 to cen3 supplied to the memory block 16, respectively.
- the selection logic of the block selection signal is as follows.
- the way selection circuit 71 refers in order to the valid bits V of the cache entries indexed in the tag ways TagWAY # 0 and TagWAY # 1.
- a cache entry whose valid bit V is invalid is set as the cache fill target, and the way selection circuit 71 sets the block selection signal (one of cen0 and cen1) of the one tag way (TagWAY # 0 or TagWAY # 1) to which that cache entry belongs to the logical value 1.
- the other three block selection signals indicate the logical value 0, and the corresponding tag ways are each in a non-selected state.
- the way selection circuit 71 further updates the history data LRU [0] so that the non-selected one of the tag ways TagWAY # 0 and TagWAY # 1 is given higher priority in the next cache fill.
- when both cache entries are valid, the way selection circuit 71 sets the block selection signal (cen0 or cen1) to the logical value 1 so that the way prioritized by the value of the history data bit LRU [0] is selected, and the indexed cache entry becomes the target of the cache fill.
- each of the other three block selection signals puts the corresponding tag way into a non-selected state. In this case, the way selection circuit 71 performs an update that inverts the value of the history data LRU [0].
- the history data is updated by, for example, rewriting history data with the update information 73.
- the block selection signals cen0 to cen3 are supplied to the corresponding ways TagWAY # 0 to TagWAY # 3 and DataWAY # 0 to DataWAY # 3 via the AND gates 72 when the cache fill request signal FLreq is activated.
- each valid bit V indicates the validity of the corresponding cache entry.
- These valid bits V are stored in a storage device (such as a flip-flop) (not shown) separate from the memory block 16.
- four valid bits V are identified from 1024 valid bits and supplied to the way selection circuit 71.
- the way selection circuit 71 specifies the two valid bits V to be referred to based on the parity data PRTdat. When the parity data PRTdat has the logical value 0, the two valid bits V corresponding to TagWAY # 0 and TagWAY # 1 are referred to; when the parity data PRTdat has the logical value 1, the two valid bits V corresponding to TagWAY # 2 and TagWAY # 3 are referred to.
- FIG. 7 illustrates a cache operation flow of a fill operation system for a cache entry.
- the parity data PRTdat of the tag address information TAGadrs is generated from the fill address information FLadrs corresponding to the access address information ACCadrs (S21). If the value of the generated parity data PRTdat is 0 (S22) and way # 0 is empty (invalidated by the valid bit V) (S23), a cache fill is performed on way # 0 (S26). If way # 0 is not empty but way # 1 is empty (S24), a cache fill is performed on way # 1 (S27). When both way # 0 and way # 1 are valid, the priority of the ways is determined by the value of the history bit LRU [0] (S25).
- if way # 0 has priority, the cache fill is performed on way # 0 (S26); if way # 1 has priority, the cache fill is performed on way # 1 (S27). After step S26 or S27, the history bit LRU [0] is updated as described above (S28).
- if it is determined in step S22 that the value of the parity data PRTdat is 1 and way # 2 is empty (invalidated by the valid bit V) (S29), a cache fill is performed on way # 2 (S32). If way # 2 is not empty but way # 3 is empty (S30), a cache fill is performed on way # 3 (S33). When both way # 2 and way # 3 are valid, the priority of the ways is determined by the value of the history bit LRU [1] (S31): if way # 2 has priority, the cache fill is performed on way # 2 (S32), and if way # 3 has priority, on way # 3 (S33). After step S32 or S33, the history bit LRU [1] is updated as described above (S34).
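- the fill flow of FIG. 7 (S21 through S34) can be sketched as follows. This is a behavioral model under assumptions: valid bits and history bits are represented as Python lists, and the update rule points the history bit at the other way of the pair after every fill, consistent with the update described above.

```python
def select_fill_way(parity, valid, lru):
    """Sketch of the fill flow of FIG. 7 (S21-S34).

    valid: per-way valid bits V (list of 4 booleans);
    lru:   mutable [LRU[0], LRU[1]].
    Empty (invalid) ways are filled first; otherwise the history bit
    decides, and it is updated after the fill.
    """
    lo, hi, bit = (0, 1, 0) if parity == 0 else (2, 3, 1)
    if not valid[lo]:
        way = lo                      # S23/S29: lower way empty
    elif not valid[hi]:
        way = hi                      # S24/S30: upper way empty
    else:
        way = hi if lru[bit] else lo  # S25/S31: both valid, use history
    # S28/S34: after the fill, prioritize the non-selected way next time
    lru[bit] = 1 if way == lo else 0
    return way
```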
- the number of bits of the history data corresponds to the number of bits of the parity data PRTdat, and in this respect it contributes to reduction of the circuit scale of the LRU data array 15 and of the power consumption.
- the cache control circuit 10, excluding the tag ways TagWAY # 0 to TagWAY # 3 and the data ways DataWAY # 0 to # 3, is configured by combining the configuration shown in FIG. 4 and the configuration shown in FIG. 6.
- the method of supplying the block enable signal to each of the tag ways TagWAY # 0 to TagWAY # 3 and the data ways DataWAY # 0 to # 3 is as follows.
- an OR circuit (not shown) that performs an OR operation between the block enable signal cen0 input to the tag way TagWAY # 0 shown in FIG. 4 and the block enable signal cen0 input to the tag way TagWAY # 0 shown in FIG. 6 and supplies the operation result to the tag way TagWAY # 0 is provided corresponding to the tag way TagWAY # 0.
- the tag way TagWAY # 0 thus receives a block enable signal of logical value 1 when it is selected in either a read operation or a fill operation.
- similarly, an OR circuit (not shown) is provided that performs an OR operation between the block enable signal cen0 input to the data way DataWAY # 0 shown in FIG. 4 and the block enable signal cen0 input to the data way DataWAY # 0 shown in FIG. 6 and supplies the operation result to the data way DataWAY # 0.
- FIG. 8 illustrates a configuration focusing on the read operation system for the cache entry when the parity check function can be selectively turned on / off.
- a mode signal MOD instructing selection / non-selection of the parity check function is input to the configuration of FIG. 4, and the way activation control form, the generation form of the cache hit signal CHIT, and the output control form of the cache data CDAT differ according to the value of the mode signal MOD.
- FIG. 9 illustrates a configuration that focuses on a fill operation system for a cache entry when the parity check function described above can be selectively turned on / off.
- a mode signal MOD for instructing selection / non-selection of the parity check function is added to the configurations of FIGS. 4 and 6, and the history data update function and the selection function of the cache fill target way by the pseudo LRU using the history data differ according to the value of the mode signal MOD. These differences will be described below. Components having the same functions as those in FIGS. 4 and 6 are denoted by the same reference numerals, and detailed description thereof is omitted.
- a logical value 1 of the mode signal MOD indicates that the parity check function is selected, and a logical value 0 of the mode signal MOD indicates that the parity check function is not selected.
- an inverted signal of the mode signal MOD is supplied via an OR gate 82 to one input side of the AND gates that receive the inverted data or non-inverted data of the parity data PRTdat.
- when the parity check function is not selected, the cache error signal 41 is fixed at the inactive level.
- the history bit LRU [0] indicates the priority between the way # 0 and the way # 1
- the history bit LRU [1] indicates the priority between the way # 2 and the way # 3
- the history bit LRU [ 2] indicates the priority between the ways # 0 and # 1 and the ways # 2 and # 3.
- when the mode signal MOD has the logical value 0, the 3-bit history data LRU [2:0] is significant.
- the cache control circuit 10, excluding the tag ways TagWAY # 0 to TagWAY # 3 and the data ways DataWAY # 0 to # 3, is configured by combining the configuration shown in FIG. 8 and the configuration shown in FIG. 9.
- the method of supplying the block enable signals to the tagways TagWAY # 0 to TagWAY # 3 and the dataways DataWAY # 0 to # 3 is the same as in the case of FIG. 4 and FIG.
- Multi-bit parity data: a case where multi-bit parity data is used in the cache memory 3 will be described.
- FIG. 10 shows an example in which the configuration using a plurality of bits of parity data is applied to the configuration focusing on the read operation system for the cache entry in FIG.
- in FIG. 3, the parity data was 1-bit parity data for all the bits of the tag address information TAGadrs that is a part of the access address information ACCadrs; parity data of logical value 0 was used to select one half of the plurality of memory blocks 16 of all the ways, and parity data of logical value 1 to select the remaining half. As a result, the power consumption required to read the cache tags for address comparison can be roughly halved.
- the number of ways is eight, that is, eight tag ways TagWAY # 0 to TagWAY # 7 and eight data ways DataWAY # 0 to DataWAY # 7 are each constituted by a memory block 16.
- 2-bit parity data PRTdat [1: 0] is generated and used for way selection.
- the first bit (PRTdat [0]) of the parity data PRTdat [1:0] is the parity bit for the lower 10 bits of the tag address information TAGadrs, and the second bit (PRTdat [1]) is the parity bit for the upper 10 bits of the tag address information TAGadrs.
- the 2-bit parity data PRTdat [1:0] is decoded by the decoder 90, and the ways to be selected from the plurality of ways are determined according to the decoding result.
- PRTdat [1:0] = 10 activates the memory blocks 16 of the tag ways and data ways of ways # 4 and # 5, and PRTdat [1:0] = 11 activates the memory blocks 16 of the tag ways and data ways of ways # 6 and # 7. That is, the number of memory blocks activated by a cache read operation for tag comparison is reduced to 1/4 of the total, and further low power consumption is realized.
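- the 2-bit parity decode described above can be sketched as follows (a behavioral model; the function name is hypothetical, and the 20-bit tag split into two 10-bit halves follows the description above):

```python
def select_way_pair(tag_adrs: int) -> tuple:
    """Sketch of 2-bit parity way selection for the 8-way case (FIG. 10).

    PRTdat[0] is the parity of the lower 10 bits of TAGadrs and
    PRTdat[1] that of the upper 10 bits; the decoded 2-bit value
    activates one pair of ways out of four.
    """
    p0 = bin(tag_adrs & 0x3FF).count("1") & 1          # parity of low 10 bits
    p1 = bin((tag_adrs >> 10) & 0x3FF).count("1") & 1  # parity of high 10 bits
    pair = (p1 << 1) | p0
    return (2 * pair, 2 * pair + 1)   # e.g. PRTdat[1:0] = 10 -> ways #4, #5
```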
- the cache tags indexed in the activated tag ways among TagWAY # 0 to TagWAY # 7 are compared with the tag address information TAGadrs by the discrimination circuits 53, 54, 53A, 54A in the units activated by the parity data PRTdat [1:0]. The comparison results 55a, 56a, 55b, 56b, 55aA, 56aA, 55bA and 56bA are selected by the selector 58A according to the value of the parity data PRTdat [1:0], and the cache hit signal CHIT and the cache error signal 41 are formed. In the case of a cache hit, the data of the cache entry related to the hit is operated on. In the cache operation responding to the read access illustrated in FIG. 10, when a cache hit occurs, the selected data on the data way related to the hit is selected via the selection circuits 57a, 57b, 57aA, 57bA and output as the cache data CDAT.
- the number of bits of the parity data PRTdat is not limited to 2 bits, and the number of divisions for tag address information may be increased to 3 bits or more as required.
- FIG. 11 illustrates a configuration focusing on a read operation system for a cache entry when two ways are stored for each memory block.
- FIG. 12 illustrates a configuration focusing on the fill operation system for a cache entry when two ways are stored for each memory block.
- the difference from FIGS. 4 and 6 is that the number of memory blocks is halved, and accordingly, the selection control form of the cache entry to be operated and the discrimination control circuit for the cache tag are different.
- the two aggregated ways are selected by different values of the parity data PRTdat.
- the logical value 0 of the parity data PRTdat means way selection of way numbers # 0 and # 1
- the logical value 1 of the parity data PRTdat means way selection of way numbers # 2 and # 3
- the way numbers # 0 and # 2 are stored in the same memory block
- the way numbers # 1 and # 3 are stored in the same memory block
- a plurality of ways are not selected in one memory block.
- tagway TagWAY # 0 and TagWAY # 2 are set as one set
- tagway TagWAY # 1 and TagWAY # 3 are set as one set
- the data ways DataWAY # 0 and DataWAY # 2 are set as one set, the data ways DataWAY # 1 and DataWAY # 3 as another set, and each set is stored in a separate memory block 16A.
- the number of entries held by one memory block 16A is 512 entries, which is twice that of 256 in the case of FIGS.
- the lower bits addr [7:0] are set as the 8-bit index address information IDXadrs ([11:4]).
- the most significant bit addr [8] is set as parity data PRTdat.
- the parity data PRTdat, which determines the most significant bit addr [8], has significance as a selection bit indicating which of the two ways collected in one memory block 16A is selected.
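- the composition of the 9-bit memory block address described above can be sketched as follows (a behavioral model of the assumed bit layout: index address in addr [7:0], parity data in addr [8]):

```python
def block_address(idx_adrs: int, prtdat: int) -> int:
    """Sketch of the memory block address for FIG. 11/12.

    The lower 8 bits carry the index address information and the most
    significant bit addr[8] carries the parity data PRTdat, selecting
    one of the two ways packed into the same 512-entry block 16A.
    """
    return (prtdat << 8) | (idx_adrs & 0xFF)
```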
- the read request signal RDreq can be used in common for the block enable signals cena and cenb of the four memory blocks 16A.
- the circuit 100 that compares the cache tags read from the two memory blocks with the tag address information TAGadrs for discrimination may be configured to have the functions of the discrimination circuits 53 and 54 and the selector 58 of FIG. 4.
- the number of sense amplifiers SA included in each memory block 16A as the amplifier circuits for outputting the cache tags and cache data indexed in the memory block 16A can be equal to the number of sense amplifiers included in each memory block 16 in FIG. 4. In short, the total number of sense amplifiers included in all the memory blocks 16A in the configuration of FIG. 11 is approximately half the total number of sense amplifiers included in all the memory blocks 16 in the configuration of FIG. 4. Accordingly, the amount of standby current consumed by the sense amplifiers is roughly halved.
- the cache fill way selection circuit 71A determines the way on which to perform the cache fill with the same logic as the way selection circuit 71 described in FIG. 6, based on the parity data PRTdat, LRU [1:0] and the valid bits V; its output selection signal 101 is set to the logical value 0 when a way of number # 0 or # 2 is to be filled, and to the logical value 1 when a way of number # 1 or # 3 is to be filled.
- the output selection signal 101 is converted into complementary-level block enable signals cena and cenb via the inverter 102 and the AND gate 103.
- the block enable signal cena is the block enable signal for the two memory blocks 16A assigned to the tag way and data way of number # 0 or # 2, and the block enable signal cenb is the block enable signal for the two memory blocks 16A assigned to the tag way and data way of number # 1 or # 3. Which way is selected within each memory block 16A is determined by using the 1-bit parity data PRTdat as the address bit a [8] of the memory block 16A as described above.
- the cache fill way selection result by the pseudo LRU is thus made exactly the same as in FIG. 6. Since the other configurations, such as the history data update by the update information 73 for the LRU data array 15 and the meaning of the history data, are the same as those in FIG. 6, detailed description thereof is omitted.
- the cache control circuit 10, excluding the tag ways TagWAY # 0 to TagWAY # 3 and the data ways DataWAY # 0 to # 3, is configured by combining the configuration shown in FIG. 11 and the configuration shown in FIG. 12.
- the method of supplying the block enable signals to the tagways TagWAY # 0 to TagWAY # 3 and the dataways DataWAY # 0 to # 3 is the same as in the case of FIG. 4 and FIG.
- the present invention can also be applied to the configuration using the parity bits.
- the number of bits of the parity data PRTdat is not limited to 2 bits, and may be 3 bits or more.
- the tag address information is also a part of the physical address information. It goes without saying that the tag address information may be a part of the logical address information when the address information is a logical address. However, in the case of a logical address, information (α) necessary for converting the logical address into a physical address may be included. In this case, it is advisable to include the information α in the tag address information. Naturally, the parity data is then generated for the tag address information including the information α. For example, when processing is performed in multiple threads, the virtual CPU number (Virtual CPU ID) of the CPU that processes each thread is used as the information α.
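- the generation of parity over the extended tag address information can be sketched as follows. This is a behavioral model under assumptions: the function name is hypothetical, the field widths follow the tag entry of FIG. 13 (Logical Address [31:12], Virtual CPU ID [3:0]), and even parity is assumed.

```python
def tag_parity_with_vcpu(logical_adrs: int, vcpu_id: int) -> int:
    """Sketch: when the cache tag is a logical address, the conversion
    information alpha (here a 4-bit Virtual CPU ID) is concatenated
    into the tag address information before the parity is generated.
    """
    tag = (((logical_adrs >> 12) & 0xFFFFF) << 4) | (vcpu_id & 0xF)
    return bin(tag).count("1") & 1   # even parity over the combined tag
```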
- a tag entry includes a valid bit V indicating the validity of the cache entry, a lock bit L indicating whether or not replacement of the cache entry is prohibited, a Virtual CPU ID indicating the virtual CPU number, and a Logical Address [31:12] indicating the logical-address cache tag.
- FIG. 14 illustrates a configuration focusing on the read operation system for the cache entry when the tag entry of FIG. 13 is used.
- the Virtual CPU ID and Logical Address [31:12] of the tag entry are stored in the illustrated tag way, and the valid bit V and the lock bit L are stored in another storage device (such as a flip-flop) not illustrated.
- the parity generation circuit 30B generates the parity data PRTdat using Logical Address [31:12] and Virtual CPU ID [3:0], and the discrimination circuits 53B and 54B treat Logical Address [31:12] and Virtual CPU ID [3:0] as the tag address information TAGadrs and compare them with the indexed cache tags.
- FIG. 15 illustrates a configuration focusing on the fill operation system for the cache entry when the tag entry of FIG. 13 is used.
- the difference from FIG. 6 is that the entry information to be written as a cache tag is Virtual CPU ID [3: 0] and Logical Address [31:12] corresponding to the tag address information TAGadrs.
- the cache control circuit 10, excluding the tag ways TagWAY # 0 to TagWAY # 3 and the data ways DataWAY # 0 to # 3, is configured by combining the configuration shown in FIG. 14 and the configuration shown in FIG. 15.
- the method of supplying the block enable signals to the tagways TagWAY # 0 to TagWAY # 3 and the dataways DataWAY # 0 to # 3 is the same as in the case of FIG. 4 and FIG.
- FIG. 16 shows a specific example of the memory blocks 16, 16A, 15, 15A.
- the memory blocks 16, 16A, 15, 15A are configured, for example, as static random access memories (SRAM), and a plurality of static memory cells MC (one of which is representatively shown in the figure) are arranged in a matrix in the memory array (MCA) 110.
- the selection terminal of the memory cell is connected to the word line WL shown representatively, and the data input / output terminal of the memory cell is connected to the complementary bit lines BLt and BLb shown representatively.
- the word line WL is driven by a word driver (WDRV) 111.
- the row address decoder (RDEC) 112 decodes the row address signal RAdd to generate a word line selection signal, and the word driver (WDRV) 111 drives the word line WL according to the selection level of the generated word line selection signal.
- the plurality of complementary bit lines BLt and BLb are selected by the column switch circuit (CSW) 113 and made conductive to the sense amplifier (SA) 114 or the write amplifier (WA) 115.
- the column address decoder (CDEC) 116 decodes the column address signal CAdd to generate a complementary bit line selection signal, and the column switch circuit 113 that receives the generated complementary bit line selection signal selects the complementary bit lines BLt and BLb.
- Timing controller (TCNT) 117 receives block enable signal CEN, write enable signal WEN, and address signal Add from the outside.
- the block enable signal CEN corresponds to the block enable signal cen described above.
- the timing controller 117 is activated when the block enable signal CEN is set to the selection level. That is, upon receiving the address signal Add input from the outside, it supplies the row address signal RAdd to the row address decoder 112 and the column address signal CAdd to the column address decoder 116, and the address decoders 112 and 116 can perform their selection operations. Further, the sense amplifier 114 is activated in response to a read operation instruction by the write enable signal WEN, and the write amplifier 115 is activated in response to a write operation instruction by the write enable signal WEN.
- the activated sense amplifier 114 amplifies weak read data transferred from the memory cell MC to the complementary bit lines BLt and BLb connected via the column switch 113, and generates a data signal Dout.
- the activated write amplifier 115 transfers the input data Din to the complementary bit lines BLt and BLb connected via the column switch 113, and writes it to the memory cell MC selected by the word line WL.
- the number of bits of the output data Dout and the input data Din is the number of bits corresponding to the number of bits of the tag entry in the case of the tagway, and the number of bits corresponding to the number of bits of the data entry in the case of the dataway.
- FIG. 17 illustrates the arrangement of tag entries stored in the memory cell array (MCA) 110 of TagWAY # 0 configured by the memory block 16 as an example of the tagway.
- the total number of tag entries is 256, and four tag entries are stored per row of the memory cell array, and the number of rows is 64.
- One tag entry consists of M bits (M is an arbitrary integer), and FIG. 17 illustrates 20-bit tag address information [31:12]. That is, FIG. 17 schematically represents the tagways of FIGS. 4, 6, 8, 9, 14, and 15.
- one tag entry out of 256 is selected by the 8-bit address signal Add (index address information [11:4]). Specifically, the four tag entries on one word line are selected by the upper 6 bits (row address) of the address signal Add, and one of the four tag entries is selected by the column switch using the lower 2 bits (column address).
- the complementary bit lines BLt and BLb of one selected tag entry are electrically coupled to the sense amplifier 114.
- TagWAT # 1 to TagWAY # 3 including the memory block 16 are the same.
- FIG. 18 exemplifies the arrangement of data entries stored in the memory cell array (MCA) 110 of DataWAY # 0 configured by the memory block 16 as an example of the data way.
- the total number of data entries is 256, and four data entries are stored per row of the memory cell array, and the number of rows is 64.
- One data entry, that is, one line of the data way consists of L bits.
- L is an arbitrary integer, which is larger than the tagway entry size.
- one of 256 data entries is selected by an 8-bit address signal Add (index address information [11: 4]).
- the four data entries on one word line are selected by the upper 6 bits (row address) of the address signal Add, and one of the four data entries is selected by the column switch using the lower 2 bits (column address).
- the complementary bit lines BLt and BLb of one selected data entry are electrically coupled to the sense amplifier 114.
- FIG. 19 illustrates the arrangement of tag entries stored in the memory cell array (MCA) 110 of TagWAY # 0 and TagWAY # 2 configured by the memory block 16A as another example of the tagway.
- the total number of tag entries is 512, and four tag entries are stored per row of the memory cell array, and the number of rows is 128.
- One tag entry consists of M bits (M is an arbitrary integer), and FIG. 19 illustrates 20-bit tag address information [31:12]. That is, in FIG. 19, the tagways of FIGS. 11 and 12 are schematically represented.
- one tag entry out of 512 is selected by 8-bit address signal Add (index address information [11: 4]) and 1-bit parity data PRTdat (addr [8]).
- TagWAT # 1 and TagWAY # 3 configured by the memory block 16A are the same.
- FIG. 20 exemplifies the arrangement of data entries stored in the memory cell array (MCA) 110 of DataWAY#0 and DataWAY#2 configured by the memory block 16A as another example of the data way.
- the total number of data entries is 512, and four data entries are stored per row of the memory cell array, and the number of rows is 128.
- One data entry, that is, one line of the data way consists of L bits.
- L is an arbitrary integer, which is larger than the tagway entry size.
- one data entry out of 512 is selected by the 8-bit address signal Add (index address information [11:4]) and the 1-bit parity data PRTdat (addr[8]).
- FIG. 21 illustrates the arrangement of LRU entries stored in the memory cell array (MCA) 110 of the LRU arrays 15 and 15A.
- the total number of LRU entries is 256, and four LRU entries are stored per row of the memory cell array, and the number of rows is 64.
- One LRU entry consists of 2 bits or 3 bits, depending on the configuration; 9 bits are required in the case of the configuration of the corresponding figure.
- one of the 256 LRU entries is selected by the 8-bit address signal Add (index address information [11:4]). Specifically, the four LRU entries on one word line are selected by the upper 6 bits (row address) of the address signal Add, and one of the four LRU entries is selected by the column switch using the lower 2 bits (column address).
- the complementary bit lines BLt and BLb of one selected LRU entry are electrically coupled to the sense amplifier 114.
- FIG. 22 summarizes the main aspects of the index operation mode for the tagway.
- An operation for four tagways TagWAY # 0 to TagWAY # 3 is taken as an example.
- in the mode in which all of the way-unit memory blocks 16 are activated in an index operation, the four memory blocks 16 are accessed simultaneously, so the resulting power consumption is the largest compared with the following operation modes.
- the number of memory blocks 16 used for storing the tagways TagWAY # 0 to TagWAY # 3 is as described above.
- the number of memory blocks 16 to be operated when detecting a cache hit or cache miss is two.
- The tagways assigned to one memory block 16A are tagways that are never selected simultaneously for the same value of the parity data PRTdat. That is, tagways TagWAY#0 and TagWAY#1 form one pair, and tagways TagWAY#2 and TagWAY#3 form the other pair.
- when the parity data PRTdat is 0, the two memory blocks 16A storing the tagways TagWAY#0 and TagWAY#2 are accessed simultaneously, and the required tagway is selected from one of the memory blocks.
- when the parity data PRTdat is 1, the two memory blocks 16A in which the tagways TagWAY#1 and TagWAY#3 are stored are accessed simultaneously, and the desired tagway is selected from one of the memory blocks. Since two memory blocks are operated and each memory block 16A has twice the storage capacity of a memory block 16, the power consumption is expected to be almost the same as in the selection mode that selects memory blocks in way units. Accordingly, the overall power consumption is considered to be smaller than in the selection mode that selects all way-unit memory blocks. Power consumption is strongly affected by the sense amplifier operation and, in particular, by standby currents such as the subthreshold leakage current.
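The pairing described above can be summarized in a small sketch: both shared blocks 16A are enabled, but the parity value picks exactly one way per block, so only half of the four ways are read. This is an illustrative model of the selection rule, not the claimed circuit:

```python
def selected_ways(parity_bit):
    """With TagWAY#0/#1 packed into one block 16A and TagWAY#2/#3 into
    the other, the parity value selects one way per block; the ways of
    a pair are never selected simultaneously."""
    assert parity_bit in (0, 1)
    pairs = ((0, 1), (2, 3))  # (ways in block A, ways in block B)
    return tuple(pair[parity_bit] for pair in pairs)
```

For parity 0 this yields ways #0 and #2 (one per block); for parity 1, ways #1 and #3.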
- the tagway and the dataway have been described as being indexed in parallel, but the present invention is not limited to this.
- the selection data is not limited to parity data, and a specific bit of index address information may be used when the parity check function is unnecessary.
- the data configuration of the cache entry, the data configuration of the tag entry, the number of bits thereof, and the like can be changed as appropriate.
- the tag address information is not limited to the virtual CPU number/logical address pair of FIG. 13; it can be changed as appropriate, for example to an address space number (ASID)/logical address pair, according to the configuration of the CPU's virtual address spaces and of the data processing threads.
- the microcomputer is not limited to a single CPU core, but can be applied to a multi-CPU core.
- index address information may be configured using a virtual CPU number corresponding to a multi-thread for each CPU core.
- the cache memory may be any of an instruction cache, a data cache, and a unified cache, and is applicable to either a primary cache or a secondary cache.
- parity data is generated for the tag address TAGadrs, one or more tagways and the corresponding dataways are selected from the plurality of tagways based on the parity data, and the other tagways and corresponding dataways are deselected; however, low power consumption is realized even if the data for selecting a tagway is generated from the tag address TAGadrs by an arbitrary method. For example, as a simple method, any one of the 20 bits of the tag address TAGadrs can be used as the way selection signal.
- the present invention can be widely applied to semiconductor integrated circuits that perform caching using a set associative method, to microcomputers having a primary or secondary cache, to semiconductor integrated circuits such as a system-on-chip (SoC), to modularized semiconductor devices, and the like.
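As a minimal sketch of the simplest form of selection-data generation described above, a 1-bit parity can be computed over the 20-bit tag address TAGadrs[31:12] and used as PRTdat (even parity is assumed here for illustration; the document also allows using any single tag-address bit instead):

```python
def tag_parity(tag_addrs):
    """1-bit even parity over the 20-bit tag address TAGadrs[31:12],
    usable as the way-selection data PRTdat."""
    assert 0 <= tag_addrs < (1 << 20)
    return bin(tag_addrs).count("1") & 1  # 1 if the popcount is odd
```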
1 Microcomputer (MCU)
2 CPU (Central Processing Unit)
3 Cache memory (CACHMRY)
4 Internal bus
9 Interrupt controller (INTC)
5 Random access memory (RAM)
6 Direct memory access controller (DMAC)
7 Flash memory (FLASH)
8 Other peripheral circuits (PRPHRL)
10 Cache control circuit (CACHCNT)
11 Memory mat (MRYMAT)
12 Tagway (TagWAY)
13 Dataway (DataWAY)
14 Cache entry (CachENTRY)
15 LRU data array (LRUARY)
16 Memory block
TagWAY#0 to TagWAY#n-1 Tagways
ACCadrs Access address information
TAGadrs Tag address information
IDXadrs Index address information
30 Parity generation circuit (PRTYG)
31 Tag comparison circuit (TAGCMP)
32 Multi-hit detection circuit (MLTHIT)
PRTdat Parity data
cen0 to cen3, cena, cenb Enable signals (block enable signals)
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
Description
First, an overview of representative embodiments of the invention disclosed in the present application will be given. Reference numerals in the drawings referred to in parentheses in this overview merely illustrate examples of what is included in the concepts of the components to which they are attached.
A data processing device (1) according to a representative embodiment of the present invention includes a set associative cache memory (3) that stores a plurality of cache entries in a plurality of ways. When reading a cache tag, the cache memory selects some of the plurality of ways (12, 13) according to the value of selection data (PRTdat) generated based on tag address information (TAGadrs) that is part of address information, and reads a cache tag from the selected ways using an index address in the address information. When performing a cache fill, the cache memory performs the fill on a cache entry (14) selected from among the some ways corresponding to the value of the selection data.
In Item 1, the cache memory generates parity data (PRTdat) for the tag address information that is part of the address information and uses it as the selection data.
In Item 1, the cache memory compares each cache tag read from the ways with the tag address, determines whether the comparison results are all mismatches, exactly one match, or a plurality of matches, and generates a cache error signal (41) when a plurality of matches is determined.
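The all-mismatch / single-match / multiple-match determination performed by the tag comparison and multi-hit detection circuits (31, 32) can be sketched as follows; this is an illustrative model of the decision logic, not the claimed circuit:

```python
def classify_hits(read_tags, tag_addrs):
    """Compare each cache tag read from the selected ways with the tag
    address and classify the result: a miss (no match), a normal hit
    (exactly one match), or the multi-hit case that raises the cache
    error signal."""
    matches = sum(1 for t in read_tags if t == tag_addrs)
    if matches == 0:
        return "miss"
    if matches == 1:
        return "hit"
    return "multi-hit-error"  # would assert the cache error signal (41)
```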
In Item 3, the data processing device further has an interrupt controller (9) that receives the cache error signal as an exception cause or an interrupt cause.
In Item 3, the cache memory makes the cache entry of the cache tag whose comparison result is the exactly-one match the target of the data operation.
In Item 1, the ways have tagways (12) that store the cache tags corresponding to the index address, and dataways (13) that store data corresponding to the index address. A cache entry includes the cache tag and its corresponding data, and each of the plurality of tagways is configured by a memory block (16) whose activation or deactivation is selected per tagway. When reading a cache tag, the cache memory selects the some tagways by activating memory blocks using the selection data.
In Item 6, the selection data is 1-bit parity data PRTdat for all bits of the tag address information that is part of the address information; parity data of a first logical value is used to select one half of the plurality of memory blocks, and parity data of a second logical value is used to select the remaining half of the plurality of memory blocks.
In Item 6, the selection data is multi-bit parity data (PRTdat[1:0]) consisting of a parity bit for each of a plurality of divided portions of the tag address information that is part of the address information, and the value of the parity data determines the tagway to be selected from among the plurality of tagways.
In Item 6, the cache memory has a first mode (PBS=on) in which some tagways are selected from the plurality of ways by the selection data to read cache tags, and a second mode (PBS=off) in which all tagways are targets for reading cache tags, and receives a mode selection signal (MOD) that selects the first mode or the second mode.
In Item 1, the ways have tagways that store the cache tags corresponding to the index address, and dataways that store data corresponding to the index address. A cache entry includes the cache tag and its corresponding data. The plurality of tagways are aggregated, a predetermined plural number at a time, into single memory blocks (16A), and the plural tagways configured in the same memory block are selected by mutually different selection data. When reading a cache tag, the cache memory reads the cache tag using the selection data and the index address information.
In Item 10, the selection data is 1-bit parity data (PRTdat) for all bits of the tag address information that is part of the address information. Parity data of a first logical value is used to select one tagway in each memory block. Parity data of a second logical value is used to select the other tagway in each memory block.
In Item 10, the selection data is multi-bit parity data (PRTdat[1:0]) consisting of a parity bit for each of a plurality of divided portions of the tag address information that is part of the address information, and the value of the parity data determines the tagway to be selected from within each memory block.
In Item 1, the data processing device has an LRU data array (15, 15A) that stores LRU data (LRU[1:0], LRU[2:0]) used as an index for identifying, by pseudo-LRU, the cache entry to be the target of a cache fill when the cache memory determines the cache entry on which to perform the fill. The LRU data array has, for each index address for cache entries, an area storing multi-bit history data indicating the usage history of each of the some ways selected by the selection data. The cache memory selects the cache entry on which to perform the cache fill based on the history data read from the LRU data array using the index address information and on the corresponding selection data.
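A minimal sketch of combining the selection data with the pseudo-LRU history to pick the fill way. The pairing of ways (0/2 for one parity value, 1/3 for the other) follows the 4-way example elsewhere in this document, but the bit assignment within LRU[1:0] is an assumption made here purely for illustration:

```python
def choose_fill_way(lru_bits, parity_bit):
    """Pick the way to refill: the parity bit names the eligible pair of
    ways, and one history bit per pair (hypothetical encoding of
    LRU[1:0]) says which member of the pair was used least recently."""
    assert 0 <= lru_bits < 4 and parity_bit in (0, 1)
    pair = (0, 2) if parity_bit == 0 else (1, 3)
    lru_for_pair = (lru_bits >> parity_bit) & 1  # history bit of that pair
    return pair[lru_for_pair]
```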
In Item 13, each of the plurality of ways is configured by a memory block (16) whose activation or deactivation is selected per tagway. When reading a cache tag, the cache memory performs the selection of some ways by the selection data through activation of memory blocks using the selection data. When performing a cache fill, the cache memory selects, from among the cache entries pointed to by the index address in each memory block, the cache entry to fill by means of the multi-bit history data (LRU[1:0]) read from the LRU data array (15) based on the index address information and the selection data (PRTdat) generated based on the tag address.
In Item 13, the plurality of ways are aggregated, a predetermined plural number at a time, into single memory blocks (16A), and the plural ways configured in the same memory block are selected by mutually different selection data. When reading a cache tag, which way in each activated memory block is to be selected is designated for the cache memory by the selection data, and which cache tag in the designated way is to be selected is designated by the index address information in the address information. When performing a cache fill, which memory block is to be selected is designated by the multi-bit history data (LRU[2:0]) read from the LRU data array (15A) by the index address information, which way is to be selected from the designated memory block is designated by the selection data (PRTdat), and which cache entry in the designated way is to be the target of the cache fill is designated by the index address (IDXadrs).
A data processing device (1) according to another embodiment of the present invention includes a set associative cache memory (3) that stores a plurality of cache entries in a plurality of ways. When reading a cache tag, which some of the plurality of ways are to be selected is designated for the cache memory according to the value of parity data (PRTdat) generated based on tag address information (TAGadrs) that is part of address information, and which cache tag is to be read from the designated ways is designated by the index address information in the address information. The read cache tags are compared with the tag address, it is determined whether the comparison results are all mismatches, exactly one match, or a plurality of matches, and a cache error signal (41) is generated when a plurality of matches is determined. When performing a cache fill, the cache memory performs the fill on a cache entry selected from among the some ways corresponding to the value of the selection data.
A data processing device (1) according to still another embodiment of the present invention includes a set associative cache memory (3) that stores a plurality of cache entries in a plurality of ways. When reading a cache tag, which some of the plurality of ways are to be selected is designated for the cache memory according to the value of parity data (PRTdat) generated based on tag address information (TAGadrs) that is part of address information, and which cache tag is to be read from the designated ways is designated by the index address information (IDXadrs) in the address information. The read cache tags are compared with the tag address, it is determined whether the comparison results are all mismatches, exactly one match, or a plurality of matches, and a cache error signal (41) is generated when a plurality of matches is determined. The cache memory has an LRU data array (15, 15A) that stores LRU data (LRU[1:0], LRU[2:0]) used as an index for identifying, by pseudo-LRU, the cache entry to be the target of a cache fill when determining the cache entry on which to perform the fill. The LRU data array has, for each index address for cache entries, an area storing multi-bit history data indicating the usage history of each of the some ways selected by the parity data. The cache memory selects the cache entry on which to perform the cache fill based on the history data read from the LRU data array using the index address information and on the corresponding selection data.
A data processing device (1) according to still another embodiment of the present invention includes a set associative cache memory (3) that uses a plurality of ways for storing a plurality of cache entries. When operating on a way based on address information, the cache memory selects the cache entry to be operated on from among the some ways corresponding to selection data (PRTdat) generated based on tag address information (TAGadrs) that is part of the address information.
A data processing device (1) according to still another embodiment of the present invention includes a set associative cache memory (3) that uses a plurality of ways for storing a plurality of cache entries. The cache memory reads the cache tag to be compared with the address tag from among the some ways corresponding to selection data generated based on tag address information (TAGadrs) that is part of the address information, and selects the cache entry on which to perform a cache fill from among those ways.
A data processing device (1) according to still another embodiment of the present invention includes a set associative cache memory (3) that uses a plurality of ways for storing a plurality of cache entries. When reading a cache tag, which of the plurality of ways is to be selected is indicated for the cache memory according to the value of selection data (PRTdat) generated based on tag address information (TAGadrs) that is part of address information, and which cache tag is to be read from the indicated ways is indicated by the index address (IDXadrs) in the address information. When performing a cache fill, the cache memory selects the cache entry on which to perform the fill according to the combination of the usage history (LRU[1:0], LRU[2:0]), referenced in units of the index address, of the cache entries of all ways and the value of the selection data (PRTdat).
The embodiments will now be described in further detail. In all drawings for describing the embodiments, elements having the same function are given the same reference numerals, and repeated description thereof is omitted.
FIG. 1 illustrates a microcomputer (MCU) 1 as one embodiment of the data processing device. The microcomputer 1 shown in the figure is, although not particularly limited thereto, formed on a single semiconductor substrate of, for example, single-crystal silicon using CMOS integrated circuit manufacturing technology.
FIG. 3 illustrates the basic configuration of the parity function for the tagways of the cache memory. Attention is given here to the n tagways TagWAY#0 to TagWAY#n-1 of an n-way set associative cache memory.
A case is described here in which activation/deactivation control of the memory block of each way is performed using parity data.
Next, a case is described in which the above-described parity check function (also simply denoted PBS) can be selectively turned on and off.
A case is described in which multi-bit parity data is used in the cache memory 3.
Next, a case is described in which a plurality of ways are stored per memory block and way selection is performed. FIG. 11 illustrates a configuration focusing on the read operation system for cache entries when two ways are stored per memory block. FIG. 12 illustrates a configuration focusing on the fill operation system for cache entries in the same case. The differences from FIGS. 4 and 6 are that the number of memory blocks is halved and that, accordingly, the selection control form for the cache entries to be operated on and the determination control circuit for the cache tags differ. These differences are described below; components having the same functions as in FIGS. 4 and 6 are given the same reference numerals, and their detailed description is omitted.
If the address information is a physical address, the tag address information is part of that physical address information. Needless to say, when the address information is a logical address, the tag address information may likewise be part of that logical address information; in the case of a logical address, however, information (α) required when translating the logical address into a physical address may also be involved. In that case it is advantageous to include the information α in the tag address information, and parity data is naturally generated for the tag address information including α. For example, when processing is performed with multiple threads, the virtual CPU number (Virtual CPU ID) identifying the thread being processed serves as the information α. FIG. 13 illustrates the configuration of the tag entry in that case. In FIG. 13, the tag entry has a valid bit V indicating the validity of the cache entry, a lock bit L indicating whether replacement of the cache entry is prohibited, a Virtual CPU ID indicating the virtual CPU number, and Logical Address[31:12] serving as the logical address cache tag.
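The FIG. 13 tag-entry layout described above can be sketched as a simple field packing. The field widths chosen here (a 2-bit Virtual CPU ID and a 20-bit logical-address tag) are assumptions for illustration; the document leaves the exact widths open:

```python
def pack_tag_entry(valid, lock, vcpu_id, logical_addr):
    """Pack the FIG. 13 tag-entry fields into one word:
    [23] valid bit V, [22] lock bit L, [21:20] Virtual CPU ID,
    [19:0] Logical Address[31:12] (the logical address cache tag)."""
    assert valid in (0, 1) and lock in (0, 1)
    assert 0 <= vcpu_id < 4 and 0 <= logical_addr < (1 << 20)
    return (valid << 23) | (lock << 22) | (vcpu_id << 20) | logical_addr
```

Because the Virtual CPU ID is part of the tag entry, any parity generated over the tag address information would cover it as well, as described above.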
FIG. 16 shows a specific example of the memory blocks 16, 16A, 15, and 15A. The memory blocks 16, 16A, 15, and 15A are configured, for example, as static random access memories (SRAM), and a plurality of static memory cells MC (one representative cell is shown in the figure) are arranged in a matrix in the memory array (MCA) 110. The selection terminal of each memory cell is connected to the representatively shown word line WL, and the data input/output terminals of the memory cell are connected to the representatively shown complementary bit lines BLt and BLb.
FIG. 22 summarizes the main aspects of the index operation modes for the tagways. The operation for the four tagways TagWAY#0 to TagWAY#3 is taken as an example. In the mode in which all of the way-unit memory blocks (BLCK) 16 are activated in an index operation, the four memory blocks 16 are accessed simultaneously, so the resulting power consumption is the largest compared with the following operation modes.
Claims (20)
- A data processing device comprising a set associative cache memory that stores a plurality of cache entries in a plurality of ways, wherein
when reading a cache tag, the cache memory selects some of the plurality of ways according to the value of selection data generated based on tag address information that is part of address information, and reads a cache tag from the selected ways using an index address in the address information, and
when performing a cache fill, the cache memory performs the cache fill on a cache entry selected from among the some ways corresponding to the value of the selection data. - The data processing device according to claim 1, wherein the cache memory generates parity data for the tag address information that is part of the address information and uses it as the selection data.
- The data processing device according to claim 1, wherein the cache memory compares each cache tag read from the ways with the tag address, determines whether the comparison results are all mismatches, exactly one match, or a plurality of matches, and generates a cache error signal when a plurality of matches is determined.
- The data processing device according to claim 3, further comprising an interrupt controller that receives the cache error signal as an exception cause or an interrupt cause.
- The data processing device according to claim 3, wherein the cache memory makes the cache entry of the cache tag whose comparison result is the exactly-one match the target of the data operation.
- The data processing device according to claim 1, wherein the ways have tagways that store the cache tags corresponding to the index address and dataways that store data corresponding to the index address, a cache entry includes the cache tag and its corresponding data, each of the plurality of tagways is configured by a memory block whose activation or deactivation is selected per tagway, and
when reading a cache tag, the cache memory selects the some tagways by activating memory blocks using the selection data. - The data processing device according to claim 6, wherein the selection data is 1-bit parity data for all bits of the tag address information that is part of the address information, parity data of a first logical value being used to select one half of the plurality of memory blocks and parity data of a second logical value being used to select the remaining half of the plurality of memory blocks.
- The data processing device according to claim 6, wherein the selection data is multi-bit parity data consisting of a parity bit for each of a plurality of divided portions of the tag address information that is part of the address information, and the value of the parity data determines the tagway to be selected from among the plurality of tagways.
- The data processing device according to claim 6, wherein the cache memory has a first mode in which some tagways are selected from the plurality of ways by the selection data to read cache tags, and a second mode in which all tagways are targets for reading cache tags, and receives a mode selection signal that selects the first mode or the second mode.
- The data processing device according to claim 1, wherein the ways have tagways that store the cache tags corresponding to the index address and dataways that store data corresponding to the index address, a cache entry includes the cache tag and its corresponding data, the plurality of tagways are aggregated, a predetermined plural number at a time, into single memory blocks, the plural tagways configured in the same memory block being selected by mutually different selection data, and
when reading a cache tag, the cache memory reads the cache tag using the selection data and the index address information. - The data processing device according to claim 10, wherein the selection data is 1-bit parity data for all bits of the tag address information that is part of the address information, parity data of a first logical value being used to select one tagway in each memory block and parity data of a second logical value being used to select the other tagway in each memory block.
- The data processing device according to claim 10, wherein the selection data is multi-bit parity data consisting of a parity bit for each of a plurality of divided portions of the tag address information that is part of the address information, and the value of the parity data determines the tagway to be selected from within each memory block.
- The data processing device according to claim 1, comprising an LRU data array that stores LRU data used as an index for identifying, by pseudo-LRU, the cache entry to be the target of a cache fill when the cache memory determines the cache entry on which to perform the cache fill, wherein
the LRU data array has, for each index address for cache entries, an area storing multi-bit history data indicating the usage history of each of the some ways selected by the selection data, and
the cache memory selects the cache entry on which to perform the cache fill based on the history data read from the LRU data array using the index address information and on the corresponding selection data. - The data processing device according to claim 13, wherein each of the plurality of ways is configured by a memory block whose activation or deactivation is selected per tagway,
when reading a cache tag, the cache memory performs the selection of some ways by the selection data through activation of memory blocks using the selection data, and
when performing a cache fill, the cache memory selects, from among the cache entries pointed to by the index address in each memory block, the cache entry to fill by means of the multi-bit history data read from the LRU data array based on the index address information and the selection data generated based on the tag address. - The data processing device according to claim 13, wherein the plurality of ways are aggregated, a predetermined plural number at a time, into single memory blocks, the plural ways configured in the same memory block being selected by mutually different selection data,
when reading a cache tag, which way in each activated memory block is to be selected is designated for the cache memory by the selection data, and which cache tag in the designated way is to be selected is designated by the index address information in the address information, and
when performing a cache fill, which memory block is to be selected is designated for the cache memory by the multi-bit history data read from the LRU data array by the index address information, which way is to be selected from the designated memory block is designated by the selection data, and which cache entry in the designated way is to be the target of the cache fill is designated by the index address. - A data processing device comprising a set associative cache memory that stores a plurality of cache entries in a plurality of ways, wherein
when reading a cache tag, which some of the plurality of ways are to be selected is designated for the cache memory according to the value of parity data generated based on tag address information that is part of address information, which cache tag is to be read from the designated ways is designated by the index address information in the address information, the read cache tags are compared with the tag address, it is determined whether the comparison results are all mismatches, exactly one match, or a plurality of matches, and a cache error signal is generated when a plurality of matches is determined, and
when performing a cache fill, the cache memory performs the cache fill on a cache entry selected from among the some ways corresponding to the value of the selection data. - A data processing device comprising a set associative cache memory that stores a plurality of cache entries in a plurality of ways, wherein
when reading a cache tag, which some of the plurality of ways are to be selected is designated for the cache memory according to the value of parity data generated based on tag address information that is part of address information, which cache tag is to be read from the designated ways is designated by the index address information in the address information, the read cache tags are compared with the tag address, it is determined whether the comparison results are all mismatches, exactly one match, or a plurality of matches, and a cache error signal is generated when a plurality of matches is determined,
the cache memory has an LRU data array that stores LRU data used as an index for identifying, by pseudo-LRU, the cache entry to be the target of a cache fill when determining the cache entry on which to perform the cache fill,
the LRU data array has, for each index address for cache entries, an area storing multi-bit history data indicating the usage history of each of the some ways selected by the parity data, and
the cache memory selects the cache entry on which to perform the cache fill based on the history data read from the LRU data array using the index address information and on the corresponding selection data. - A data processing device comprising a set associative cache memory that uses a plurality of ways for storing a plurality of cache entries, wherein
when operating on a way based on address information, the cache memory selects the cache entry to be operated on from among the some ways corresponding to selection data generated based on tag address information that is part of the address information. - A data processing device comprising a set associative cache memory that uses a plurality of ways for storing a plurality of cache entries, wherein
the cache memory reads the cache tag to be compared with the address tag from among the some ways corresponding to selection data generated based on tag address information that is part of the address information, and selects the cache entry on which to perform a cache fill from among those ways. - A data processing device comprising a set associative cache memory that uses a plurality of ways for storing a plurality of cache entries, wherein
when reading a cache tag, which of the plurality of ways is to be selected is indicated for the cache memory according to the value of selection data generated based on tag address information that is part of the address information, and which cache tag is to be read from the indicated ways is indicated by the index address in the address information, and
when performing a cache fill, the cache memory selects the cache entry on which to perform the cache fill according to the combination of the usage history, referenced in units of the index address, of the cache entries of all ways and the value of the selection data.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201180075908.3A CN104011692B (zh) | 2011-12-26 | 2011-12-26 | 数据处理装置 |
PCT/JP2011/080078 WO2013098919A1 (ja) | 2011-12-26 | 2011-12-26 | データ処理装置 |
EP11878346.3A EP2799997B1 (en) | 2011-12-26 | 2011-12-26 | Data processing device |
US14/367,925 US9495299B2 (en) | 2011-12-26 | 2011-12-26 | Data processing device utilizing way selection of set associative cache memory based on select data such as parity data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/080078 WO2013098919A1 (ja) | 2011-12-26 | 2011-12-26 | データ処理装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013098919A1 true WO2013098919A1 (ja) | 2013-07-04 |
Family
ID=48696489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/080078 WO2013098919A1 (ja) | 2011-12-26 | 2011-12-26 | データ処理装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9495299B2 (ja) |
EP (1) | EP2799997B1 (ja) |
CN (1) | CN104011692B (ja) |
WO (1) | WO2013098919A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017503298A (ja) * | 2014-12-14 | 2017-01-26 | ヴィア アライアンス セミコンダクター カンパニー リミテッド | アドレス・タグ・ビットに基づく動的キャッシュ置換ウェイ選択 |
JP2017503299A (ja) * | 2014-12-14 | 2017-01-26 | ヴィア アライアンス セミコンダクター カンパニー リミテッド | モードに応じてセットの1つ又は複数を選択的に選択するように動的に構成可能であるマルチモード・セット・アソシエイティブ・キャッシュ・メモリ |
JP2017507442A (ja) * | 2014-12-14 | 2017-03-16 | ヴィア アライアンス セミコンダクター カンパニー リミテッド | モードに応じてウェイの全部又はサブセットに選択的に割り当てるように動的に構成可能であるマルチモード・セット・アソシエイティブ・キャッシュ・メモリ |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101912910A (zh) * | 2010-06-22 | 2010-12-15 | 武汉科技大学 | 一种低塑性、低硬化指数薄板的拉深方法 |
US9239797B2 (en) * | 2013-08-15 | 2016-01-19 | Globalfoundries Inc. | Implementing enhanced data caching and takeover of non-owned storage devices in dual storage device controller configuration with data in write cache |
US9846648B2 (en) * | 2015-05-11 | 2017-12-19 | Intel Corporation | Create page locality in cache controller cache allocation |
CN105335247B (zh) * | 2015-09-24 | 2018-04-20 | 中国航天科技集团公司第九研究院第七七一研究所 | 高可靠系统芯片中Cache的容错结构及其容错方法 |
CN109074336B (zh) * | 2016-02-29 | 2022-12-09 | 瑞萨电子美国有限公司 | 用于对微控制器内的数据传输进行编程的系统和方法 |
US10156887B2 (en) | 2016-09-29 | 2018-12-18 | Qualcomm Incorporated | Cache memory clock generation circuits for reducing power consumption and read errors in cache memory |
KR102321332B1 (ko) * | 2017-05-30 | 2021-11-04 | 에스케이하이닉스 주식회사 | 반도체 장치 및 그의 구동 방법 |
US10564856B2 (en) * | 2017-07-06 | 2020-02-18 | Alibaba Group Holding Limited | Method and system for mitigating write amplification in a phase change memory-based storage device |
KR102593757B1 (ko) * | 2018-09-10 | 2023-10-26 | 에스케이하이닉스 주식회사 | 메모리 시스템 및 메모리 시스템의 동작방법 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04205449A (ja) * | 1990-11-30 | 1992-07-27 | Matsushita Electric Ind Co Ltd | キャッシュ装置 |
JPH07168760A (ja) * | 1993-12-14 | 1995-07-04 | Fujitsu Ltd | キャッシュ制御装置 |
JP2000132460A (ja) | 1998-10-21 | 2000-05-12 | Hitachi Ltd | 半導体集積回路装置 |
WO2002008911A1 (fr) * | 2000-07-24 | 2002-01-31 | Hitachi,Ltd | Systeme de traitement de donnees |
JP2002236616A (ja) | 2001-02-13 | 2002-08-23 | Fujitsu Ltd | キャッシュメモリシステム |
JP2010026716A (ja) | 2008-07-17 | 2010-02-04 | Toshiba Corp | キャッシュメモリ制御回路及びプロセッサ |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5666513A (en) | 1996-01-05 | 1997-09-09 | Unisys Corporation | Automatic reconfiguration of multiple-way cache system allowing uninterrupted continuing processor operation |
JP2000200221A (ja) * | 1998-10-30 | 2000-07-18 | Nec Corp | キャッシュメモリ装置及びその制御方法 |
US6708294B1 (en) * | 1999-09-08 | 2004-03-16 | Fujitsu Limited | Cache memory apparatus and computer readable recording medium on which a program for controlling a cache memory is recorded |
JP4753549B2 (ja) | 2004-05-31 | 2011-08-24 | パナソニック株式会社 | キャッシュメモリおよびシステム |
US7673102B2 (en) | 2006-05-17 | 2010-03-02 | Qualcomm Incorporated | Method and system for maximum residency replacement of cache memory |
US20080040576A1 (en) | 2006-08-09 | 2008-02-14 | Brian Michael Stempel | Associate Cached Branch Information with the Last Granularity of Branch instruction in Variable Length instruction Set |
CN101620572B (zh) * | 2008-07-02 | 2011-06-01 | 上海华虹Nec电子有限公司 | 非易失性内存及控制方法 |
US8145985B2 (en) * | 2008-09-05 | 2012-03-27 | Freescale Semiconductor, Inc. | Error detection schemes for a unified cache in a data processing system |
US8549383B2 (en) * | 2011-08-24 | 2013-10-01 | Oracle International Corporation | Cache tag array with hard error proofing |
-
2011
- 2011-12-26 EP EP11878346.3A patent/EP2799997B1/en active Active
- 2011-12-26 US US14/367,925 patent/US9495299B2/en active Active
- 2011-12-26 CN CN201180075908.3A patent/CN104011692B/zh not_active Expired - Fee Related
- 2011-12-26 WO PCT/JP2011/080078 patent/WO2013098919A1/ja active Application Filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04205449A (ja) * | 1990-11-30 | 1992-07-27 | Matsushita Electric Ind Co Ltd | キャッシュ装置 |
JPH07168760A (ja) * | 1993-12-14 | 1995-07-04 | Fujitsu Ltd | キャッシュ制御装置 |
JP2000132460A (ja) | 1998-10-21 | 2000-05-12 | Hitachi Ltd | 半導体集積回路装置 |
WO2002008911A1 (fr) * | 2000-07-24 | 2002-01-31 | Hitachi,Ltd | Systeme de traitement de donnees |
JP2002236616A (ja) | 2001-02-13 | 2002-08-23 | Fujitsu Ltd | キャッシュメモリシステム |
JP2010026716A (ja) | 2008-07-17 | 2010-02-04 | Toshiba Corp | キャッシュメモリ制御回路及びプロセッサ |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017503298A (ja) * | 2014-12-14 | 2017-01-26 | ヴィア アライアンス セミコンダクター カンパニー リミテッド | アドレス・タグ・ビットに基づく動的キャッシュ置換ウェイ選択 |
JP2017503299A (ja) * | 2014-12-14 | 2017-01-26 | ヴィア アライアンス セミコンダクター カンパニー リミテッド | モードに応じてセットの1つ又は複数を選択的に選択するように動的に構成可能であるマルチモード・セット・アソシエイティブ・キャッシュ・メモリ |
JP2017507442A (ja) * | 2014-12-14 | 2017-03-16 | ヴィア アライアンス セミコンダクター カンパニー リミテッド | モードに応じてウェイの全部又はサブセットに選択的に割り当てるように動的に構成可能であるマルチモード・セット・アソシエイティブ・キャッシュ・メモリ |
US9798668B2 (en) | 2014-12-14 | 2017-10-24 | Via Alliance Semiconductor Co., Ltd. | Multi-mode set associative cache memory dynamically configurable to selectively select one or a plurality of its sets depending upon the mode |
US10698827B2 (en) | 2014-12-14 | 2020-06-30 | Via Alliance Semiconductor Co., Ltd. | Dynamic cache replacement way selection based on address tag bits |
US10719434B2 (en) | 2014-12-14 | 2020-07-21 | Via Alliance Semiconductors Co., Ltd. | Multi-mode set associative cache memory dynamically configurable to selectively allocate into all or a subset of its ways depending on the mode |
Also Published As
Publication number | Publication date |
---|---|
EP2799997A4 (en) | 2015-12-09 |
US9495299B2 (en) | 2016-11-15 |
EP2799997A1 (en) | 2014-11-05 |
US20140359223A1 (en) | 2014-12-04 |
CN104011692B (zh) | 2017-03-01 |
EP2799997B1 (en) | 2020-01-22 |
CN104011692A (zh) | 2014-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013098919A1 (ja) | データ処理装置 | |
US9552256B2 (en) | Semiconductor memory device including non-volatile memory, cache memory, and computer system | |
US8103830B2 (en) | Disabling cache portions during low voltage operations | |
JP4989872B2 (ja) | 半導体記憶装置および演算処理装置 | |
US6006311A (en) | Dynamic updating of repair mask used for cache defect avoidance | |
US6023746A (en) | Dual associative-cache directories allowing simultaneous read operation using two buses with multiplexors, address tags, memory block control signals, single clock cycle operation and error correction | |
US20070079184A1 (en) | System and method for avoiding attempts to access a defective portion of memory | |
US5958068A (en) | Cache array defect functional bypassing using repair mask | |
JPH03286495A (ja) | 半導体記憶装置 | |
JPH03141443A (ja) | データ格納方法及びマルチ・ウェイ・セット・アソシアチブ・キャッシュ記憶装置 | |
US9081719B2 (en) | Selective memory scrubbing based on data type | |
US6085288A (en) | Dual cache directories with respective queue independently executing its content and allowing staggered write operations | |
US10528473B2 (en) | Disabling cache portions during low voltage operations | |
JP2006059068A (ja) | プロセッサ装置 | |
US5883904A (en) | Method for recoverability via redundant cache arrays | |
US5943686A (en) | Multiple cache directories for non-arbitration concurrent accessing of a cache memory | |
US5867511A (en) | Method for high-speed recoverable directory access | |
US7649764B2 (en) | Memory with shared write bit line(s) | |
JP6149265B2 (ja) | データ処理装置 | |
US20140195733A1 (en) | Memory Using Voltage to Improve Reliability for Certain Data Types | |
Wen et al. | Read error resilient MLC STT-MRAM based last level cache | |
JPWO2013098919A1 (ja) | データ処理装置 | |
US9141451B2 (en) | Memory having improved reliability for certain data types | |
JP2007080283A (ja) | 半導体集積回路 | |
JP3672695B2 (ja) | 半導体記憶装置、マイクロコンピュータ、及びデータ処理装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11878346 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013551060 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011878346 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14367925 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |