CN100375066C - Method realizing priority reading memory based on cache memory line shifting - Google Patents

Method realizing priority reading memory based on cache memory line shifting

Info

Publication number
CN100375066C
CN100375066C · CNB2005100323066A · CN200510032306A
Authority
CN
China
Prior art keywords
cache
data
read
address
blocks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2005100323066A
Other languages
Chinese (zh)
Other versions
CN1858720A (en)
Inventor
汪东
卢晏安
陈书明
郭阳
孙书为
扈啸
方兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CNB2005100323066A priority Critical patent/CN100375066C/en
Publication of CN1858720A publication Critical patent/CN1858720A/en
Application granted granted Critical
Publication of CN100375066C publication Critical patent/CN100375066C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention relates to a method for prioritized memory reading based on the cache line offset, used to improve the efficiency of fetching the required data on a Cache miss. The technical scheme designs a new memory-control unit and, inside the Cache, offset-extraction logic, start-address generation logic, and data-recovery-and-counting logic. The Cache line is evenly divided into M blocks. On a Cache miss, the position of the missed data is determined first, and the offset f indicates the block containing it. When the Cache issues a memory-access request to the memory-control unit, it outputs f together with the line start address A. From A and f, the memory-control unit computes the start address of the missed block f, reads that block first and returns it to the Cache, then computes the start addresses of the other M-1 blocks, reads each of them, and returns the data to the Cache, completing the read of the whole Cache line. With the present invention, the data required on a Cache miss is read first, so the Cache miss state is cleared as soon as possible and the performance of the microprocessor is improved.

Description

Method for prioritized memory reading based on the cache line offset
Technical field:
The present invention relates to methods, in the design of embedded microprocessors and SoCs (System on Chip), for reading data from external memory into the on-chip Cache (cache memory), and in particular to methods of reading memory when a direct data path exists between the Cache and the memory-control unit.
Background art:
In a general-purpose CPU, the memory-control unit responsible for exchanging data with external memory is implemented entirely by the bridge chip on the motherboard. In embedded microprocessors such as DSPs (Digital Signal Processor), ASICs (Application Specific Integrated Circuit), and some SoCs, however, the memory-control unit is usually integrated on the chip together with the CPU core to form the microprocessor. In this case the microprocessor chip can attach external memory directly, without a bridge chip. The Cache is a fast data buffer common in modern microprocessor chips and is crucial to processor performance. In an embedded microprocessor, the Cache usually exchanges data with off-chip memory through the memory-control unit. When a read by the CPU core misses in the Cache, the required data is read in from the external memory Mem by the memory-control unit. In modern Cache designs, the Cache is generally organized as multiple lines, and the amount of data per line differs with the CPU architecture. On a read miss, the memory-control unit reads in the full line containing the missed data from the external memory Mem, and the miss state is then cleared. Because the addresses within each Cache line are contiguous, the Cache only needs to provide the line's start address when issuing a read request to the memory-control unit; the unit increments the address automatically and reads in the whole line continuously. The drawback of this method is that reading always starts from the first address of the line, which is inefficient: when the data required by the CPU core is the last data in the Cache line, the memory-control unit reads that data in last. Until then, the Cache miss state cannot be cleared, the CPU core does not get the required data, and the CPU pipeline remains stalled. Therefore, the earlier the memory-control unit reads in the data that caused the Cache miss, the shorter the time the CPU pipeline is stalled, and the better the performance of the microprocessor.
Summary of the invention:
The technical problem to be solved by the present invention is: in an embedded microprocessor or SoC where a direct data path exists between the Cache and the memory-control unit, to improve the efficiency with which the memory-control unit reads in the data that caused a Cache read miss of the CPU core, so as to clear the Cache miss state as early as possible and improve the performance of the microprocessor.
The technical scheme of the present invention is: design a new memory-control unit, and at the same time design offset-extraction logic, start-address generation logic, and data-recovery-and-counting logic in the Cache; together they accomplish the prioritized memory read. Since computer systems generally address memory with the byte (8 bits) as the smallest unit, the present invention assumes the Cache line size is M*S bytes and divides the Cache line into M blocks of S bytes each. If the computer system is addressed in units of words or long words, the Cache line is correspondingly divided in units of words or long words. For convenience of processing, usually M = 2^n and S = 2^p (n, p positive integers). When a read by the CPU core misses in the Cache, the data hit/miss decision logic first determines which block of the Cache line the missed data lies in. When the Cache issues a memory-access request to the memory-control unit, the offset-extraction logic produces the offset f and the start-address generation logic produces the start address A, and both are output to the memory-control unit simultaneously. f and A are produced as follows. Let k = log2(M*S) - 1, and let the logical address D of the data be q bits wide. The offset-extraction logic extracts bits k down to k-(n-1) of the logical address D of the missed data, i.e. D[k:k-(n-1)], an n-bit field, as the block offset f; if the missed data lies in block f of the Cache line (0 ≤ f ≤ M-1), then f = D[k:k-(n-1)]. The start-address generation logic clears bits k down to 0 of the q-bit logical address D to 0 and leaves the high bits unchanged, giving the start address A = D[q-1:k+1] followed by k+1 zero bits.
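As an illustration (a sketch for exposition, not the patented hardware), the extraction of f and A can be written in C for a hypothetical configuration with n = 2 and p = 3, i.e. M = 4 blocks of S = 8 bytes and k = 4:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: extracting the block offset f and the line
 * start address A from a logical address D, for an assumed line of
 * M = 2^n blocks of S = 2^p bytes, so k = log2(M*S) - 1 = n + p - 1. */

enum { N_BITS = 2, P_BITS = 3 };       /* example values: M = 4, S = 8 */
enum { K = N_BITS + P_BITS - 1 };      /* k = 4 for a 32-byte line     */

/* f = D[k : k-(n-1)]: the n bits just above the byte-within-block bits. */
static uint32_t block_offset(uint32_t d)
{
    return (d >> P_BITS) & ((1u << N_BITS) - 1u);
}

/* A: bits k..0 of D cleared to zero, high bits unchanged. */
static uint32_t line_start(uint32_t d)
{
    return d & ~((1u << (K + 1)) - 1u);
}
```

For example, D = 0x5A (byte 26 within its line) yields f = 3 and A = 0x40: the miss lies in the last of the four blocks, which is exactly the case where reading the missed block first saves the most stall time.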
The memory-control unit consists of a block-read sequence controller, a logical shifter, an adder, and an address-decoding and memory-control circuit. The block-read sequence controller, logical shifter, and adder are responsible for computing the first address A + (f+i)S of each block, where i = -f, 1-f, ..., -1, 1, 2, ..., M-1-f. The block-read sequence controller selects the block to be read, i.e. selects the value of i, computes f+i, and passes it to the logical shifter. The address-decoding and memory-control circuit first reads block f, where the missed data lies, i.e. i = 0 is selected. After block f has been read in, the other M-1 blocks are read in either continuous cyclic order or truncated order, completing the read request for the whole Cache line. Continuous cyclic order means that after block f, blocks f+1 through M-1 are read continuously and then blocks 0 through f-1, i.e. i = 1, 2, ..., M-1-f is selected first, followed by i = -f, 1-f, ..., -1. Truncated order means that after block f, blocks 0 through f-1 are read first and then blocks f+1 through M-1, i.e. i = -f, 1-f, ..., -1 is selected first, followed by i = 1, 2, ..., M-1-f. Clearly the second order splits the line into three segments and requires one more address computation than the first. However, in some processors with an n-level Cache structure, if the level n-1 Cache requires blocks 0 through f-1 first, the second order should be adopted. Therefore, in different microprocessor architectures, both read orders are feasible.
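The two read orders can be sketched as follows (a hypothetical illustration, not the patented circuit): each function fills out[] with the M block numbers in the order they are fetched, starting with the missed block f.

```c
/* Continuous cyclic order: f, f+1, ..., M-1, then wrap to 0, ..., f-1
 * (i = 0, 1, ..., M-1-f, then -f, ..., -1). */
static void cyclic_order(int m, int f, int out[])
{
    for (int j = 0; j < m; j++)
        out[j] = (f + j) % m;
}

/* Truncated order: f first, then restart at 0, ..., f-1, then f+1, ..., M-1
 * (i = 0, then -f, ..., -1, then 1, ..., M-1-f). */
static void truncated_order(int m, int f, int out[])
{
    int j = 0;
    out[j++] = f;                                 /* the missed block first */
    for (int b = 0; b < f; b++)     out[j++] = b; /* the segment before f   */
    for (int b = f + 1; b < m; b++) out[j++] = b; /* the segment after f    */
}
```

With M = 4 and f = 2, the cyclic order fetches blocks 2, 3, 0, 1 and the truncated order fetches 2, 0, 1, 3: both deliver the missed block first and differ only in how the remaining three blocks are sequenced.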
Because S = 2^p (p a positive integer), and to avoid a complex arithmetic unit such as a multiplier, the present invention computes (f+i)S in the memory-control unit by logical shifting: the logical shifter shifts f+i left by p bits to obtain the result of (f+i)S and passes it to the adder. The adder adds (f+i)S to the first address of the Cache line to obtain the first address A + (f+i)S of the offset block, and then hands this address, together with the memory-read command, to the address-decoding and memory-control circuit, which reads in one block of data continuously from external memory starting at the first address A + (f+i)S and returns it to the Cache.
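A minimal sketch of this shift-and-add address computation (illustrative only, with an assumed p = 3, i.e. S = 8 bytes per block):

```c
#include <stdint.h>

enum { P = 3 };  /* assumed block size S = 2^p = 8 bytes */

/* First address of the block at index f+i within the line that starts
 * at A: A + (f+i)*S, computed as a left shift by p plus one addition,
 * so no multiplier is needed. f+i always lies in 0..M-1. */
static uint32_t block_address(uint32_t a, int f, int i)
{
    return a + ((uint32_t)(f + i) << P);
}
```

For example, with line start A = 0x40 and missed block f = 2, i = 0 yields 0x50 (the missed block itself) and i = -2 yields 0x40, the start of the line.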
On receiving block f, the data-recovery-and-counting logic of the Cache writes this block into the Cache memory bank and at the same time returns the data to the CPU core or the upper-level Cache, so the Cache miss state can be cleared. Then, according to the read order agreed with the memory-control unit (continuous cyclic or truncated), the data-recovery-and-counting logic writes the subsequent M-1 blocks, as they are received, into the corresponding positions of the Cache line in the Cache memory bank, completing the read of the whole Cache line.
In theory, the larger the number of blocks M into which the Cache line is divided, the smaller the number of bytes S per block, so once the memory-control unit has read the missed block first, the Cache clears its miss state earlier and the performance of the CPU is higher. But the larger M is, the larger n is, the more complex the decision and control logic of the Cache and the memory-control unit, and the higher the hardware cost. Therefore, when choosing M, a trade-off must be made between CPU performance and hardware cost: keep M ≤ 8 as far as possible, so that the Cache can indicate the offset f of the missed block with a signal no more than 3 bits wide, and the control logic of the Cache and the memory-control unit remains fairly easy to implement.
For an embedded microprocessor with only one level of Cache, the above method can be adopted between the level-1 Cache and the memory-control unit. If the embedded microprocessor contains n levels of Cache, the above method should be adopted between the level-n Cache and the memory-control unit.
Adopting the present invention achieves the following beneficial technical effects:
1. The present invention reads the data required on a Cache miss first, clearing the Cache miss state as early as possible and greatly improving the performance of the embedded microprocessor;
2. For a multi-way set-associative Cache, the Cache line is, without blocking, the smallest data unit of the Cache memory bank; dividing the Cache line further into blocks is therefore a further refinement of this smallest data unit, independent of the number of ways and the associative structure of the Cache. The associative structure of the Cache serves for quickly deciding whether data has missed in the Cache, while the present invention is a method for improving data-read efficiency after a miss has been determined. The present invention can be adopted as long as a direct path exists between the Cache and the memory-control unit of the embedded microprocessor.
Description of drawings:
Fig. 1 is a schematic diagram of a typical embedded microprocessor containing a Cache and a memory-control unit.
Fig. 2 shows the structure of a 4-way set-associative Cache.
Fig. 3 is a schematic diagram of the composition of the Cache and the memory-control unit of the present invention.
Fig. 4 is a schematic diagram of the partitioning of a Cache line after the present invention is adopted.
Fig. 5 is a schematic diagram of the continuous cyclic order and the truncated order in which the memory-control unit reads the other M-1 blocks after reading in block f first.
Fig. 6 is a schematic diagram of the application of the present invention with a single-level Cache and with a multi-level Cache, respectively.
Embodiment:
Fig. 1 is a schematic diagram of a typical embedded microprocessor containing a Cache and a memory-control unit, with external memory attached. When the CPU core needs data, it first sends a request to the Cache. If, after checking, the Cache finds the data present, a Cache hit occurs and the Cache supplies the data to the CPU core directly. If the data is not in the Cache, a Cache miss occurs, and the Cache sends a memory-access request to the memory-control unit to read the external memory Mem. On every Cache miss, one line of data is requested from the memory-control unit. After reading the data from the external memory Mem, the memory-control unit returns it to the Cache; the Cache clears the miss state and supplies the required data to the CPU.
Fig. 2 shows the structure of a 4-way set-associative Cache. As the figure shows, each way of the Cache contains a number of Cache lines. In different microprocessors, the number of Cache lines and the data capacity of each line may differ. On every Cache miss, one line of data is requested from the memory-control unit. Without blocking, the Cache line is the smallest data unit of the Cache memory bank; dividing the Cache line further into blocks is therefore a further refinement of this smallest data unit, independent of the number of ways and the associative structure of the Cache. The associative structure serves for quickly deciding whether data has missed in the Cache, while the present invention is a method for improving data-read efficiency after a miss has been determined. In Fig. 2 the Cache line is divided into 4 blocks, i.e. M = 4, and the Cache can indicate the offset f of each block with a 2-bit binary signal.
Fig. 3 is a schematic diagram of the composition of the Cache and the memory-control unit of the present invention. The left box is the logical organization of the Cache, the right box that of the memory-control unit. To simplify the figure, only the three signals f, A, and return-data between the Cache and the memory-control unit are drawn; signals such as read, data width, and data-ready are omitted. When requesting data from the Cache, the CPU core supplies the logical address D of the data. The data hit/miss decision logic in the Cache compares the address D with the Tags buffered in the Cache (flag bits indicating whether the data is in the Cache memory bank). If D matches a Tag, a Cache hit occurs and the Cache supplies the data to the CPU core directly. If D matches no Tag, a Cache miss occurs; the Cache sends a memory-access request to the memory-control unit and supplies the start address A of the Cache line together with the n-bit-wide offset f. The offset f is produced by the offset-extraction logic, which extracts bits k down to k-(n-1) of the logical address D as f and outputs it to the memory-control unit. The start address A is produced by the start-address generation logic, which clears bits k down to 0 of the q-bit logical address D to 0 and leaves the high bits unchanged, i.e. A = D[q-1:k+1] followed by k+1 zero bits, and outputs it to the memory-control unit. The memory-control unit consists of the block-read sequence controller, logical shifter, adder, and address-decoding and memory-control circuit. The block-read sequence controller, logical shifter, and adder compute the first address A + (f+i)S of each block, where i = -f, 1-f, ..., -1, 1, 2, ..., M-1-f. After receiving the memory-access request of the Cache, the block offset f, and the line first address A, the block-read sequence controller selects the block to be read, i.e. selects the value of i, computes f+i, and passes it to the logical shifter. In the present invention the missed block is read first, i.e. i = 0 is selected. Because S = 2^p (p a positive integer), the logical shifter obtains the result of (f+i)S simply by shifting f+i left by p bits, and then passes the result to the adder. The adder adds (f+i)S to the first address of the Cache line to obtain the first address A + (f+i)S of the offset block, and then hands this address, together with the memory-read command, to the address-decoding and memory-control circuit, which reads in one block of data continuously from external memory starting at A + (f+i)S and returns it to the Cache.
Fig. 4 is a schematic diagram of the partitioning of a Cache line after the present invention is adopted. The start address of the Cache line is A; the offsets of the first addresses of the blocks relative to A are 0, S, 2S, ..., fS, (f+1)S, (f+2)S, ..., (M-1)S. The shaded block is the one containing the missed data; its first address is A + fS.
Fig. 5 is a schematic diagram of the continuous cyclic order and the truncated order in which the memory-control unit reads the other M-1 blocks after reading in block f first. In continuous cyclic order, after block f is read in, blocks f+1 through M-1 are read continuously, and then the read returns to the start address A and blocks 0 through f-1 are read. That is, in Fig. 5 the block-read sequence controller first selects i = 0, then in turn i = 1, 2, ..., M-1-f, -f, 1-f, ..., -1. In truncated order, after block f is read in, the read first returns to the start address A and blocks 0 through f-1 are read, and then blocks f+1 through M-1. That is, in Fig. 5 the block-read sequence controller first selects i = 0, then in turn i = -f, 1-f, ..., -1, 1, 2, ..., M-1-f.
Fig. 6 shows the application of the present invention in an embedded microprocessor with only one level of Cache and in one containing multiple levels of Cache, respectively. In a microprocessor with only one level of Cache, the Cache performs line replacement with the block returned by the memory-control unit and then supplies the data directly to the CPU core. In an embedded microprocessor containing multiple levels of Cache, the level-n Cache performs line replacement with the returned block and hands the data to the level n-1 Cache; the Caches at each level then forward the data level by level according to the established Cache protocol until the data is supplied to the CPU core.

Claims (6)

1. A method for prioritized memory reading based on the cache line offset, characterized in that a new memory-control unit is designed, and offset-extraction logic, start-address generation logic, and data-recovery-and-counting logic are designed in the Cache; a Cache line of size M*S bytes is divided into M blocks of S bytes each, where M and S are integral powers of 2, i.e. M = 2^n, S = 2^p, with n, p positive integers; when a read by the CPU core misses in the Cache, the data hit/miss decision logic first determines which block of the Cache line the missed data lies in; when the Cache issues a memory-access request to the memory-control unit, the offset-extraction logic produces the offset f and the start-address generation logic produces the start address A, and both are output to the memory-control unit simultaneously; the memory-control unit consists of a block-read sequence controller, a logical shifter, an adder, and an address-decoding and memory-control circuit; the block-read sequence controller, logical shifter, and adder are responsible for computing the first address A + (f+i)S of each block, where i = -f, 1-f, ..., -1, 1, 2, ..., M-1-f; the block-read sequence controller selects the block to be read, i.e. selects the value of i, computes f+i, and passes it to the logical shifter; the address-decoding and memory-control circuit reads block f, where the missed data lies, i.e. i = 0 is selected, and after block f is read in, the other M-1 blocks are read in either continuous cyclic order or truncated order; on receiving block f, the data-recovery-and-counting logic of the Cache writes this block into the Cache memory bank and at the same time returns the data to the CPU core or the upper-level Cache, so the Cache miss state can be cleared; then, following the continuous cyclic or truncated read order, the data-recovery-and-counting logic writes the subsequent M-1 blocks, as they are received, into the corresponding positions of the Cache line in the Cache memory bank, completing the read of the whole Cache line.
2. The method for prioritized memory reading based on the cache line offset as claimed in claim 1, characterized in that, letting k = log2(M*S) - 1 and the logical address D of the data be q bits wide, the offset-extraction logic extracts bits k down to k-(n-1) of the logical address D of the missed data, i.e. D[k:k-(n-1)], an n-bit field, as the block offset f; if the missed data lies in block f of the Cache line, 0 ≤ f ≤ M-1, then f = D[k:k-(n-1)]; the start-address generation logic clears bits k down to 0 of the q-bit logical address D to 0 and leaves the high bits unchanged to obtain the start address, that is: A = D[q-1:k+1] followed by k+1 zero bits.
3. The method for prioritized memory reading based on the cache line offset as claimed in claim 1, characterized in that said continuous cyclic read order means that after block f is read in, blocks f+1 through M-1 are read continuously and then blocks 0 through f-1, i.e. i = 1, 2, ..., M-1-f is selected first, followed by i = -f, 1-f, ..., -1; and said truncated read order means that after block f is read in, blocks 0 through f-1 are read first and then blocks f+1 through M-1, i.e. i = -f, 1-f, ..., -1 is selected first, followed by i = 1, 2, ..., M-1-f.
4. The method for prioritized memory reading based on the cache line offset as claimed in claim 1, characterized in that (f+i)S is computed in the memory-control unit by logical shifting: the logical shifter shifts f+i left by p bits to obtain the result of (f+i)S and passes the result to the adder; the adder adds (f+i)S to the first address of the Cache line to obtain the first address A + (f+i)S of the offset block, and then hands this address, together with the memory-read command, to the address-decoding and memory-control circuit, which reads in one block of data continuously from external memory starting at the first address A + (f+i)S and returns it to the Cache.
5. The method for prioritized memory reading based on the cache line offset as claimed in claim 1, characterized in that, when choosing the value of M, a trade-off is made between CPU performance and hardware cost, keeping M ≤ 8.
6. The method for prioritized memory reading based on the cache line offset as claimed in claim 1, characterized in that, for an embedded microprocessor with only one level of Cache, the above method is adopted between the level-1 Cache and the memory-control unit; if the embedded microprocessor contains n levels of Cache, the above method is adopted between the level-n Cache and the memory-control unit.
CNB2005100323066A 2005-10-28 2005-10-28 Method realizing priority reading memory based on cache memory line shifting Expired - Fee Related CN100375066C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100323066A CN100375066C (en) 2005-10-28 2005-10-28 Method realizing priority reading memory based on cache memory line shifting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2005100323066A CN100375066C (en) 2005-10-28 2005-10-28 Method realizing priority reading memory based on cache memory line shifting

Publications (2)

Publication Number Publication Date
CN1858720A CN1858720A (en) 2006-11-08
CN100375066C true CN100375066C (en) 2008-03-12

Family

ID=37297629

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100323066A Expired - Fee Related CN100375066C (en) 2005-10-28 2005-10-28 Method realizing priority reading memory based on cache memory line shifting

Country Status (1)

Country Link
CN (1) CN100375066C (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100593146C * 2007-11-09 2010-03-03 上海可鲁系统软件有限公司 Method for preventing industrial automation system from avalanche
CN101751245B (en) * 2010-01-18 2013-05-15 龙芯中科技术有限公司 Processor Cache write-in invalidation processing method based on memory access history learning
CN102378971B (en) * 2011-08-05 2014-03-12 华为技术有限公司 Method for reading data and memory controller
US9589606B2 (en) * 2014-01-15 2017-03-07 Samsung Electronics Co., Ltd. Handling maximum activation count limit and target row refresh in DDR4 SDRAM

Citations (3)

Publication number Priority date Publication date Assignee Title
WO1994003856A1 (en) * 1992-08-07 1994-02-17 Massachusetts Institute Of Technology Column-associative cache
US6587937B1 (en) * 2000-03-31 2003-07-01 Rockwell Collins, Inc. Multiple virtual machine system with efficient cache memory design
CN1499382A (en) * 2002-11-05 2004-05-26 华为技术有限公司 Method for implementing cache in high efficiency in redundancy array of inexpensive discs

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
WO1994003856A1 (en) * 1992-08-07 1994-02-17 Massachusetts Institute Of Technology Column-associative cache
US6587937B1 (en) * 2000-03-31 2003-07-01 Rockwell Collins, Inc. Multiple virtual machine system with efficient cache memory design
CN1499382A (en) * 2002-11-05 2004-05-26 华为技术有限公司 Method for implementing cache in high efficiency in redundancy array of inexpensive discs

Non-Patent Citations (1)

Title
The influence of cache on system performance and on speeding up program execution. Chen Xun. Fujian Computer, No. 6, 2004 *

Also Published As

Publication number Publication date
CN1858720A (en) 2006-11-08

Similar Documents

Publication Publication Date Title
CN101470670B (en) Cache memory having sector function
US7694077B2 (en) Multi-port integrated cache
CN101256481B (en) Data processor and memory read active control method
CN101587447B (en) System supporting transaction storage and prediction-based transaction execution method
CN102693187B (en) In order to reduce the equipment thrown out in more levels of cache memory level and method
CN105095116A (en) Cache replacing method, cache controller and processor
CN103730149B (en) A kind of read-write control circuit of dual-ported memory
US7861192B2 (en) Technique to implement clock-gating using a common enable for a plurality of storage cells
CN102541756A (en) Cache memory system
CN102473091A (en) Extended page size using aggregated small pages
US4047244A (en) Microprogrammed data processing system
CN101918925B (en) Second chance replacement mechanism for a highly associative cache memory of a processor
CN103959258A (en) Background reordering - a preventive wear-out control mechanism with limited overhead
CN101751980A (en) Embedded programmable memory based on memory IP core
CN100375066C (en) Method realizing priority reading memory based on cache memory line shifting
US6351788B1 (en) Data processor and data processing system
WO2006078837A2 (en) Methods and apparatus for dynamically managing banked memory
US11301250B2 (en) Data prefetching auxiliary circuit, data prefetching method, and microprocessor
CN100382050C (en) Data sotrage cache memory and data storage cache system
JPS6244851A (en) Memory access circuit
CN102122270B (en) Method and device for searching data in memory and memory
US20020161976A1 (en) Data processor
CN105095104A (en) Method and device for data caching processing
JP2002055879A (en) Multi-port cache memory
US7007135B2 (en) Multi-level cache system with simplified miss/replacement control

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080312

Termination date: 20191028

CF01 Termination of patent right due to non-payment of annual fee