CN101558393B - Configurable cache for a microprocessor - Google Patents

Configurable cache for a microprocessor

Info

Publication number
CN101558393B
CN101558393B (application CN200780046003.7A)
Authority
CN
China
Prior art keywords
cache
cache line
instruction
line
branch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200780046003.7A
Other languages
Chinese (zh)
Other versions
CN101558393A (en)
Inventor
Rodney J. Pesavento
Gregg D. Lahti
Joseph W. Triece
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microchip Technology Inc
Original Assignee
Microchip Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 11/928,322 (US7966457B2)
Application filed by Microchip Technology Inc filed Critical Microchip Technology Inc
Publication of CN101558393A
Application granted
Publication of CN101558393B


Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A cache module for a central processing unit has a cache control unit coupled with a memory, and a cache memory coupled with the control unit and the memory, wherein the cache memory has a plurality of cache lines, each cache line having a storage area for storing instructions to be issued sequentially and associated control bits, and wherein at least one cache line of the plurality of cache lines has at least one branch trail control bit which, when set, provides an automatic locking function for the cache line in case a predefined branch instruction has been issued.

Description

Configurable cache for a microprocessor
Cross-reference to related applications
This application claims the priority of U.S. Provisional Application No. 60/870,622, entitled "LINKED BRANCH HISTORY BUFFER", filed on December 19, 2006, and of U.S. Provisional Application No. 60/870,188, entitled "CONFIGURABLE PICOCACHE WITH PREFETCH AND LINKED BRANCH TRAIL BUFFERS, AND FLASH PREFETCH BUFFER", filed on December 15, 2006; both provisional applications are incorporated herein in their entirety.
Technical field
The present invention relates to a configurable cache for a microprocessor or microcontroller.
Background technology
A bottleneck in pipelined microprocessor architectures is the high access time of the memory system. A typical way to solve this problem is to use a large cache memory and to transfer multiple data words per clock after the initial, slow memory access. Small microcontroller designs, however, are limited in the amount of cache memory that can be placed on chip and cannot exploit a large, high-latency but high-throughput wide memory. Therefore, a configurable cache for a microcontroller or microprocessor is needed.
Summary of the invention
According to an embodiment, a cache module for a central processing unit may have a cache control unit coupled with a memory, and a cache memory coupled with the control unit and the memory, wherein the cache memory has a plurality of cache lines, each cache line having a storage area for storing instructions to be issued sequentially and associated control bits, and wherein at least one of the cache lines has at least one branch trail control bit which, when set, provides an automatic locking function for that cache line in case a predefined branch instruction has been issued.
According to a further embodiment, at least one cache line may further comprise a locking control bit for locking the cache line manually or automatically. According to a further embodiment, each cache line may further comprise an associated control bit indicating the validity of the cache line. According to a further embodiment, each cache line may further comprise an associated control bit indicating whether the cache line is an instruction cache line or a data cache line. According to a further embodiment, each cache line may further comprise an associated address tag bit field to be compared with an address request, an associated mask bit field for storing a mask, and a masking unit for masking bits of the associated address tag of the cache line according to the mask bit field. According to a further embodiment, the cache module may further comprise a prefetch unit coupled with the memory and the cache memory, wherein the prefetch unit is designed to load instructions from the memory into another cache line, the instructions following the instruction currently issued from the cache line. According to a further embodiment, the prefetch unit may be controllable to be enabled or disabled. According to a further embodiment, a least recently used algorithm may determine which cache line is to be overwritten.
According to another embodiment, a method of operating a cache memory for a central processing unit may comprise the steps of: storing a plurality of sequential instructions in a cache line of the cache memory; setting a branch trail function for the cache line; and executing instructions fetched from the cache line, wherein the cache line is automatically locked after a subroutine has been called.
According to a further embodiment, the step of automatically locking the cache line may not be performed if the instruction calling the subroutine is the last instruction in the cache line. According to a further embodiment, the method may further comprise the step of resetting the lock of the cache line upon return from the subroutine. According to a further embodiment, the subroutine may be called after execution of an instruction contained in the cache line. According to a further embodiment, the subroutine may be called by an interrupt.
According to yet another embodiment, a method of operating a cache memory for a central processing unit may comprise the steps of: providing a plurality of cache lines, each having a storage area for storing instructions or data and a plurality of control bits for controlling the function of each cache line; storing a plurality of sequential instructions in a cache line of the plurality of cache lines; and, after setting a branch trail function bit among the control bits of the cache line, automatically locking the cache line when a subroutine is called during execution of the instructions stored in the cache line.
According to a further embodiment, the method may further comprise the step of resetting the branch trail function for the cache line upon return from the subroutine. According to a further embodiment, the method may further comprise the step of providing a manual locking function for a cache line by means of a locking control bit among the plurality of control bits. According to a further embodiment, the method may further comprise the step of providing, by means of a type control bit among the plurality of control bits, a type function indicating whether a cache line is used to store instructions or data. According to a further embodiment, the method may further comprise the step of providing, by means of a validity control bit among the plurality of control bits, a validity function indicating the validity of the contents stored in a cache line. According to a further embodiment, the method may further comprise the step of providing, for each cache line, a plurality of bits for masking bits of the associated address tag field. According to a further embodiment, the method may further comprise the step of, after execution of an instruction stored in a cache line, initiating a prefetch of instructions stored in the memory that follow the instructions stored in the cache line. According to a further embodiment, the prefetched instructions may be stored in a cache line determined by a least recently used algorithm. According to a further embodiment, the method may further comprise the step of storing an instruction stream containing an instruction loop in at least one cache line and locking the cache line to prevent it from being overwritten. According to a further embodiment, the method may further comprise the step of storing sequential instructions in a plurality of cache lines and locking all cache lines that contain the instructions forming an instruction loop.
Brief description of the drawings
A more complete understanding of the present invention may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which:
Fig. 1 illustrates a first embodiment of a configurable cache;
Fig. 2 illustrates details of the cache memory sections according to the embodiment of Fig. 1;
Fig. 3 illustrates a second embodiment of a configurable cache;
Fig. 4 illustrates details of the cache lines of the cache memory according to the embodiment of Fig. 3;
Fig. 5 illustrates exemplary registers for controlling functions of an embodiment of the cache memory;
Fig. 6 illustrates further registers which map the contents of a cache line according to one of the embodiments;
Fig. 7 illustrates a logic circuit for generating a specific signal; and
Fig. 8 illustrates a flow chart showing a simplified cache access process.
While the invention is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the invention is to cover all modifications and equivalents as defined by the appended claims.
Embodiment
A standard microcontroller unit (MCU) usually comprises a microprocessor with an 8- or 16-bit core; 32-bit cores have only recently entered the MCU arena. All these cores generally have no cache memory. Only complex high-end 32-bit microcontrollers may have a cache, because for an MCU a cache is large and expensive. The disclosed embodiments provide a middle ground: a small configurable cache which can be configured at run time, can act as a prefetch and branch trail buffer, and at the same time provides an optimal cache depth for MCU applications.
According to an embodiment, the cache memory can be designed to be configurable so that it operates very flexibly. For example, it can be programmed to operate strictly as a cache, which is useful for small loop optimization; to this end, the respective cache lines containing the loop can be locked manually. It can also dedicate a given number of cache lines (for example, up to half of the lines) to linked branch history storage, which can accelerate function calls and returns. Finally, it can be configured to prefetch sequential program information into the least recently used cache line when the first instruction is issued from a cache line. By prefetching program instructions at twice the rate at which the microprocessor can consume them, the memory system provides spare bandwidth for fetching program data without stalling the program instruction stream. In fact, not all program data fetches are transparent. The cache design approach according to the various embodiments provides a mechanism to improve performance by balancing the benefits of a low-latency cache against a high-latency but high-throughput wide memory.
According to an embodiment, the cache memory can be designed as a fully associative cache that is configurable at run time and during operation. Fig. 1 shows a block diagram of an embodiment of such a configurable cache 100. Coupling buses 110a and 110b couple the cache memory to the central processing unit (CPU) of a microcontroller or microprocessor. The cache 100 comprises a cache controller 120 which is coupled to an instruction cache section 130 and a data cache section 140. Each instruction cache section comprises a dedicated instruction memory with associated control bits and tags (for example, organized in lines), where a line may comprise a storage area for storing a plurality of words. For instance, a word may be 16 bits long and a line of the instruction cache 130 may hold 4 double words, resulting in 4 x 32 bits. According to one embodiment, a small instruction cache 130 may comprise 4 such lines. According to other embodiments, other configurations may be more advantageous, depending on the design of the respective processor. According to an embodiment, the data cache section 140 can be designed similarly to the instruction cache section 130. Depending on the design model, separate data and instruction cache sections 130 and 140 may be desirable, for example in a processor having a Harvard architecture. In a conventional von Neumann type microprocessor, however, a combined cache may be used which caches instructions and data from the same memory. Fig. 1 only shows a program flash memory 160 (PFM) connected to the instruction and data caches 130, 140 according to a processor with a Harvard architecture. In a Harvard architecture a data memory can be coupled separately, or the memory 160 can be a unified instruction/data memory as used in a von Neumann architecture. A multiplexer 150 is controlled, for example, by the cache controller 120 and provides the data/instructions stored in the caches 130, 140 to the CPU via bus 110b.
Fig. 2 shows the structure of the instruction cache 130 and the data cache according to an embodiment in more detail. The arrangement again shows separate caches for instructions and data. Each line of the cache comprises a data/instruction storage area and a plurality of associated control and tag bits (for example, IFM, TAG and BT). IFM designates a specific mask which can be used, for example, to mask certain bits of the address tag field TAG, which contains the start address of the cached data/instructions, as explained in more detail below. Each line may comprise, for example, 4 x 32 bits of instruction/data cache, as shown in Fig. 2. The tag field may comprise the actual address and additional bits indicating the validity, lock state, type, etc. of the respective cache line. Furthermore, as shown in Fig. 2, a branch trail bit BT is provided for each cache line. When this bit is set, the CPU can automatically lock the associated cache line when a subroutine call instruction is executed in that line and the instruction is not the last instruction in the line. In that case the respective cache line is automatically locked, and when the program returns from the respective subroutine, the instructions following the call instruction will still be present in the cache, as explained in more detail below.
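The tag/mask lookup described above can be sketched in C. This is a minimal software model, not the hardware implementation: the structure layout and field names are illustrative, and the rule that the in-line offset bits [3:0] are ignored alongside the masked tag bits follows the 16-byte line size used in the examples below.

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Hypothetical model of one cache line: a 16-byte (4 x 32-bit)
   storage area plus an address tag (TAG), a tag mask (IFM/MASK)
   and control bits. */
typedef struct {
    uint32_t tag;      /* line start address, 16-byte aligned        */
    uint32_t mask;     /* set bits are IGNORED during tag comparison */
    bool     valid;    /* V: line contents are valid                 */
    bool     locked;   /* L: line may not be overwritten             */
    bool     type;     /* T: data (true) vs. instruction line        */
    bool     bt;       /* BT: branch-trail auto-lock enable          */
    uint32_t word[4];  /* cached instruction/data words              */
} cache_line_t;

/* A line hits when the requested address, with the in-line offset
   (bits [3:0]) and any masked tag bits ignored, equals the tag. */
bool line_hit(const cache_line_t *line, uint32_t addr)
{
    uint32_t ignore = line->mask | 0xFu;   /* offset + masked bits */
    return line->valid &&
           ((addr & ~ignore) == (line->tag & ~ignore));
}
```

With this model, a request for 0x001004 hits a line tagged 0x001000, and setting a mask bit widens the match range, which is the purpose of the IFM field.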
Fig. 3 shows another embodiment of a configurable cache. The cache controller 120 provides the control signals and information for all functions of the cache. For instance, the cache controller 120 controls TAG logic 310, which is coupled with hit logic 320; the hit logic 320 also processes data from the cache controller 120 and from a prefetch tag 330 provided by the cache controller. The hit logic generates signals controlling a cache line address encoder 340, which addresses the cache memory 350. In this embodiment the cache memory 350 comprises, for example, a data/instruction memory of 16 lines, each line comprising, for example, 4 x 32-bit double words for instruction/data storage. A program flash memory 160 is coupled with the cache controller 120 and, via a prefetch unit 360, with the cache memory; the prefetch unit 360 is also connected to the cache line address encoder 340. The prefetch unit 360 transfers instructions into the respective cache lines of the cache memory 350, addressed either directly or through the cache line address encoder 340. To this end, the prefetch unit 360 may comprise one or more buffers which store the instructions to be transferred into the storage area of the respective cache line. The multiplexer 150 is controlled to select the respective byte/word/double word from the cache memory 350 or from the prefetch buffer of unit 360 and to provide it to the CPU bus 110b.
Fig. 4 shows the cache memory 350 in more detail. In this embodiment, 16 cache lines are provided. Each line comprises a plurality of control bits and a 4 x 32-bit instruction/data storage area (Word0 to Word3). The control bits comprise a mask MASK, an address tag TAG, a validity bit V, a locking bit L, a type bit T and a branch trail bit BT. The mask MASK allows selected bits of the address tag TAG to be masked during the comparison performed by the hit logic 320, as explained in more detail below. The address tag TAG indicates the start of the cache line in the memory 160. As explained in more detail below, the address tag TAG is readable and writable, and a write by the user will force a prefetch. The validity bit V indicates that the entries in the associated cache line are valid; this bit cannot be changed by the user and is set or reset automatically. The locking bit L indicates whether a cache line is locked and therefore cannot be overwritten; this bit can be changed by the user or set automatically in connection with the branch trail function, as explained below. The bit T indicates the type of the cache line, that is, whether the line serves as an instruction cache line or as a data cache line. This bit can be designed to be changeable by the user, which allows a very flexible assignment and configuration of the cache. Instead of designating individual cache lines as data cache lines through a single assignable bit T, a general configuration register may define how many lines are used for caching data, the remaining cache lines being used for instruction caching. In this embodiment, a bit T may still be provided to indicate which cache lines have been designated as data cache lines; the bit T then cannot be modified directly. As explained later, a cache according to an embodiment can, for example, be configured with zero, 1, 2 or 4 cache lines for data caching purposes. This assignment thus splits the cache into two parts; depending on the number of assigned lines, data cache lines may, for example, be assigned from the bottom of the cache upwards. Other configurations with more data cache lines are of course possible, depending on the respective design of the cache. Thus, when set, the bit T indicates that the line is used for data caching.
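The line-splitting rule above can be sketched in a few lines of C. The encoding of the two-bit field (0, 1, 2 or 4 data lines) and the bottom-up assignment follow the description; mapping field value 3 to four lines is an assumption for illustration, not a layout taken verbatim from the patent.

```c
#include <stdbool.h>
#include <assert.h>

/* Number of data cache lines selected by a DCSZ-style two-bit field:
   assumed encoding 0 -> 0, 1 -> 1, 2 -> 2, 3 -> 4. */
unsigned data_lines(unsigned dcsz)
{
    return dcsz ? 1u << (dcsz - 1u) : 0u;
}

/* Data lines are assumed to be assigned from the bottom of the cache
   upwards; all remaining lines cache instructions. */
bool is_data_line(unsigned line_index, unsigned dcsz)
{
    return line_index < data_lines(dcsz);
}
```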
Fig. 7 shows an embodiment of a logic circuit that can be used to implement the branch trail function. As explained above, the branch trail bit 750 serves to automatically lock the associated cache line when a branch to a subroutine — or a trap, interrupt or other instruction from which execution will return — is executed from within the cache line and is not the last instruction in that line. When the bit is set, a subroutine-call type instruction has been executed and the program branches out of its linear execution sequence, the CPU can automatically lock the associated line by setting bit 740 via logic gate 760. The execution of such a subroutine-call type instruction can be detected in the execution unit and signaled to logic gate 760 by signal 770. This functionality is enabled when at least one instruction remains in the cache line that has not yet been executed but will be executed when the program returns from the respective subroutine. If the call instruction is placed in the last storage location of the cache line, there is no need to keep the line locked, because the following instructions will be in a different cache line or may not be in the cache at all. When bit 750 is set, the CPU sets and resets the locking bit 740 automatically upon execution of the respective subroutine or interrupt call, which is signaled to logic gate 760 by detection signal 770.
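The auto-lock rule implemented by the logic of Fig. 7 can be sketched as follows. This is a behavioral sketch under the assumption of a 16-byte line holding four 32-bit instructions (slots 0-3); the function names and the slot parameter are illustrative, not part of the patent's terminology.

```c
#include <stdbool.h>
#include <assert.h>

/* Per-line control state relevant to the branch trail function. */
typedef struct {
    bool bt;      /* BT control bit for this line  */
    bool locked;  /* L bit, managed automatically  */
} line_ctl_t;

/* Called when a subroutine-call type instruction executes from
   instruction slot `slot` (0..3) of a line. A call from any slot but
   the last leaves not-yet-executed instructions behind in the line,
   so the line is locked until the subroutine returns. */
void on_call_executed(line_ctl_t *line, unsigned slot)
{
    if (line->bt && slot < 3u)
        line->locked = true;   /* preserve the return target */
}

/* Called on return from the subroutine: release the automatic lock. */
void on_return(line_ctl_t *line)
{
    if (line->bt)
        line->locked = false;
}
```

Note how a call from the last slot (slot 3) leaves the line unlocked, matching the observation that the return target would then lie in a different line anyway.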
Fig. 5 and Fig. 6 illustrate examples of a general cache control register 510 and further control registers 610 to 660 which may be implemented in a microprocessor or microcontroller to control the behavior and functionality of the configurable cache. All registers can be designed as 32-bit registers for use in a 32-bit environment; however, they can easily be adapted to operate in a 16- or 8-bit environment. For instance, the register CHECON comprises bit 31 to enable or disable the entire cache, and bit 16 CHECOH can be set to enforce cache coherency on a PFM program cycle. For instance, when set, CHECOH may invalidate all data and instruction lines, or it may invalidate all data lines but only the unlocked instruction lines. Bit 24 can be used to enable a forced data cache function, as explained in more detail below; when set, this function forces data caching whenever the cache bandwidth is not being used for fetching instructions. Bits 11-12 BTSZ can be used to enable/disable the branch trail function. For instance, in one embodiment, when enabled, the branch trail size can be set to 1, 2 or 4 lines, so that 1, 2 or 4 cache lines will have this functionality. According to other embodiments, all cache lines can be enabled for this functionality. Bits 8-9 DCSZ define the number of data cache lines, as explained above; in one embodiment, the number can be set to enable 0, 1, 2 or 4 data cache lines.
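The CHECON fields named above can be extracted with simple shift-and-mask macros. The bit positions (ON at bit 31, CHECOH at bit 16, BTSZ at bits 11-12, DCSZ at bits 8-9, PFMWS at bits 0-2) follow the text; the exact register layout is device-specific, so treat these masks as an assumption for illustration.

```c
#include <stdint.h>
#include <assert.h>

/* Illustrative field extraction for a CHECON-style control register. */
#define CHECON_ON(r)     (((r) >> 31) & 0x1u)  /* cache enable           */
#define CHECON_CHECOH(r) (((r) >> 16) & 0x1u)  /* coherency on PFM prog. */
#define CHECON_BTSZ(r)   (((r) >> 11) & 0x3u)  /* branch-trail size      */
#define CHECON_DCSZ(r)   (((r) >>  8) & 0x3u)  /* data-cache lines       */
#define CHECON_PFMWS(r)  ( (r)        & 0x7u)  /* flash wait states      */
```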
Bits 4-5 PREFEN can be used to selectively enable predictive prefetch for cacheable and non-cacheable regions of the memory. A cacheable region is, for example, a region of memory that can actually be cached, such as the program region, meaning a memory region actually coupled with the cache. A non-cacheable region usually refers, for example, to memory-mapped peripheral space, which conventionally cannot be cached. The criterion distinguishing cacheable from non-cacheable regions depends on the design: some embodiments may need this distinction, with the respective microprocessor/microcontroller supporting a cacheable/non-cacheable scheme, while other embodiments of a processor may cache any type of memory, whether an actual memory region or a memory-mapped region.
If enabled, the prefetch unit will always fetch the instructions following the cache line from which instructions are currently being issued. Using two bits allows, for example, four different settings: enable predictive prefetch for both cacheable and non-cacheable regions, enable it for non-cacheable regions only, enable it for cacheable regions only, and disable predictive prefetch. According to an embodiment, assume that a cache line comprises 16 bytes, or four double words. For instance, if the central processing unit requests instruction x1 at address 0x001000, the cache control logic compares all address tags with 0x00100X, where the bits X are ignored. If the controller produces a hit, the corresponding line is selected. The selected line comprises all instructions starting at address 0x001000. Hence, with each instruction being 32 bits long, the first instruction is issued to the central processing unit and the prefetch unit is triggered to prefetch the next line. To this end, the prefetch unit calculates the subsequent address tag as 0x001010 and starts loading the corresponding instructions into the next available cache line. While the central processing unit goes on to execute the instructions from addresses 0x001004, 0x001008 and 0x00100C, the prefetch unit fills the next available cache line with the instructions from addresses 0x001010, 0x001014, 0x001018 and 0x00101C. The prefetch unit finishes loading these subsequent instructions before the central processing unit has completed execution of the instructions of the currently selected cache line; the central processing unit is therefore not stalled.
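The address arithmetic in the walk-through above reduces to two operations, sketched here under the assumption of 16-byte lines: the tag is the request address with the in-line offset bits [3:0] discarded, and the predicted prefetch address is simply the next 16-byte aligned line.

```c
#include <stdint.h>
#include <assert.h>

/* Tag of the line containing `addr`: drop the in-line offset bits. */
uint32_t line_tag(uint32_t addr)
{
    return addr & ~0xFu;
}

/* Predicted address for the prefetch: the next sequential line. */
uint32_t next_line_addr(uint32_t addr)
{
    return line_tag(addr) + 0x10u;
}
```

For the example above, a request at 0x001004 belongs to the line tagged 0x001000, and the prefetch unit predicts 0x001010 as the next line to load.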
Referring back to Fig. 5, bits 0-2 define the number of wait states of the program flash memory. Thus, a variety of flash memories can be used with the microcontroller.
Each line of the cache memory as shown in Fig. 4 can be mapped to registers as shown in Fig. 6. A cache line can therefore be designed to be fully accessible through read and write operations and fully modifiable by the user. However, as stated above, certain bits of a cache line may be designed such that they cannot be changed by the user, or such that the respective line must be unlocked before the user can alter it. To this end, an index register 600 can be provided for selecting one of the 16 cache lines. Once a cache line has been selected through index register 600, it can be accessed through registers 610-660. A mask register may contain the mask MASK of the selected cache line, for example in bits 5-15. A second register for the tag may carry the address tag, for example in bits 4-23, and may also comprise the bits V, L, T and BT indicating the validity, lock state, type and branch trail function of the selected line. Finally, four 32-bit registers Word0, Word1, Word2 and Word3 can be provided for the contents of the selected line, i.e., the cached data or instructions. Further control registers can be implemented to control the general functions of the cache. Thus, each cache line can be accessed and manipulated by the user or by software, as explained in more detail below.
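The indexed access scheme can be modeled in software as an index register plus a register window onto the selected line. This is a minimal sketch: the structure, the register names, and the masking of the index to 0-15 are illustrative assumptions, not the device's actual register map.

```c
#include <stdint.h>
#include <assert.h>

/* Storage area of one cache line (Word0..Word3). */
typedef struct {
    uint32_t word[4];
} line_store_t;

/* Model of the indexed window: an index register selects one of 16
   lines, after which the WordN window maps onto that line's storage. */
typedef struct {
    unsigned     index;     /* index register, 0..15       */
    line_store_t line[16];  /* the 16 cache lines of Fig. 4 */
} cache_regs_t;

/* Read WordN of the currently selected line. */
uint32_t read_word(const cache_regs_t *c, unsigned n)
{
    return c->line[c->index & 0xFu].word[n & 0x3u];
}

/* Write WordN of the currently selected line. */
void write_word(cache_regs_t *c, unsigned n, uint32_t v)
{
    c->line[c->index & 0xFu].word[n & 0x3u] = v;
}
```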
According to the disclosed embodiments, the cache 100, 300 is designed to respond to an initial CPU instruction fetch by fetching a, for example, 128-bit aligned set of instruction words, called a line, from the PFM 160. The instruction actually requested can be anywhere within that line. The line is stored in the cache 130, 350 (a fill), and the instruction is returned to the CPU. This access can take multiple clock cycles and may stall the CPU; for instance, for a flash with a 40 ns access time, an access at 80 MHz causes 3 wait states. Once a line is cached, however, subsequent accesses to instruction addresses within that line occur with zero wait states.
If caching is enabled, this process continues for each instruction address that misses the cache. In this way, if a small loop is 128-bit aligned and has at most as many bytes as the cache 130, 350, the loop can execute from the cache with zero wait states. For a completely filled loop, the 4-line cache 130 as shown in Fig. 1 with 32-bit instructions executes one instruction per clock; in other words, the CPU executes all instructions stored in the cache 130 in 16 clocks. If only 128-bit wide fetches were supported, the same loop would take a given number of wait states per line for the fetch (for example, 3 wait states) plus a given number of clocks for execution (for example, 4 clocks), i.e., for example, 7 clocks per 4 instructions. In this example, the total loop time would be 28 clocks.
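The timing comparison above can be checked with a back-of-envelope calculation: a fully cached 16-instruction loop runs in 16 clocks (one instruction per clock), while refetching each 128-bit line costs the wait states plus the execution clocks per line. The helper names are illustrative.

```c
#include <assert.h>

/* Cached case: one instruction per clock. */
unsigned cached_loop_clocks(unsigned n_instr)
{
    return n_instr;
}

/* Uncached case: every line pays its fetch wait states plus one clock
   per instruction, e.g. 4 lines x (3 wait states + 4 clocks) = 28. */
unsigned uncached_loop_clocks(unsigned n_lines, unsigned wait_states,
                              unsigned instr_per_line)
{
    return n_lines * (wait_states + instr_per_line);
}
```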
The embodiment of Fig. 1 comprises a two-line data cache to exploit the spatial proximity of constants and table data that can be stored in the PFM 160. In other embodiments, however, this cache can be larger and connected to a data memory.
In addition, as explained above, the caches shown in Fig. 1 and Fig. 3 can also implement prefetching to avoid the given number of wait states required for fetching a 128-bit wide instruction stream. If prefetch is enabled, the cache 100, 300 performs a predicted-address fill using the least recently used line. The predicted address is simply the next sequential 128-bit aligned address, as explained in detail above in the example using actual addresses. Thus, during execution of the instructions in a cache line, if the predicted address is not already in the cache, the cache generates a flash memory access. When the CPU runs at a frequency requiring, for example, 3 wait state accesses to the flash memory system, the fetch of the predicted address completes within the cycles in which the CPU comes to need the predicted instructions. In this way, for linear code, CPU instruction fetches can run with zero wait states.
The branch trail feature examines instructions as they execute: when a branch-and-link or jump-and-link instruction executes in the CPU, the cache line is preserved for future use. This feature enhances the performance of function call returns by preserving any instructions that follow the branch or jump instruction in the line.
The program flash memory cache and prefetch module provides increased performance for applications executing out of the cacheable program flash memory region. The performance increase is achieved in three different ways.
The first way is the module's cache capability. The 4- or 16-line instruction cache 130, 350 as shown in Fig. 1 and Fig. 3 can supply one instruction per clock to a loop of up to 16/64 instructions for 32-bit opcodes and up to 32/128 instructions for 16-bit opcodes. Other configurations of cache size and organization are applicable. The embodiment shown in Fig. 1 also provides the ability to cache two lines of data, providing improved access to data items within a line. The embodiment shown in Fig. 3 provides a more flexible assignment of the data cache size by setting a split point or by individually assigning the respective cache type, as explained above.
Second, when prefetch is enabled, the module supplies one instruction per clock for linear code, thereby hiding the flash access time. Third, the module can allocate one or two instruction cache lines to branch trail instructions. When a jump-and-link or branch-and-link instruction occurs in the CPU, the last line is marked as a branch trail line and preserved for the return from the call.
Module Enable

According to an embodiment, the module can be enabled after reset by setting a bit, for example bit 31 ON/OFF in the CHECON register (see Fig. 5). Clearing this bit will:

Disable all cache, prefetch, and branch trail functionality and reset the cache state.

Set the module to bypass mode.

Allow reads and writes of the special function registers (SFRs).
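The enable/disable behavior above can be sketched as register manipulation in C. Only the ON/OFF bit at position 31 is taken from the text; modeling CHECON as a plain variable (rather than a memory-mapped SFR) is an assumption of this sketch.

```c
#include <stdint.h>
#include <stdbool.h>

/* Model of the CHECON control register. Only bit 31 (ON/OFF,
   per Fig. 5) is taken from the description; the register is a
   plain variable here, standing in for the memory-mapped SFR. */
#define CHECON_ON (1u << 31)

static uint32_t CHECON;

static void cache_enable(void)  { CHECON |= CHECON_ON; }
static void cache_disable(void) { CHECON &= ~CHECON_ON; }

/* With the ON/OFF bit cleared, the module is in bypass mode
   and SFR reads/writes are allowed, per the text. */
static bool in_bypass(void) { return (CHECON & CHECON_ON) == 0; }
```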
Operation in Power-Saving Modes

Sleep Mode

According to an embodiment, when the device enters sleep mode, the clock control block stops the clock to the cache module 100, 300.

Idle Mode

According to an embodiment, when the device enters idle mode, the cache and prefetch clock sources remain active but the CPU stops executing code. Any outstanding prefetch of module 100, 300 is completed before its clock is stopped via automatic clock gating.
Bypass Behavior

According to an embodiment, the default mode of operation is bypass. In bypass mode, the module accesses the PFM for each instruction, incurring the flash access time defined by the PFMWS bits (see Fig. 5) in register CHECON.
Cache Behavior

According to Fig. 1, the cache and prefetch module can implement a fully associative 4-line instruction cache. Depending on the design, more or fewer cache lines can be provided. The instruction/data storage area in a cache line, together with its associated control bits, can be designed to be cleared during a flash programming sequence or when the corresponding bit in the general control register CHECON is set to logic 0. Each line uses a register or bit field containing a flash address tag. Each line can consist of 128 bits (16 bytes) of instructions, regardless of instruction size. To simplify access, the cache and prefetch module according to Fig. 1 and Fig. 3 can request only 16-byte-aligned instruction data from the flash 160. According to an embodiment, if the address requested by the CPU is not aligned to a 16-byte boundary, the module aligns the address by discarding address bits [3:0].
When configured as a cache only, the module loads multiple instructions into a line on a miss and works like any cache. According to an embodiment, the module can use a simple least-recently-used (LRU) algorithm to select which line receives the new set of instructions. The cache controller uses the wait-state value in register CHECON to determine when it has a miss and how long it must wait for the flash access. On a hit, the cache returns data with zero wait states.
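A simple LRU victim-selection policy of the kind mentioned above can be modeled as follows. The age-counter representation is one common software sketch of LRU, not necessarily how the hardware tracks recency.

```c
#include <stdint.h>

#define NLINES 4  /* fully associative 4-line cache, as in Fig. 1 */

/* age[i] == 0 means line i is most recently used; the line with
   the largest age is the LRU victim on a miss. */
static uint8_t age[NLINES];

static int lru_victim(void) {
    int victim = 0;
    for (int i = 1; i < NLINES; i++)
        if (age[i] > age[victim]) victim = i;
    return victim;
}

/* On a hit (or after a fill), mark the line MRU and age every
   line that was more recent than it. */
static void touch(int line) {
    for (int i = 0; i < NLINES; i++)
        if (age[i] < age[line]) age[i]++;
    age[line] = 0;
}
```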
The instruction cache behaves differently depending on the prefetch and branch trail selections. If the code were 100% linear, cache-only mode would return instructions to the CPU with PFMWS-cycle timing, where PFMWS is the number of wait states.
Masking
Further flexibility in the use of the cache can be achieved with a mask bit field. Fig. 7 shows a possible logic circuit for implementing the masking function. The mask bit field 710 of a cache line contains, for example, 11 bits, which can be used to mask bits of the address tag 720. The 11 bits of mask bit field 710 mask the lower bits 0-10 of address tag 720. When comparator 780 compares address tag 720 with the requested address 790, any bit set to "1" in mask bit field 710 causes the corresponding bit of the address tag to be ignored. If the instruction/data storage area comprises 16 bytes, the address tag does not include the lower 4 bits of the actual address. Hence, with no bits masked, the comparator compares bits 0-19 of the address tag with bits 4-23 of the actual address in a system using 24 address bits. By means of the mask 730, however, comparator 780 can be forced to compare only a subset of the address tag 720 with the corresponding subset of the actual address 790. Multiple addresses can therefore produce a hit. This functionality can be used particularly advantageously with certain interrupts or trap instructions that cause a branch to a predefined address in the instruction memory. For instance, an interrupt can cause a branch to the memory address containing the interrupt service routine, defined by an interrupt base address plus an offset determined by the priority of the interrupt. For instance, a priority 0 interrupt branches to address 0x000100, a priority 1 interrupt to address 0x000110, a priority 2 interrupt to address 0x000120, and so on. Trap instructions can be organized similarly and can produce a similar branching pattern. Assuming the interrupt service routines are identical for at least a predefined number of instructions, the masking function lets these addresses branch into the same cache line containing the start of the service routine. For instance, if the first four 32-bit instructions of the interrupt service routines for priority levels 0-3 are identical, the mask bit field of the cache line containing the instructions starting at address 0x000100 can be set to "00000000011" (ignoring the two lowest tag bits), which causes hits for all addresses from 0x000100 through 0x00013F. Thus, not only does an interrupt with priority 0 produce a hit; interrupts with priorities 1, 2, and 3 produce hits as well. All of them jump to the same instruction sequence loaded in the cache, and no flash access penalty occurs.
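The masked tag comparison can be sketched in C as follows. This is a minimal software model, not the comparator of Fig. 7; the 20-bit tag width, the helper names, and the "1 = ignore" mask convention follow the description above, and the mask value 0x3 corresponds to the four-priority interrupt example.

```c
#include <stdint.h>
#include <stdbool.h>

/* Tag = address bits [23:4] in a 24-bit address system; the low
   4 bits select a byte within the 16-byte line and carry no tag. */
static uint32_t tag_of(uint32_t addr) { return (addr >> 4) & 0xFFFFFu; }

/* Compare a line's stored tag against a requested address.
   Any of the 11 low mask bits set to 1 makes the corresponding
   tag bit a "don't care", per the convention in the text. */
static bool tag_hit(uint32_t line_tag, uint32_t mask11, uint32_t req_addr) {
    uint32_t ignore = mask11 & 0x7FFu;  /* 11-bit mask field */
    return ((line_tag ^ tag_of(req_addr)) & ~ignore) == 0;
}
```

With the line tagged for 0x000100 (tag 0x10) and mask 0x3, the interrupt vectors for priorities 0-3 all hit the same line, while 0x000140 misses.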
Prefetch Behavior

The bit field PREFEN of control register CHECON, or a corresponding single bit (see Fig. 5), can be used to enable the prefetch function. When configured for prefetch, the module 100, 300 predicts the next line address and returns it into the LRU line of the cache 130, 350. The prefetch function begins prediction based on the first CPU instruction fetch. Once the first line is placed in the cache 130, 350, the module simply increments the address to the next 16-byte-aligned address and starts the flash access. The flash memory 160 returns the next set of instructions at or before the time all instructions from the previous line can have been executed.
If at any time during a predicted flash access a new CPU address does not match the predicted address, the flash access is changed to the correct address. This behavior does not cause CPU accesses to take any longer than they would without prediction.
If the predicted flash access completes, the instructions are placed in the LRU line along with their address tag. The LRU state is not updated until a CPU address hits a line. If the hit is on the just-prefetched line, that line is marked most recently used and the other lines are updated accordingly. If it is on another line in the cache, the algorithm adjusts accordingly, but the just-prefetched line remains the LRU line. If the address misses the cache 130, 350, the access goes to flash and the returned instructions are placed in the LRU line (which is the most recently updated, but as yet unused, prefetched line).
According to an embodiment, as described above, data prefetch can be selectively turned on or off. According to another embodiment, if a dedicated bit in a control register (e.g., CHECON) is set to logic 1, a data access arriving midway through an instruction prefetch can cause that prefetch to abort. If the bit is set to logic 0, the data access completes only after the instruction prefetch completes.
Branch Trail Behavior

The cache can be partitioned, for example by the bit field BTSZ (see Fig. 5) in control register CHECON, so that one or more lines of the instruction cache are used for branch trail instructions. When the CPU requests a new address, as computed from a branch-and-link or jump-and-link instruction, the branch trail line is the most recently used cache line. According to an embodiment, when the module 100, 300 marks the MRU cache line as a branch trail line, it can also deallocate the LRU branch trail line, returning it for use as a general cache buffer.
As explained above, if the last access was from the last instruction in the MRU line (the highest address), the line is not marked as a branch trail line, and the module does not deallocate any existing line from the branch trail section of the cache.
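The rule above can be captured as a small predicate: a line is worth preserving as a branch trail line only if instructions after the branch remain in it. The helper below is a hypothetical sketch of that rule, not a description of the hardware.

```c
#include <stdbool.h>

/* Decide whether the MRU line should become a branch trail line
   when a branch-and-link / jump-and-link issues. branch_slot is
   the 0-based position of the branch within the line; if it is
   the last slot, nothing after it would be preserved, so the
   line is not marked (and no trail line is deallocated). */
static bool mark_branch_trail(int branch_slot, int slots_per_line) {
    return branch_slot != slots_per_line - 1;
}
```

For a 16-byte line holding four 32-bit instructions, a branch in slot 2 marks the line, while a branch in slot 3 (the last instruction) does not.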
Preload Behavior

Application code can direct the module 100, 300 to preload and lock one cache line with instructions from the flash memory 160. The preload function uses the LRU line from among the lines marked as cache (i.e., not branch trail).
According to an embodiment, the address tag bit field in a cache line can be accessed directly, and a user can write any value into this bit field. Such a write forces a preload of the cache line from the addressed location in flash memory. Thus, preload works by writing an address into the address tag bit field of a cache line, which preloads the respective line from memory. According to an embodiment, this action invalidates the line before flash is accessed to retrieve the instructions. After preload, the line can be accessed by the central processing unit to execute the respective instructions.
According to an embodiment, this functionality can be used to implement very flexible debug capabilities without changing the code in program memory. Once the line containing an instruction needing a breakpoint is identified during a debug sequence, that line can be tagged for preload with the particular address. The contents of the cache line can then be modified to include debug instructions. For instance, system software can automatically replace an instruction in the cache line to create a breakpoint or to execute a subroutine of any other type. Once the respective code has executed, the instruction can be replaced with the original instruction, and the stack can be altered to return to the same address from which the debug routine was executed. Preload functionality therefore allows code in a system to be changed very flexibly.
According to another embodiment, if a cache line is locked by a lock bit, or potentially locked by a branch trail bit, write accesses to that cache line can be inhibited. Only unlocked cache lines can then be written. If this functionality is implemented, the user must first unlock a cache line before writing a new address tag into it, in order to force the cache controller to load the respective instructions or data from memory. The same applies to write accesses to the instruction/data storage area.
Particularly in combination with the masking function explained above, the ability to actively load the cache with specified instructions can be very useful. For instance, if many interrupt service routines begin with the same instruction sequence, this sequence can be forced into the cache by writing the respective service routine address into the address tag, preloading the respective cache line with the instructions of the respective interrupt service routine. By setting the corresponding mask as explained above and locking the respective cache line, the cache can be preconfigured so that the program reacts to certain interrupts without any flash access penalty. Certain routines can thus always be serviced from the cache.
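The preload-and-pin sequence for an interrupt service routine might look as follows. The struct layout and field names are hypothetical; only the sequence (unlock, write the tag to trigger the fill, set the mask, lock) follows the description above.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-line register view of a cache line, following
   the text: a writable address tag, an 11-bit mask, and a lock bit. */
typedef struct {
    uint32_t tag;     /* writing this forces a preload from flash */
    uint32_t mask11;  /* "1" bits make tag bits don't-cares       */
    bool     locked;
    bool     valid;
} cache_line_t;

/* Preload an ISR's first instructions and pin them: unlock the
   line, write the tag (which triggers the fill), set the mask so
   several interrupt vectors hit the same line, then lock the line
   against LRU eviction and overwrite. */
static void preload_isr_line(cache_line_t *ln, uint32_t isr_addr,
                             uint32_t mask11) {
    ln->locked = false;                    /* must be unlocked to accept a tag */
    ln->tag    = (isr_addr >> 4) & 0xFFFFFu;
    ln->valid  = true;                     /* fill from flash assumed complete */
    ln->mask11 = mask11 & 0x7FFu;
    ln->locked = true;
}
```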
Reset and Initialization

After reset, all cache lines are marked invalid and the cache features are disabled. For instance, the wait states in register CHECON are reset to their maximum value, allowing bypass accesses after reset.

Whenever a flash programming cycle starts, the module 100, 300 forces the cache to its reset values. Any access by the CPU is stalled until the programming cycle finishes. Once the programming cycle completes, the pending CPU access proceeds by bypassing to flash. The returned instruction completes with the wait states defined in the configuration register.
Flash Prefetch Buffer (FPB)

According to an embodiment, the flash prefetch buffer (see Fig. 3) can be a simple buffer, for example a latch or register 365. In one embodiment, it can be designed to prefetch a total of up to 8 CPU core instructions when operating in 16-bit instruction mode, or 4 CPU core instructions when operating in 32-bit instruction mode, allowing prefetch to utilize the four panels of the x32-bit flash memory. The FPB implemented in cache controller 120 prefetches instructions to guarantee that instructions fed to the core in a linear fashion do not stall the core. According to an embodiment, the FPB can contain two buffers of 16 bytes each. Each buffer tracks instruction address fetches. If a branch outside the current buffer's instruction boundary occurs, the alternate buffer is used (causing an initial stall, but subsequent linear code is then fetched from the cache). Each instruction fetch forces the FPB to capture the next 16 linearly probable bytes to fill the buffer.
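The double-buffer behavior described above can be sketched as a small state machine. This is a software model under stated assumptions: the struct and function names are invented, and the refill is modeled as instantaneous rather than as a stall.

```c
#include <stdint.h>
#include <stdbool.h>

#define BUF_BYTES 16u

/* Two 16-byte prefetch buffers; on a fetch outside the current
   buffer's boundary the alternate buffer is (re)filled. */
typedef struct {
    uint32_t base[2];   /* line-aligned base address of each buffer */
    bool     valid[2];
    int      cur;       /* buffer serving the current stream */
} fpb_t;

/* Returns true on an FPB hit; on a miss, switches to the alternate
   buffer and refills it at the fetch address (the initial stall). */
static bool fpb_fetch(fpb_t *f, uint32_t addr) {
    uint32_t base = addr & ~(BUF_BYTES - 1u);
    for (int i = 0; i < 2; i++)
        if (f->valid[i] && f->base[i] == base) { f->cur = i; return true; }
    f->cur = 1 - f->cur;           /* branch outside boundary: swap */
    f->base[f->cur]  = base;
    f->valid[f->cur] = true;       /* refill modeled as instantaneous */
    return false;
}
```

After a branch to a new region, the old buffer remains valid, so a quick return to the previous stream still hits.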
According to another embodiment, a programmable forced data cache operation can optionally be implemented via the prefetch buffer. Once the cache is filled with one or more instruction lines, the instructions can execute sequentially without further instruction lines being fetched for a certain period of time. This is particularly true because the execution time of the instructions in a single cache line can be twice, or even many times, longer than the time needed to load a cache line into the cache. Furthermore, if one or more cache lines contain a loop being executed, no other instructions may need to be cached for a relatively long time during its lifetime. According to an embodiment, this time can be used to cache data, for example a relatively large amount of data to be used in a table, etc. The cache can be programmed by a register (for example, bit 23 DATAPREFEN in register CHECON; see Fig. 5) to perform an additional data caching function whenever the cache bandwidth is not being used to fetch instructions. This can be useful for programs that need data tables loaded into the cache. Data fetches can occur after the initial first fill while still allowing the core to continue using the prefetched instructions from the cache line. According to an embodiment, when the function bit DATAPREFEN is set, a data line can be fetched automatically after each instruction fetch. Alternatively, according to another embodiment, data caching can be forced for as long as the respective DATAPREFEN bit is set; thus, for instance, forced data caching can be started and stopped by setting the respective bit. In yet another embodiment, forced data caching can execute automatically whenever the cache pauses loading instructions for a period of time. If multiple control bits are provided, a programmable combination of different data cache modes can be implemented.
Fig. 8 shows a simplified flash memory request using the cache and prefetch functions according to an embodiment. The flash memory request starts at step 800. First, in step 805, it is determined whether the request is cacheable. If the request is cacheable, step 810 determines whether the supplied address produced a cache hit. If so, then according to an embodiment the process can branch into two parallel processes; other embodiments may execute these processes sequentially. The first branch starts with step 812, where it is determined whether the request calls a subroutine. If not, the first parallel process ends. If so, step 815 determines whether the branch trail bit is set in the respective cache line. If not, the first parallel process ends. If so, step 820 determines whether the call is the last instruction in the cache line. If so, the first parallel process ends; if not, the respective cache line is locked in step 830. The second parallel process starts in step 835, where the instruction is returned from the cache and the least-recently-used algorithm is executed to update the cache line states. If no cache hit was produced in step 810, or if the request is not cacheable, step 840 determines whether the prefetch buffer produces a hit. If the prefetch buffer contains the requested instruction, the requested instruction is returned in step 845. Otherwise, a flash access is performed in step 850, which stalls the CPU. In step 855, following step 850, the flash request can fill a cache line if a cache line is available for the cache function. The routine ends with step 860.
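The decision flow of Fig. 8 can be condensed into a single C function. All field and type names here are hypothetical; only the branching structure (steps 805-855) follows the figure description.

```c
#include <stdbool.h>

/* Inputs describing one flash request, mirroring the decision
   points of Fig. 8 (field names are illustrative only). */
typedef struct {
    bool cacheable;        /* step 805 */
    bool cache_hit;        /* step 810 */
    bool calls_subroutine; /* step 812 */
    bool branch_trail_set; /* step 815 */
    bool call_is_last;     /* step 820 */
    bool prefetch_hit;     /* step 840 */
} request_t;

typedef enum { SRC_CACHE, SRC_PREFETCH, SRC_FLASH } source_t;

/* Returns where the instruction is served from; *lock_line reports
   whether the branch trail path (steps 812-830) locks the line. */
static source_t serve(const request_t *r, bool *lock_line) {
    *lock_line = false;
    if (r->cacheable && r->cache_hit) {
        if (r->calls_subroutine && r->branch_trail_set && !r->call_is_last)
            *lock_line = true;               /* step 830 */
        return SRC_CACHE;                    /* step 835 */
    }
    if (r->prefetch_hit)
        return SRC_PREFETCH;                 /* step 845 */
    return SRC_FLASH;                        /* steps 850-855: CPU stalls */
}
```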
Although embodiments of the invention have been described, depicted, and defined with reference to example embodiments, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The subject matter disclosed is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those skilled in the relevant art having the benefit of this disclosure. The depicted and described embodiments are examples only and are not exhaustive of the scope of the invention.

Claims (24)

1. A cache module for a central processing unit, comprising:
a cache control unit coupled with a memory; and
a cache memory coupled with said control unit and said memory, wherein said cache memory comprises a plurality of cache lines, each of said plurality of cache lines comprising a storage area for storing instructions to be issued sequentially and associated control bits, wherein at least one associated control bit is a branch trail control bit which, when set, provides an automatic locking function for the associated one of said plurality of cache lines, whereby when said branch trail control bit is set and a predefined branch instruction fetched from the associated cache line is issued, the associated cache line is locked.
2. The cache module according to claim 1, wherein the associated cache line further comprises a lock control bit for manually or automatically locking the associated cache line.
3. The cache module according to claim 1, wherein each of said plurality of cache lines further comprises an associated control bit indicating the validity of the associated one of said plurality of cache lines.
4. The cache module according to claim 1, wherein each of said plurality of cache lines further comprises an associated control bit indicating whether the cache line of said plurality of cache lines is an instruction cache line or a data cache line.
5. The cache module according to claim 1, wherein each of said plurality of cache lines further comprises an associated address tag bit field to be compared with an address request, an associated mask bit field for storing a mask, and a masking unit for masking bits of the associated address tag of the associated cache line according to said mask bit field.
6. The cache module according to claim 1, further comprising a prefetch unit coupled with said memory and said cache memory, wherein said prefetch unit is designed to load instructions from said memory into another cache line, the instructions from said memory sequentially following an instruction currently issued from the associated cache line.
7. The cache module according to claim 6, wherein said prefetch unit is controllable to be enabled or disabled.
8. The cache module according to claim 1, wherein a least-recently-used algorithm is used to determine which cache line will be overwritten.
9. A method of operating a cache memory for a central processing unit, comprising the steps of:
storing a plurality of sequential instructions in a cache line of said cache memory, wherein said cache line comprises a plurality of control bits for controlling functions of each cache line;
setting a branch trail function for said cache line by setting a branch trail control bit among said plurality of control bits in said cache line; and
executing an instruction fetched from said cache line, wherein when said branch trail control bit is set and said instruction is not the last instruction in said cache line, said cache line is automatically locked upon a call to a subroutine.
10. The method according to claim 9, wherein if the instruction fetched from said cache line which calls the subroutine is the last instruction in said cache line, said step of automatically locking said cache line is not performed.
11. The method according to claim 9, further comprising the step of resetting said locking of said cache line upon returning from said subroutine.
12. The method according to claim 9, wherein said subroutine is called upon execution of a predefined branch instruction contained in said cache line.
13. The method according to claim 9, wherein said subroutine is called by an interrupt.
14. A method of operating a cache memory for a central processing unit, comprising the steps of:
providing a plurality of cache lines, each of said plurality of cache lines having a storage area for storing instructions or data and a plurality of control bits for controlling functions of each cache line;
storing a plurality of sequential instructions in a cache line of said plurality of cache lines; and
after setting a branch trail control bit among said plurality of control bits in said cache line, automatically locking said cache line when a subroutine is called during execution of the instructions stored in said cache line.
15. The method according to claim 14, wherein said branch trail control bit, when set, provides an automatic locking function for said cache line in case a predefined branch instruction has been issued.
16. The method according to claim 14, further comprising the step of resetting said branch trail control bit for said cache line upon returning from said subroutine.
17. The method according to claim 14, further comprising the step of providing a manual locking function for a cache line of said plurality of cache lines by means of a lock control bit among said plurality of control bits.
18. The method according to claim 14, further comprising the step of providing a type function, by means of a type control bit among said plurality of control bits, indicating whether a cache line of said plurality of cache lines is for storing instructions or data.
19. The method according to claim 14, further comprising the step of providing a validity function, by means of a validity control bit among said plurality of control bits, indicating the validity of the contents stored in a cache line of said plurality of cache lines.
20. The method according to claim 14, further comprising the step of providing, for each cache line, a plurality of mask bits for masking bits of an associated address tag field.
21. The method according to claim 14, further comprising the step of:
upon execution of an instruction stored in said cache line of said plurality of cache lines, initiating a prefetch of instructions stored in a memory which sequentially follow the instructions stored in said cache line of said plurality of cache lines.
22. The method according to claim 21, wherein said prefetched instructions are stored in a cache line determined by a least-recently-used algorithm.
23. The method according to claim 14, further comprising the step of storing an instruction stream containing an instruction loop in a cache line of said plurality of cache lines and locking said cache line against being overwritten.
24. The method according to claim 14, further comprising the step of storing sequential instructions in a plurality of cache lines and locking all cache lines containing the instructions forming an instruction loop.
CN200780046003.7A 2006-12-15 2007-12-14 Configurable cache for a microprocessor Active CN101558393B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US87018806P 2006-12-15 2006-12-15
US60/870,188 2006-12-15
US87062206P 2006-12-19 2006-12-19
US60/870,622 2006-12-19
US11/928,322 US7966457B2 (en) 2006-12-15 2007-10-30 Configurable cache for a microprocessor
US11/928,322 2007-10-30
PCT/US2007/087600 WO2008076896A2 (en) 2006-12-15 2007-12-14 Configurable cache for a microprocessor

Publications (2)

Publication Number Publication Date
CN101558393A CN101558393A (en) 2009-10-14
CN101558393B true CN101558393B (en) 2014-09-24

Family

ID=41175633

Family Applications (3)

Application Number Title Priority Date Filing Date
CN2007800461129A Active CN101558391B (en) 2006-12-15 2007-12-12 Configurable cache for a microprocessor
CN200780046103.XA Active CN101558390B (en) 2006-12-15 2007-12-12 Configurable cache for a microprocessor
CN200780046003.7A Active CN101558393B (en) 2006-12-15 2007-12-14 Configurable cache for a microprocessor

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN2007800461129A Active CN101558391B (en) 2006-12-15 2007-12-12 Configurable cache for a microprocessor
CN200780046103.XA Active CN101558390B (en) 2006-12-15 2007-12-12 Configurable cache for a microprocessor

Country Status (1)

Country Link
CN (3) CN101558391B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011209904A (en) * 2010-03-29 2011-10-20 Sony Corp Instruction fetch apparatus and processor
CN102567220A (en) * 2010-12-10 2012-07-11 中兴通讯股份有限公司 Cache access control method and Cache access control device
KR102202575B1 (en) 2013-12-31 2021-01-13 삼성전자주식회사 Memory management method and apparatus
JP5863855B2 (en) * 2014-02-26 2016-02-17 ファナック株式会社 Programmable controller having instruction cache for processing branch instructions at high speed
JP6250447B2 (en) * 2014-03-20 2017-12-20 株式会社メガチップス Semiconductor device and instruction read control method
US9460016B2 (en) * 2014-06-16 2016-10-04 Analog Devices Global Hamilton Cache way prediction
DE102016211386A1 (en) * 2016-06-14 2017-12-14 Robert Bosch Gmbh Method for operating a computing unit
US10866897B2 (en) * 2016-09-26 2020-12-15 Samsung Electronics Co., Ltd. Byte-addressable flash-based memory module with prefetch mode that is adjusted based on feedback from prefetch accuracy that is calculated by comparing first decoded address and second decoded address, where the first decoded address is sent to memory controller, and the second decoded address is sent to prefetch buffer
CN111124955B (en) * 2018-10-31 2023-09-08 珠海格力电器股份有限公司 Cache control method and equipment and computer storage medium
US11360704B2 (en) 2018-12-21 2022-06-14 Micron Technology, Inc. Multiplexed signal development in a memory device
CN112527390B (en) * 2019-08-28 2024-03-12 武汉杰开科技有限公司 Data acquisition method, microprocessor and device with storage function

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0612013A1 (en) * 1993-01-21 1994-08-24 Advanced Micro Devices, Inc. Combination prefetch buffer and instruction cache cross references to related applications
CN100517274C (en) * 2004-03-24 2009-07-22 松下电器产业株式会社 Cache memory and control method thereof
US7386679B2 (en) * 2004-04-15 2008-06-10 International Business Machines Corporation System, method and storage medium for memory management

Also Published As

Publication number Publication date
CN101558393A (en) 2009-10-14
CN101558391B (en) 2013-10-16
CN101558390A (en) 2009-10-14
CN101558391A (en) 2009-10-14
CN101558390B (en) 2014-06-18


Legal Events

Code Description
C06 / PB01 Publication
C10 / SE01 Entry into force of request for substantive examination
C14 / GR01 Grant of patent or utility model