CN102214146B - Step size adaptive Cache pre-fetching method and system - Google Patents
- Publication number
- CN102214146B (application CN201110213360A)
- Authority
- CN
- China
- Prior art keywords
- prefetching
- address
- cache
- index value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline or look ahead
- G06F9/3824—Operand accessing
- G06F9/383—Operand prefetching
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a stride-adaptive Cache prefetching method and system. In the method, a prefetch table is set up; an index value is computed from the miss address; two predicted addresses are computed and compared with the index value; if either predicted address equals the index value, the prefetch table is considered hit, otherwise a new entry is allocated for the miss address. If the prefetch table is hit and holds prefetched data, the prefetched data is returned to the Cache; the prefetch table is then updated. If the prefetch table is hit, whether the hit entry satisfies the prefetch condition is determined, and if so a prefetch operation is triggered. The system comprises a prefetch table, an address conversion component for deriving the index value, an adder for computing the two predicted addresses, a comparator for comparing the two predicted addresses with the index value, and an update control logic component for deciding whether to prefetch and for updating the prefetch table. The disclosed method and system offer strong portability, high prefetch accuracy, and other advantages.
Description
Technical field
The present invention relates to the field of microprocessors, and in particular to a Cache prefetching method and system.
Background technology
The widespread use of Cache (cache memory) technology has largely overcome the memory-wall limit on microprocessor performance, and advances in very-large-scale integration have made large on-chip Caches possible, greatly reducing the Cache miss rate. When a Cache miss does occur, however, a large Cache means more data must be read from external memory, which increases the miss penalty. A prefetching Cache predicts the data address of the next Cache miss and reads the data into the Cache during idle memory-access cycles, so that the next real access to that data hits the Cache, hiding the miss penalty.
Most existing Cache prefetch mechanisms adopt a prefetch-table structure similar to that shown in Figure 1, where Ins addr is the memory-access instruction address, Prefetch addr is the prefetch address, stride is the prefetch stride, and valid indicates whether the entry is valid. This structure has two shortcomings. First, it prefetches for only one type of access-address sequence, lacks flexibility, and cannot adapt its predicted-address computation to the miss-address sequence type, so its function is limited. Second, it uses the instruction address as the index for querying the prefetch table, which requires adding a dedicated instruction-address datapath from the CPU to the Cache and complicates the design.
Summary of the invention
The technical problem to be solved by this invention is: in view of the problems of the prior art, to provide a stride-adaptive Cache prefetching method and system that use the Cache miss address to compute the index for querying the prefetch table and can flexibly and dynamically adjust the predicted-address computation, offering strong portability and high prefetch accuracy.
To solve the above technical problem, the present invention adopts the following technical scheme:
A stride-adaptive Cache prefetching method, comprising the following steps:
(1) Set up the prefetch table: a prefetch table is set up, and miss addresses are stored in the table, recorded in the PA field.
(2) Query the prefetch table: when a Cache miss occurs, first query the prefetch table, using the high bits of the miss address as the index value; from the contents of the entries stored in the prefetch table, compute two predicted addresses by addition and compare each with the index value; if either predicted address equals the index value, the prefetch table is judged hit, go to step (4); otherwise the prefetch table is judged missed, go to step (3).
(3) Allocate a new entry: if the prefetch table is missed, allocate a new entry in the table for this miss address; go to step (5).
(4) Return data: if the prefetch table is hit and the table holds prefetched data, return the prefetched data to the Cache; otherwise return no data. If no prefetched data is returned, continue the memory access to fetch the required data; go to step (5).
(5) Update the prefetch table: update the information in the prefetch table.
(6) Prefetch: if the prefetch table is hit, determine whether the hit entry satisfies the prefetch condition; if it does, trigger a prefetch operation.
As a further refinement of the prefetching method of the present invention:
The index value is the memory-access address of the last Cache miss with its low n bits masked off, where n = log2(Cache line size).
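As a sketch (in Python rather than hardware, with an illustrative helper name not taken from the patent), the index computation amounts to masking off the low log2(line size) bits of the miss address:

```python
def index_value(miss_addr: int, line_size: int) -> int:
    """Clear the low n bits of a miss address, where n = log2(Cache line size)."""
    n = line_size.bit_length() - 1  # log2 of a power-of-two line size
    return miss_addr & ~((1 << n) - 1)

# With 32-byte Cache lines (n = 5), miss address 0x1234 yields index 0x1220.
```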
The prefetch table also records an S field, the stride of the memory-access address sequence, defined as the difference between the current miss address and the previous miss address; and a DS field, the linear change in that stride.
The two predicted addresses in step (2) are: the predicted address PDA_c of the fixed-stride access type and the predicted address PDA_l of the linearly-varying-stride access type, where PDA_c = PA + S and PDA_l = PA + S + DS.
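In code terms (the function name is illustrative, not from the patent), the two predictions are simple additions over the entry's PA, S, and DS fields:

```python
def predicted_addresses(pa: int, s: int, ds: int) -> tuple:
    """Return (PDA_c, PDA_l): fixed-stride and linearly-varying-stride predictions."""
    pda_c = pa + s        # fixed stride: the next miss repeats the last stride
    pda_l = pa + s + ds   # linear stride: the stride itself grows by DS
    return pda_c, pda_l
```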
The prefetch table also stores the predicted memory-access address-sequence type, recorded in the PT field, where PT = 0 indicates the entry's predicted type is fixed-stride and PT = 1 indicates a linearly varying stride.
A prefetch-table hit is either a true hit or a false hit: in a true hit, the hit entry's predicted type matches the current address sequence; in a false hit, the predicted type does not match the current address-sequence type.
The prefetch table also records a C field, a saturating counter holding the entry's confidence value, used to decide when to prefetch; a V field indicating whether the entry is valid (V = 1 means valid, otherwise invalid); and a PR field indicating whether the entry's D field holds prefetched data (PR = 1 means the prefetched data in the D field is valid, otherwise invalid).
The prefetch table is also governed by a confidence scheme <I, D, T, ST> (whose parameters are preferably chosen by simulation before hardware implementation): on a prefetch-table hit, C = C + I; on a prefetch-table miss, C = C - D; T is the confidence threshold that triggers prefetching; ST is the saturation value of the counter C.
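The counter behaviour can be modelled as follows (a software sketch under the assumption that C saturates at ST and never falls below 0; the default parameters follow the <1, 2, 2, 3> scheme used later in the application examples):

```python
def update_confidence(c: int, hit: bool, I: int = 1, D: int = 2, ST: int = 3) -> int:
    """Saturating counter: add I on a prefetch-table hit, subtract D on a miss."""
    return min(c + I, ST) if hit else max(c - D, 0)

# A prefetch is triggered once the counter reaches the threshold T (here T = 2).
```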
Step (5) updates the prefetch table in one of two ways:
5.1) Updates that do not switch the predicted type:
a. On a true hit, increase the hit entry's C by I and update it, in order: DS = index value - PA - S, S = index value - PA, PA = index value; at the same time decrease the C of the other entries by D.
b. On a prefetch-table miss, allocate a new entry for the current miss address, decrease the C of all entries by D, and update each entry whose C is below the confidence threshold T, in order: DS = index value - PA - S, S = index value - PA, PA = index value.
c. Set PT and PR to 0, and clear the V bit of missed entries whose C = 0.
5.2) Updates that switch the predicted type:
d. On a false hit, invert the hit entry's PT, set its C to I and its PR to 0, and update it, in order: DS = index value - PA - S, S = index value - PA, PA = index value.
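A minimal model of the per-entry update rules above (the dictionary-based entry is an illustrative stand-in for the hardware fields; note the order — DS must be computed from the old PA and S before S and PA are overwritten):

```python
def update_hit_entry(entry: dict, idx: int, true_hit: bool, I: int = 1, ST: int = 3) -> dict:
    """Apply rule 5.1a (true hit) or 5.2d (false hit) to a hit prefetch-table entry."""
    # DS uses the old PA and S, S uses the old PA, then PA takes the index value.
    entry["DS"] = idx - entry["PA"] - entry["S"]
    entry["S"] = idx - entry["PA"]
    entry["PA"] = idx
    if true_hit:
        entry["C"] = min(entry["C"] + I, ST)
    else:                      # false hit: switch predicted type, reset confidence
        entry["PT"] ^= 1
        entry["C"] = I
        entry["PR"] = 0
    return entry
```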
The present invention also provides a stride-adaptive Cache prefetching system for implementing the above method, the system comprising:
a prefetch table, for storing the information required by Cache prefetch operations, the information including the Cache miss address;
an address conversion component, for converting the Cache miss address into the index value used to query the prefetch table;
an adder, for computing the two predicted addresses from the information in the prefetch table;
a comparator, for comparing the two predicted addresses with the index value and sending the comparison result to the update control logic component;
an update control logic component, for deciding, from the comparator's result, whether to perform a prefetch operation, and for updating the prefetch table.
As a further refinement of the prefetching system of the present invention:
The system also comprises an AND unit for judging whether prefetched data should be returned; the AND unit is connected to the comparator and to the prefetch table.
Compared with the prior art, the advantages of the present invention are:
1. The stride-adaptive Cache prefetching method of the present invention uses the miss address as the index for querying the prefetch table, avoiding an extra datapath; the prefetch structure can easily be added on top of an existing Cache structure to implement a prefetching Cache, giving strong portability.
2. The method can dynamically recognize the memory-access address-sequence type and adaptively adjust the predicted-address computation, prefetching for a wider variety of address sequences. At the same time, by introducing the confidence-scheme mechanism and choosing suitable parameters to trigger prefetches at the right time, it achieves a higher prefetch success rate and hides more of the miss penalty.
3. The stride-adaptive Cache prefetching system of the present invention implements the above method by adding only a limited number of components. It is simple in structure, has low hardware overhead and strong portability, and can effectively raise the performance of the Cache system.
Description of drawings
Fig. 1 is a schematic diagram of the prefetch-table structure of a typical existing prefetch mechanism;
Fig. 2 is a schematic flow diagram of the prefetching method of the present invention;
Fig. 3 is a schematic diagram of the basic structure of the prefetching system of the present invention;
Fig. 4 is a schematic flow diagram of querying the prefetch table in a specific embodiment of the invention, where Fig. 4(a) shows the true-hit flow and Fig. 4(b) shows the false-hit flow;
Fig. 5 is a schematic flow diagram of updating the prefetch table in a specific embodiment of the invention.
Reference numerals:
1, address conversion component; 2, adder; 3, comparator; 4, update control logic component; 5, AND unit.
Embodiment
The present invention is described in further detail below with reference to the drawings and a specific embodiment.
As shown in Figure 3, the stride-adaptive Cache prefetching system of the present invention comprises:
a prefetch table, for storing the information required by Cache prefetch operations;
an address conversion component 1, for converting the Cache miss address into the index value (IDX) used to query the prefetch table;
an adder 2, for computing the two predicted addresses from the information in the prefetch table;
a comparator 3, for comparing the two predicted addresses with the index value and sending the result to the update control logic component 4;
an update control logic component 4, for deciding, from comparator 3's result, whether to perform a prefetch operation, and for updating the prefetch table.
In this embodiment, the prefetching system also comprises an AND unit 5 for judging whether prefetched data should be returned; AND unit 5 is connected to comparator 3 and to the prefetch table.
Each prefetch-table entry contains the following information:
a PA field, the memory-access address of the last Cache miss, i.e. the miss address;
an S field, the stride of the memory-access address sequence, defined as the difference between the current miss address and the previous miss address;
a DS field, the linear change in the stride of the address sequence;
a PT field, storing the predicted address-sequence type, where PT = 0 indicates the entry's predicted type is fixed-stride and PT = 1 indicates a linearly varying stride;
a C field, a saturating counter holding the entry's confidence value, used to decide when to prefetch;
a V field indicating whether the entry is valid (V = 1 means valid, otherwise invalid);
a PR field indicating whether the entry's D field holds prefetched data (PR = 1 means the prefetched data in the D field is valid, otherwise invalid).
The prefetch table is also governed by a confidence scheme <I, D, T, ST>, whose parameters are preferably chosen by simulation before hardware implementation. In this scheme, on a prefetch-table hit, C = C + I; on a prefetch-table miss, C = C - D; T is the confidence threshold that triggers prefetching; ST is the saturation value of the counter C.
The stride-adaptive Cache prefetching method of the present invention can be realized with the above prefetching system; as shown in Figure 2, the method proceeds through the following steps:
1. Set up the prefetch table:
A prefetch table is set up with the fields PA, S, DS, PT, C, V, and PR, used to store the information described above.
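As a sketch, one prefetch-table entry could be modelled in software as follows (a simulation aid, not the hardware layout; the class name and defaults are assumptions):

```python
from dataclasses import dataclass

@dataclass
class PrefetchEntry:
    PA: int = 0   # miss address of the last Cache miss
    S: int = 0    # stride: current miss address minus previous miss address
    DS: int = 0   # linear change of the stride
    PT: int = 0   # predicted type: 0 fixed stride, 1 linearly varying stride
    C: int = 0    # saturating confidence counter
    V: int = 0    # entry-valid bit
    PR: int = 0   # prefetched-data-valid bit for the D field

table = [PrefetchEntry() for _ in range(4)]  # a small 4-entry prefetch table
```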
2. Query the prefetch table:
When a Cache miss occurs, first query the prefetch table. Address conversion component 1 takes the high bits of the miss address as the index value (IDX), i.e. it masks off the low n bits of the address of the last Cache miss, where n = log2(Cache line size) and Cache line size is the line size of the Cache equipped with the prefetching system.
As shown in Figure 4, adder 2 computes the two predicted addresses: the predicted address PDA_c of the fixed-stride access type and the predicted address PDA_l of the linearly-varying-stride access type, where PDA_c = PA + S and PDA_l = PA + S + DS. Comparator 3 compares PDA_c and PDA_l with IDX; if either equals IDX, the prefetch table is judged hit, go to step 4; otherwise the table is judged missed, go to step 3.
A prefetch-table hit is either a true hit or a false hit. When IDX equals PDA_c and PT is 0, or IDX equals PDA_l and PT is 1, it is a true hit (shown in Fig. 4(a)): the hit entry's predicted type matches the current address sequence. A false hit (shown in Fig. 4(b)) means the current predicted type does not match the address-sequence type, but switching the predicted type (i.e. inverting PT) when the table is later updated makes it match the current sequence type.
3. Allocate a new entry:
When the miss address misses the prefetch table, a new entry is allocated for it. The policy is to prefer an empty entry, i.e. one with V = 0; if there is no empty entry, choose the entry with the smallest C, or simply the first such entry. Once the entry is chosen, its PA is set to the current miss address, V is set to 1 to mark the entry valid, and S, DS, PT, C, and PR are all initialized to 0. Go to step 5.
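The victim-selection policy of step 3 can be sketched as follows (dictionary entries stand in for the hardware fields; the function name is illustrative):

```python
def allocate_entry(table: list, miss_addr: int) -> dict:
    """Prefer an empty (V == 0) entry; otherwise evict the lowest-confidence one."""
    empties = [e for e in table if e["V"] == 0]
    victim = empties[0] if empties else min(table, key=lambda e: e["C"])
    victim.update(PA=miss_addr, V=1, S=0, DS=0, PT=0, C=0, PR=0)
    return victim
```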
4. Return data:
If the miss address truly hits the prefetch table and PR is 1, return the prefetched data to the Cache; otherwise return no data. If no prefetched data is returned, i.e. the table was not truly hit or the hit entry holds no prefetched data, continue the memory access to fetch the required data. Go to step 5.
5. Update the prefetch table:
As shown in Figure 5, whether or not the query hit the prefetch table, the table's information must be updated. There are two cases:
5.1) Updates that do not switch the predicted type:
a. On a true hit, increase the hit entry's C by I and update it, in order: DS = IDX - PA - S, S = IDX - PA, PA = IDX; at the same time decrease the C of the other entries by D.
b. On a prefetch-table miss, allocate a new entry for the current miss address, decrease the C of all entries by D, and update each entry whose C is below the confidence threshold T, in order: DS = IDX - PA - S, S = IDX - PA, PA = IDX.
c. Set PT and PR to 0, and clear the V bit (mark invalid) of missed entries whose C = 0.
5.2) Updates that switch the predicted type:
d. On a false hit, invert the hit entry's PT, set its C to I and its PR to 0, and update it, in order: DS = IDX - PA - S, S = IDX - PA, PA = IDX.
6. Prefetch:
When the miss address truly hits the prefetch table, determine whether the hit entry satisfies the prefetch condition; if it does (the hit entry's C is greater than or equal to the threshold T), trigger a prefetch operation, and set PR to 1 after the prefetched data returns.
The prefetch address depends on the value of PT: when PT is 0 the prefetch address is (PA + S), and when PT is 1 it is (PA + S + DS), where PA, S, and DS are the values after the update.
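In code form (a sketch; the entry dictionary mirrors the table fields):

```python
def prefetch_address(entry: dict) -> int:
    """PA + S for a fixed-stride entry (PT == 0), PA + S + DS otherwise."""
    return entry["PA"] + entry["S"] + (entry["DS"] if entry["PT"] else 0)
```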
A prefetch operation must be performed during idle memory-access cycles, i.e. only after all normal memory accesses have completed; otherwise it waits in the prefetch queue.
Application example 1:
With the above prefetching method, the prefetch table is as shown in Table 1, the access pattern is fixed-stride, and the confidence scheme is <1, 2, 2, 3>. The process runs as follows:
1) The prefetch table allocates a new entry for the first miss address 0x80; PA is written with 0x80, V is set to 1, and S, DS, PT, C, and PR are all initialized to 0.
2) The next miss address is 0x100. Since PT = 0 and 0x100 ≠ PDA_c (0x80 + 0), the prefetch table is missed; the table is updated without increasing counter C: PA <= 0x100, S <= (0x100 - 0x80), DS <= (0x100 - 0x80 - 0).
3) The next miss address is 0x180. Since PT = 0 and 0x180 = PDA_c (0x100 + 0x80), the prefetch table is hit; counter C increases by 1 and the table is updated: PA <= 0x180, S <= (0x180 - 0x100), DS <= (0x180 - 0x100 - 0x80). Since C < T, no prefetch is triggered.
4) The next miss address is 0x200. Since PT = 0 and 0x200 = PDA_c (0x180 + 0x80), the prefetch table is hit; counter C increases by 1 and the table is updated: PA <= 0x200, S <= (0x200 - 0x180), DS <= (0x200 - 0x180 - 0x80). Now C = T, so a prefetch is triggered at address 0x280 (PA + S), and PR is set to 1 after the prefetched data returns.
5) The next miss address is 0x280. Since PT = 0 and 0x280 = PDA_c (0x200 + 0x80), the prefetch table is hit; counter C increases by 1 and the table is updated: PA <= 0x280, S <= (0x280 - 0x200), DS <= (0x280 - 0x200 - 0x80). Since PR = 1, the prefetched data is returned to the Cache; C > T triggers a prefetch at address 0x300 (PA + S), and PR is set to 1 after the prefetched data returns.
6) The next miss address is 0x300. Since PT = 0 and 0x300 = PDA_c (0x280 + 0x80), the prefetch table is hit; counter C is already 3, the saturation value, so it is not incremented. The table is updated: PA <= 0x300, S <= (0x300 - 0x280), DS <= (0x300 - 0x280 - 0x80). Since PR = 1, the prefetched data is returned to the Cache; C > T triggers a prefetch at address 0x380 (PA + S), and PR is set to 1 after the prefetched data returns.
Table 1
Application example 2:
With the above prefetching method, the prefetch table is as shown in Table 2, the access pattern has a linearly varying stride, and the confidence scheme is <1, 2, 2, 3>. The process runs as follows:
1) The prefetch table allocates a new entry for the first miss address 0x80; PA is written with 0x80, V is set to 1, and S, DS, PT, C, and PR are all initialized to 0.
2) The next miss address is 0x100. Since PT = 0 and 0x100 ≠ PDA_c (0x80 + 0), the prefetch table is missed; the table is updated without increasing counter C: PA <= 0x100, S <= (0x100 - 0x80), DS <= (0x100 - 0x80 - 0).
3) The next miss address is 0x200. Since PT = 0, 0x200 ≠ PDA_c (0x100 + 0x80) but 0x200 = PDA_l (0x100 + 0x80 + 0x80), the predicted type must be switched: PT becomes 1 and the access is treated as a prefetch-table hit. Counter C becomes 1 and the table is updated: PA <= 0x200, S <= (0x200 - 0x100), DS <= (0x200 - 0x100 - 0x80). Since C < T, no prefetch is triggered.
4) The next miss address is 0x380. Since PT = 1 and 0x380 = PDA_l (0x200 + 0x100 + 0x80), the prefetch table is hit; counter C increases by 1 and the table is updated: PA <= 0x380, S <= (0x380 - 0x200), DS <= (0x380 - 0x200 - 0x100). Now C = T, so a prefetch is triggered at address 0x580 (PA + S + DS), and PR is set to 1 after the prefetched data returns.
5) The next miss address is 0x580. Since PT = 1 and 0x580 = PDA_l (0x380 + 0x180 + 0x80), the prefetch table is hit; counter C increases by 1 and the table is updated: PA <= 0x580, S <= (0x580 - 0x380), DS <= (0x580 - 0x380 - 0x180). Since PR = 1, the prefetched data is returned to the Cache; C > T triggers a prefetch at address 0x800 (PA + S + DS), and PR is set to 1 after the prefetched data returns.
6) The next miss address is 0x800. Since PT = 1 and 0x800 = PDA_l (0x580 + 0x200 + 0x80), the prefetch table is hit; counter C is already 3, the saturation value, so it is not incremented. The table is updated: PA <= 0x800, S <= (0x800 - 0x580), DS <= (0x800 - 0x580 - 0x200). Since PR = 1, the prefetched data is returned to the Cache; C > T triggers a prefetch at address 0xB00 (PA + S + DS), and PR is set to 1 after the prefetched data returns.
Table 2
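Both miss traces above can be replayed with a small single-entry software model of the method (a behavioural sketch applying the step-5 update rules under the <1, 2, 2, 3> confidence scheme, not the hardware; all names are illustrative). It walks a miss-address trace and collects the triggered prefetch addresses:

```python
def run_trace(misses, I=1, D=2, T=2, ST=3):
    """Replay a miss trace through a one-entry prefetch table; return prefetch addresses."""
    entry, prefetches = None, []
    for idx in misses:
        if entry is None:                       # first miss: allocate the entry
            entry = {"PA": idx, "S": 0, "DS": 0, "PT": 0, "C": 0}
            continue
        pda_c = entry["PA"] + entry["S"]        # fixed-stride prediction
        pda_l = pda_c + entry["DS"]             # linearly-varying-stride prediction
        hit = idx in (pda_c, pda_l)
        true_hit = idx == (pda_c if entry["PT"] == 0 else pda_l)
        if true_hit:
            entry["C"] = min(entry["C"] + I, ST)
        elif hit:                               # false hit: switch predicted type
            entry["PT"] ^= 1
            entry["C"] = I
        else:
            entry["C"] = max(entry["C"] - D, 0)
        # field update order matters: DS needs the old PA and S
        entry["DS"] = idx - entry["PA"] - entry["S"]
        entry["S"] = idx - entry["PA"]
        entry["PA"] = idx
        if hit and entry["C"] >= T:
            pt_extra = entry["DS"] if entry["PT"] else 0
            prefetches.append(entry["PA"] + entry["S"] + pt_extra)
    return prefetches
```

Under this model, the fixed-stride trace 0x80, 0x100, 0x180, 0x200, 0x280, 0x300 triggers prefetches at 0x280, 0x300, 0x380, and the linearly-varying trace 0x80, 0x100, 0x200, 0x380, 0x580, 0x800 triggers prefetches at 0x580, 0x800, 0xB00.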
In summary, the prefetching method of the present invention uses the miss address to form the index for querying the prefetch table, adaptively adjusts the predicted-address computation according to the memory-access address-sequence type, and triggers prefetches at the right time when the confidence value meets the threshold requirement. The invention can dynamically adjust the predicted-address computation, is highly portable, achieves high prefetch accuracy, and is well suited to improving Cache performance.
The above is only a preferred embodiment of the present invention, and the scope of protection is not limited to the above embodiment; all technical schemes within the concept of the present invention fall within its scope of protection. It should be pointed out that, for those skilled in the art, improvements and modifications that do not depart from the principle of the invention should also be regarded as falling within the scope of protection of the present invention.
Claims (7)
1. A stride-adaptive Cache prefetching method, characterized by comprising the following steps:
(1) Set up the prefetch table: a prefetch table is set up, and miss addresses are stored in the table, recorded in the PA field;
(2) Query the prefetch table: when a Cache miss occurs, first query the prefetch table, using the high bits of the miss address as the index value; from the contents of the entries stored in the prefetch table, compute two predicted addresses by addition and compare each with the index value; if either predicted address equals the index value, the prefetch table is judged hit, go to step (4); otherwise the prefetch table is judged missed, go to step (3);
(3) Allocate a new entry: if the prefetch table is missed, allocate a new entry in the table for this miss address; go to step (5);
(4) Return data: if the prefetch table is hit and the table holds prefetched data, return the prefetched data to the Cache; otherwise return no data; if no prefetched data is returned, continue the memory access to fetch the required data; go to step (5);
(5) Update the prefetch table: update the information in the prefetch table;
(6) Prefetch: if the prefetch table is hit, determine whether the hit entry satisfies the prefetch condition; if it does, trigger a prefetch operation;
wherein the index value is the memory-access address of the last Cache miss with its low n bits masked off, where n = log2(Cache line size).
2. the Cache forecasting method of adaptive step according to claim 1, it is characterized in that, also record in the described table of looking ahead: S territory, the step-length of memory access address sequence, the step-length of described memory access address sequence are the difference of current memory access fail address and last memory access fail address;
With the DS territory, the linear change value of memory access address sequence step-length;
The two predicted addresses in step (2) are specifically: the predicted address PDA_c for the fixed-stride access type and the predicted address PDA_l for the linearly-varying-stride access type, where PDA_c = PA + S and PDA_l = PA + S + DS.
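The two prediction formulas can be written as a one-line sketch; the field names PA, S, and DS follow the claims, while the function name is an assumption of ours.

```python
def predicted_addresses(PA: int, S: int, DS: int) -> tuple[int, int]:
    PDA_c = PA + S        # fixed-stride prediction
    PDA_l = PA + S + DS   # linearly-varying-stride prediction
    return PDA_c, PDA_l
```

On a constant-stride stream DS is 0 and both predictions coincide, so either prediction type matches; the two types only diverge once the stride itself starts changing linearly.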
3. The step-size-adaptive Cache prefetching method according to claim 2, wherein the prefetch table further stores the predicted type of the memory access address sequence, recorded in a PT field, where PT = 0 indicates that the prediction type of the current entry is the fixed-stride access type, and PT = 1 indicates that the prediction type of the current entry is the linearly-varying-stride access type;
Hitting the prefetch table includes a true hit and a false hit; a true hit is a hit whose prediction type is consistent with the current memory access address sequence, and a false hit is a hit whose current prediction type is inconsistent with the type of the memory access address sequence.
4. The step-size-adaptive Cache prefetching method according to claim 3, wherein the prefetch table further records: a C field, a saturating counter that records the confidence value of the current entry and is used to judge when to perform prefetching;
a V field, which indicates whether the current entry is valid; V = 1 indicates that the current entry is valid, otherwise invalid;
and a PR field, which indicates whether the D field of the current entry holds prefetched data; PR = 1 indicates that the prefetched data in the D field of the corresponding entry is valid, otherwise invalid.
The prefetch table is further provided with a confidence system <I, D, T, ST>; in the confidence system, when the prefetch table is hit, C = C + I; when the prefetch table is missed, C = C - D; T is the confidence threshold for triggering prefetching, and ST is the saturation value of the saturating counter C.
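A minimal sketch of the <I, D, T, ST> confidence system described above. The class and method names are ours; the claims specify saturation at ST, while the floor at zero on decrement is our assumption for a saturating counter.

```python
class ConfidenceCounter:
    """Saturating confidence counter with parameters <I, D, T, ST>."""

    def __init__(self, I: int, D: int, T: int, ST: int):
        self.I, self.D, self.T, self.ST = I, D, T, ST
        self.C = 0

    def hit(self) -> None:
        self.C = min(self.C + self.I, self.ST)   # C = C + I, saturating at ST

    def miss(self) -> None:
        self.C = max(self.C - self.D, 0)         # C = C - D (zero floor assumed)

    def should_prefetch(self) -> bool:
        return self.C >= self.T                  # trigger once threshold T is reached
```

Increment I larger than decrement D lets a stream regain prefetch eligibility quickly after a few stray misses; tuning <I, D, T, ST> trades prefetch aggressiveness against pollution.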
5. The step-size-adaptive Cache prefetching method according to claim 4, wherein updating the prefetch table in step (5) specifically comprises the following two cases:
5.1) updates that do not require changing the prediction type:
a. on a true hit in the prefetch table, increase the C of the hit entry by I, update in order: DS = index value - PA - S, S = index value - PA, and PA = index value, and simultaneously decrease the C of the other entries by D;
b. on a prefetch table miss, allocate a new entry for the current miss address, decrease the C of all entries by D, and for entries whose C is less than the confidence threshold T, update in order: DS = index value - PA - S, S = index value - PA, and PA = index value;
c. set PT and PR to 0, and clear the V of missed entries whose C = 0;
5.2) updates that require changing the prediction type:
d. on a false hit in the prefetch table, invert the PT of the hit entry, set C to I, set PR to 0, and update in order: DS = index value - PA - S, S = index value - PA, and PA = index value.
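The per-entry update sequences of steps 5.1 and 5.2 can be sketched as follows. The field names (PA, S, DS, PT, PR, C, V) and the DS/S/PA update order follow the claims; the PrefetchEntry container and the function names are assumptions of ours.

```python
from dataclasses import dataclass

@dataclass
class PrefetchEntry:
    PA: int = 0   # last miss address (as index value)
    S: int = 0    # stride of the access sequence
    DS: int = 0   # linear change of the stride
    PT: int = 0   # prediction type: 0 fixed stride, 1 linearly varying
    PR: int = 0   # prefetched-data-valid flag for the D field
    C: int = 0    # confidence counter
    V: int = 1    # entry valid flag

def retrain(e: PrefetchEntry, index: int) -> None:
    """Apply the common update sequence, in the claimed order:
    DS uses the old PA and S, then S uses the old PA, then PA is replaced."""
    e.DS = index - e.PA - e.S
    e.S = index - e.PA
    e.PA = index

def on_true_hit(e: PrefetchEntry, index: int, I: int) -> None:
    e.C += I          # step 5.1a: raise confidence of the hit entry by I
    retrain(e, index)

def on_false_hit(e: PrefetchEntry, index: int, I: int) -> None:
    e.PT ^= 1         # step 5.2d: invert the prediction type
    e.C = I           # reset confidence to I
    e.PR = 0          # discard any prefetched data
    retrain(e, index)
```

Note that the claimed ordering matters: computing DS before S means DS = new stride - old stride, which is exactly the linear change the PDA_l prediction extrapolates.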
6. A step-size-adaptive Cache prefetching system for implementing the method of any one of claims 1 to 5, wherein the prefetching system comprises:
a prefetch table, used to store the information required by Cache prefetch operations, the information including Cache miss addresses;
an address translation component, used to convert the Cache miss address into the index value used to query the prefetch table;
an adder, which computes the two kinds of predicted addresses according to the information in the prefetch table;
a comparator, used to compare the two kinds of predicted addresses with the index value and send the comparison result to the update control logic unit;
and an update control logic unit, used to judge whether to perform a prefetch operation according to the result of the comparison by the comparator and to update the prefetch table.
7. The step-size-adaptive Cache prefetching system according to claim 6, wherein the prefetching system further comprises an AND unit used to judge whether the prefetched data is returned, the AND unit being connected to the comparator and to the prefetch table respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011102133606A CN102214146B (en) | 2011-07-28 | 2011-07-28 | Step size adaptive Cache pre-fetching method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102214146A CN102214146A (en) | 2011-10-12 |
CN102214146B true CN102214146B (en) | 2013-04-10 |
Family
ID=44745464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011102133606A Active CN102214146B (en) | 2011-07-28 | 2011-07-28 | Step size adaptive Cache pre-fetching method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102214146B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521149B (en) * | 2011-11-28 | 2014-08-27 | 曙光信息产业(北京)有限公司 | Optimizing polling system and optimizing polling method for collecting data from plurality of buffer zones |
CN102521158B (en) * | 2011-12-13 | 2014-09-24 | 北京北大众志微系统科技有限责任公司 | Method and device for realizing data pre-fetching |
CN102662862B (en) * | 2012-03-22 | 2015-01-21 | 北京北大众志微系统科技有限责任公司 | Method and device for implementing hybrid prefetch |
CN104461758B (en) * | 2014-11-10 | 2017-08-25 | 中国航天科技集团公司第九研究院第七七一研究所 | A kind of quick abnormality eliminating method and its processing structure for emptying streamline of tolerance cache missings |
US9817764B2 (en) | 2014-12-14 | 2017-11-14 | Via Alliance Semiconductor Co., Ltd | Multiple data prefetchers that defer to one another based on prefetch effectiveness by memory access type |
EP3049915B1 (en) | 2014-12-14 | 2020-02-12 | VIA Alliance Semiconductor Co., Ltd. | Prefetching with level of aggressiveness based on effectiveness by memory access type |
CN110765034B (en) | 2018-07-27 | 2022-06-14 | 华为技术有限公司 | Data prefetching method and terminal equipment |
CN111639042B (en) * | 2020-06-04 | 2023-06-02 | 中科芯集成电路有限公司 | Processing method and device for prefetching buffer data consistency |
CN113656332B (en) * | 2021-08-20 | 2023-05-26 | 中国科学院上海高等研究院 | CPU cache data prefetching method based on merging address difference value sequence |
CN116166575B (en) * | 2023-02-03 | 2024-01-23 | 摩尔线程智能科技(北京)有限责任公司 | Method, device, equipment, medium and program product for configuring access segment length |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6216219B1 (en) * | 1996-12-31 | 2001-04-10 | Texas Instruments Incorporated | Microprocessor circuits, systems, and methods implementing a load target buffer with entries relating to prefetch desirability |
JP2006164218A (en) * | 2004-11-11 | 2006-06-22 | Nec Corp | Storage system and its cache control method |
CN100399299C (en) * | 2005-10-28 | 2008-07-02 | 中国科学院计算技术研究所 | Memory data processing method of cache failure processor |
- 2011
  - 2011-07-28 CN CN2011102133606A patent/CN102214146B/en active Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102214146B (en) | Step size adaptive Cache pre-fetching method and system | |
US10089240B2 (en) | Cache accessed using virtual addresses | |
US9098418B2 (en) | Coordinated prefetching based on training in hierarchically cached processors | |
US10540287B2 (en) | Spatial memory streaming confidence mechanism | |
US10409725B2 (en) | Management of shared pipeline resource usage based on level information | |
US9710387B2 (en) | Guest instruction to native instruction range based mapping using a conversion look aside buffer of a processor | |
US20170115991A1 (en) | Unified shadow register file and pipeline architecture supporting speculative architectural states | |
US9519586B2 (en) | Methods and apparatus to reduce cache pollution caused by data prefetching | |
US9886385B1 (en) | Content-directed prefetch circuit with quality filtering | |
US9582282B2 (en) | Prefetching using a prefetch lookup table identifying previously accessed cache lines | |
US10642618B1 (en) | Callgraph signature prefetch | |
US8504777B2 (en) | Data processor for processing decorated instructions with cache bypass | |
CN117389630B (en) | Data caching method and device, electronic equipment and readable storage medium | |
CN113986774A (en) | Cache replacement system and method based on instruction stream and memory access mode learning | |
CN102163144A (en) | Hardware data pre-fetching method of embedded processor | |
WO2017222801A1 (en) | Pre-fetch mechanism for compressed memory lines in a processor-based system | |
CN107562806B (en) | Self-adaptive sensing acceleration method and system of hybrid memory file system | |
CN115934170A (en) | Prefetching method and device, prefetching training method and device, and storage medium | |
US6535961B2 (en) | Spatial footprint prediction | |
US9009410B2 (en) | System and method for locking data in a cache memory | |
JP2024511768A (en) | Method and apparatus for DRAM cache tag prefetcher | |
EP3283966B1 (en) | Virtualization-aware prefetching | |
Mittal et al. | Cache performance improvement using software-based approach | |
CN117971723A (en) | Data prefetching method and device | |
CN118585486A (en) | Cache prefetching method, device, terminal and medium of graph neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |