CN1517882A - Remapping method of internally stored address - Google Patents


Publication number
CN1517882A
CN1517882A (application CNA031027253A / CN03102725A)
Authority
CN
China
Legal status
Granted
Application number
CNA031027253A
Other languages
Chinese (zh)
Other versions
CN1269043C (en)
Inventor
李明宪
平德林
刘恕民
陈灿辉
Current Assignee
Silicon Integrated Systems Corp
Original Assignee
Silicon Integrated Systems Corp
Application filed by Silicon Integrated Systems Corp filed Critical Silicon Integrated Systems Corp
Priority to CNB031027253A priority Critical patent/CN1269043C/en
Publication of CN1517882A publication Critical patent/CN1517882A/en
Application granted granted Critical
Publication of CN1269043C publication Critical patent/CN1269043C/en
Status: Expired - Fee Related

Abstract

A method for remapping memory addresses. A cache address and a linear operator are provided; a linear operation on a first input and a second input yields a first output; a substitution step then replaces several bits of the first output with several key transition bits to yield a second output; and the second output is assigned as the target-location address of the target memory address, that is, the location among several memory addresses at which the data from the cache is stored.

Description

Method for remapping memory addresses
Technical field
The present invention relates to a method for remapping (Remapping) memory addresses, and in particular to a remapping method for memory addresses in various memory systems, which improves the efficiency of memory access.
Background technology
In prior-art computer systems, the north bridge chip is connected to the processor over the host bus and to the memory modules over the memory bus; the north bridge is also connected to various other components over further buses, such as an AGP display module and the south bridge chip. Other devices (for example PCI devices, i.e. peripheral expansion-interface devices) attach to the south bridge. When the processor or another device issues a read cycle or a write cycle to the memory modules, that cycle must traverse the memory bus to reach them.
In the memory modules of a computer system, a memory bank (hereinafter "bank") is a logical unit whose size is determined by the processor; for example, a 32-bit processor requires each bank to supply 32 bits of information at a time. A page of a bank is the amount of data accessed from one row address of the memory module at a time, and the page size depends on the number of column addresses. Each bank has a row buffer that holds the data of one page, and in the prior art only one page per bank can be open at a time.
A memory access has three possible outcomes, described below. A memory access generally consists of an activation step, a command step (a read or write command), and a precharge step. The activation step opens the required bank and page; the read/write command step enables the memory controller to read data from, or write data to, memory; and the precharge step closes the bank and page that the activation step opened.
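The three-step sequence above can be sketched as a toy model. This is an illustrative assumption, not part of the patent: a dictionary tracks the single open page per bank, and each access reports which of the activation, command and precharge steps it had to perform.

```python
def access(open_pages, bank, page):
    """Perform one access; return the list of steps it required.

    open_pages maps bank -> currently open page (one open page per
    bank, as the background section states).
    """
    steps = []
    if open_pages.get(bank) != page:
        if bank in open_pages:
            steps.append("precharge")   # close the page left open in this bank
        steps.append("activate")        # open the required bank and page
        open_pages[bank] = page
    steps.append("read/write")          # the command step itself
    return steps
```

A page hit needs only the command step, while reopening a different page of the same bank forces a precharge first, which is the costly case discussed below.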
The three possibilities for a memory access are:
1. Page hit (Page-hit):
The page of the current access equals the page of the previous access across all banks. The precharge step and the row activation step need to be performed only on the first access; subsequent accesses need only issue the command. Accesses under a page hit can therefore be fully pipelined, and because the bandwidth and the memory locality (Memory Localization) are fully exploited, the access latency is very short.
2. Bank miss and page miss (Bank-miss and Page-miss):
The page of the current access differs from every open page, and the bank containing that page also differs from the bank whose page was opened by the previous activation step. Because accesses under a bank miss with a page miss can be handled in parallel, the corresponding operations can still be fully pipelined.
3. Bank hit and page miss (Bank-hit and Page-miss):
The page of the current access differs from every open page, but the bank containing that page equals the bank accessed last time. An access under a bank hit with a page miss causes a row-buffer conflict, and each such access must additionally perform an activation step and a precharge step. These accesses therefore cannot be fully pipelined; compared with a page hit, or with a bank miss with a page miss, their latency is severe, and only part of the memory bandwidth is used.
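The three cases can be summarized in a small classifier. This is a sketch under the one-open-page-per-bank model of the background section, simplified to compare against the immediately previous access only; the function and labels are illustrative, not the patent's terminology.

```python
def classify(prev_bank, prev_page, bank, page):
    """Classify the current access relative to the previous one."""
    if bank == prev_bank and page == prev_page:
        return "page-hit"               # fully pipelined, shortest latency
    if bank == prev_bank:
        return "bank-hit/page-miss"     # row-buffer conflict, worst case
    return "bank-miss/page-miss"       # different bank: can run in parallel
```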
In the prior art, when a bank miss occurs together with a page miss, as shown in Fig. 1, the current access targets page A 10 of bank A and the subsequent access targets page B 20 of bank B. Because bank and page both miss between the two accesses, as shown in Fig. 2, they can be fully pipelined: the activation step 60 of the subsequent access 40 can begin once the activation step 50 of the current access 30 completes, the write command 70 of the current access 30 and the read command 80 of the subsequent access 40 can proceed at almost the same time, and each access then performs its own precharge (close) step 90.
Likewise, accesses under a page hit can be fully pipelined in the prior art, because the current and subsequent accesses touch the same page.
However, since only one page per bank can be open at a time, when the current memory request is a read or write cycle to page A of some bank and the subsequent request is a read or write cycle to page B of the same bank, page A must first be precharged (closed) before the subsequent request can begin.
When a bank hit occurs together with a page miss in the prior art, as shown in Fig. 3, the current access targets page C 105 of bank E and the subsequent access targets page D 110 of bank E. As shown in Fig. 4, because the current access 115 and the subsequent access 120 target two different pages of the same bank, the steps of the current access 115 (activation 125, read command 130, precharge 135) must first complete in order before the steps of the subsequent access 120 (activation 140, read command 145, precharge 150) can proceed in order. Row-buffer conflicts therefore increase, bandwidth utilization drops, and memory accesses incur long delays.
Prior-art computer architectures include at least one cache, which stores frequently used system data to improve the hit rate of the read and write commands issued by the processor or other devices, since access between the processor (or other devices) and the cache is faster than access between them and the memory modules.
However, cache capacity is limited, so when the cache is full and further data must be brought in, at least one block of cached data must be written back to the memory modules (a DDR (Double Data Rate) module, a Rambus module, etc., depending on the memory system in use). When data is written back from the cache to the memory modules, a target memory address (Target Memory Address) must first be derived. This target memory address is one of the many memory addresses of the memory modules and denotes the target location at which the data from the cache is stored. A conversion step is therefore applied to the cache address corresponding to the data to obtain the target memory address. The target memory address and the cache address have the same address content, that is, the same set of bits, but the two interpret and partition that content differently. In short, whatever memory system is used (DDR, Rambus, etc.), when data moves from the cache to the memory modules, the address content of the cache address equals the address content of the target memory address.
As shown in Fig. 5, a prior-art cache address 170 is divided into several fields: a block address (Block Address) 175 and a block offset (Block Offset) 180, where the block address 175 in turn comprises a tag (Tag) 185 and a set index (Set Index) 190.
As shown in Fig. 6, a prior-art DDR memory address 200 is divided into a page index (Page Index) 205 and a page offset (Page Offset) 210, where the page index 205 in turn comprises a high page index (High Page Index) 215, a DIMM address 220, a side address (Side Address) 225 and a bank index (Bank Index) 230. The definitions and operation of these fields are not detailed here because they are well known to those skilled in the art.
As shown in Fig. 7, a prior-art Rambus memory address 250 is divided into a page index 255 and a page offset 260, where the page index 255 comprises a row address (Row Address) 265, a bank address (Bank Address) 270 and a device address (Device Address) 275, and the page offset 260 comprises a column (Column) 280, a channel (Channel) 285 and an offset (Offset) 290. The definitions and operation of these fields are likewise not detailed here because they are well known to those skilled in the art.
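Field layouts of this kind reduce to simple bit slicing. The widths and ordering below are illustrative assumptions only; the patent does not fix them and real modules differ.

```python
def split_ddr_address(addr, page_offset_bits=12, bank_bits=2,
                      side_bits=1, dimm_bits=1):
    """Split a flat address into DDR-style fields (hypothetical layout)."""
    page_offset = addr & ((1 << page_offset_bits) - 1)
    addr >>= page_offset_bits
    bank = addr & ((1 << bank_bits) - 1); addr >>= bank_bits
    side = addr & ((1 << side_bits) - 1); addr >>= side_bits
    dimm = addr & ((1 << dimm_bits) - 1); addr >>= dimm_bits
    high_page_index = addr                 # whatever bits remain
    return {"page_offset": page_offset, "bank": bank, "side": side,
            "dimm": dimm, "high_page_index": high_page_index}
```

Under these assumed widths, the DIMM, side and bank fields together occupy 4 bits immediately above the page offset; those are the "location address" bits that the remapping described later manipulates.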
Depending on the design, the cache contains at least one set (Set), and each set holds several different tags (Tag). Several cache addresses may therefore share the same set while differing in tag, so that the data stored at each cache location corresponds to exactly one cache address.
However, when data is written back from the cache to the memory modules, the prior-art page interleaving method (Page Interleaving Method) performs poorly. For example, as shown in Fig. 8, when prior-art page interleaving is used in a DDR memory system, data is written back from the cache to a target location in the DDR memory modules according to the target DIMM address, target side address and target bank index of the prior-art target DDR memory address, where several corresponding bits of the set index 190 of the prior-art cache address 170 are redefined as the bits of the target DIMM address, target side address and target bank index, and those fields denote the target location in the DDR memory modules.
If data A is now written back from the cache to the DDR memory modules and data B is written back some time later, and the set index of A's cache address equals the set index of B's cache address, prior-art page interleaving assigns A and B to the same bank of the DDR memory modules but to different pages. Per the bank-hit-with-page-miss discussion above, accessing A and B then causes row-buffer conflicts and long delays, so row-buffer conflicts rise while bandwidth utilization falls.
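The conflict can be reproduced in a few lines. A hypothetical sketch: under prior-art page interleaving the target bank bits are copied straight from the set index, so two write-backs with equal set indexes always collide in one bank, whereas an XOR with tag bits (the approach this patent develops later) spreads them apart. Bit widths are illustrative.

```python
BANK_BITS = 3  # assume 3 location-address bits become the bank selector

def naive_bank(tag, set_index):
    """Prior-art page interleaving: bank bits come only from the set index."""
    return set_index & ((1 << BANK_BITS) - 1)   # the tag is ignored entirely

def xor_bank(tag, set_index):
    """Remapped variant: XOR the same set bits with low-order tag bits."""
    return (set_index ^ tag) & ((1 << BANK_BITS) - 1)
```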
In another example, shown in Fig. 9, when prior-art page interleaving is used in a Rambus memory system, data is written back from the cache to a target location in the Rambus memory modules according to the target bank address and target device address of the prior-art target Rambus memory address, where several corresponding bits of the set index 190 of the prior-art cache address 170 are redefined as the bits of the target bank address and target device address, and those fields denote the target location in the Rambus memory modules.
Similarly, when data A is written back to the Rambus memory modules now and data B is written back later, and the set index of A's cache address equals the set index of B's cache address, prior-art page interleaving assigns A and B to the same bank of the Rambus memory modules but to different pages; per the bank-hit-with-page-miss discussion above, accessing A and B then causes row-buffer conflicts and long delays.
In summary, the prior-art memory-address mapping method is plainly inconvenient and defective in practice, and improvement is necessary.
Summary of the invention
As described above, a memory access has three possible outcomes: page hit, bank miss with page miss, and bank hit with page miss. As computer architecture evolves, it becomes ever more important to exploit locality and to make full, simultaneous use of the row buffers of multiple banks, in order to improve system performance and to solve the bandwidth loss caused by bank hits with page misses. The present invention therefore provides a method for remapping memory addresses that raises the page hit rate and the bandwidth utilization, thereby increasing efficiency and reducing memory access latency.
The principal object of the present invention is to provide such a remapping method. To reduce the number of long delays during memory access (for example during and around accesses that hit a bank but miss a page), the method employs a linear operator (Linear Operator), such as an XOR (exclusive-or) operator, that performs a linear operation on two inputs, each selected from the cache address, to obtain a linear output. The linear output is assigned as the target-location address within the target memory address, which denotes the location to which the data is written back from the cache to memory. The method further comprises a substitution step that selects several key transition bits from the tag of the cache address to replace several relatively stable bits of the linear output, thereby improving the page hit rate and the bandwidth utilization during memory access.
Accordingly, the invention provides a method for remapping memory addresses, applicable to various memory systems such as DDR and Rambus (also called RDRAM) systems, that improves memory access efficiency. The method comprises at least: providing a cache address having a tag, an associative tag, a set index and a block offset; providing a linear operator; performing a linear operation on a first input and a second input to obtain a first output, where the first input is several bits selected from the set index of the cache address according to the positions and number of the bits of the location address within the memory address (e.g. a DDR or Rambus memory address), and the second input is several bits selected from the tag and associative tag of the cache address according to the number of bits of the location address; performing a substitution step comprising at least selecting several key transition bits (Key Transition Bit) from the tag to replace several first output bits, yielding a second output, where the replaced first output bits are more stable than the other first output bits and the key transition bits change more frequently than the higher-order bits of the tag; and assigning the second output as the target-location address of the target memory address among the several memory addresses, the data being stored from the cache to the target location of that target memory address.
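The claimed steps can be sketched end to end. A minimal sketch assuming bit lists and caller-chosen selections; which bits are selected, and which output positions count as "stable", are design parameters the patent leaves to the implementer.

```python
def remap(set_bits, tag_bits, key_bits, replace_positions):
    """Sketch of the claimed remapping over lists of 0/1 values.

    set_bits          - bits chosen from the set index (first input)
    tag_bits          - bits chosen from the tag/associative tag (second input)
    key_bits          - key transition bits chosen from the tag
    replace_positions - positions of the "stable" first-output bits
                        to overwrite with the key transition bits
    """
    first_output = [a ^ b for a, b in zip(set_bits, tag_bits)]   # linear step
    second_output = list(first_output)
    for pos, key in zip(replace_positions, key_bits):            # substitution step
        second_output[pos] = key
    return second_output
```

With no substitution, the result reduces to the plain XOR of the embodiment described below for Fig. 10 ("1,1,0" XOR "1,1,1" = "0,0,1").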
When the invention is applied to a DDR memory system, the memory addresses are DDR memory addresses, the target memory address is a target DDR memory address, the location address of each DDR memory address comprises at least a DIMM address, a side address and a bank index, and the target-location address of the target DDR memory address comprises at least a target DIMM address, a target side address and a target bank index. When the invention is applied to a Rambus memory system, the memory addresses are Rambus memory addresses, the target memory address is a target Rambus memory address, the location address of each Rambus memory address comprises at least a bank address and a device address, and the target-location address comprises at least a target bank address and a target device address. The linear operator may be an XOR operator, in which case the linear operation is an XOR operation.
In addition, when the invention is applied to a Rambus memory system, the remapping method further includes a substitution step that selects a substitution bit from the tag to replace a stable bit located in the target bank address of the target-location address, yielding the final output; the substitution bit changes more frequently than the higher-order bits of the tag, and the data is stored from the cache to the target location according to the resulting output. The data is thereby distributed over the pages of different memory banks; with the invention, not only is the memory page hit rate improved, but the locality of memory addresses is also preserved for longer than with prior-art methods.
Brief Description of the Drawings
The technical solution of the present invention and its further benefits will become apparent from the following detailed description of preferred embodiments, taken in conjunction with the accompanying drawings.
In the drawings:
Fig. 1 is a schematic diagram of a prior-art memory access under a bank miss with a page miss;
Fig. 2 is a schematic diagram of the timing relationship between the current and subsequent memory accesses under the bank miss with page miss of Fig. 1;
Fig. 3 is a schematic diagram of a prior-art memory access under a bank hit with a page miss;
Fig. 4 is a schematic diagram of the timing relationship between the current and subsequent memory accesses under the bank hit with page miss of Fig. 3;
Fig. 5 is a schematic diagram of the definition of a cache address in a prior-art cache;
Fig. 6 is a schematic diagram of the definition of a DDR memory address in a prior-art DDR memory module;
Fig. 7 is a schematic diagram of the definition of a prior-art Rambus memory address;
Fig. 8 is a schematic diagram of the relationship between the prior-art cache address of Fig. 5 and the prior-art DDR memory address of Fig. 6;
Fig. 9 is a schematic diagram of the relationship between the prior-art cache address of Fig. 5 and the prior-art Rambus memory address of Fig. 7;
Fig. 10 is a schematic diagram of an embodiment of the memory-address remapping method of the present invention;
Fig. 11 is a table of the inputs and outputs of the embodiment of Fig. 10 when applied to a DDR memory system;
Fig. 12 is a schematic diagram of another preferred embodiment of the memory-address remapping method of the present invention applied to a DDR memory system;
Fig. 13 is a table of the inputs and outputs of the embodiment of Fig. 12 when applied to a DDR memory system; and
Fig. 14 is a schematic diagram of a further preferred embodiment of the memory-address remapping method of the present invention applied to a Rambus memory system.
Embodiment
The present invention is described in detail below.
To improve memory access efficiency, the invention provides a method for remapping memory addresses that applies to memory accesses in the following situations:
Situation 1: data is transferred over the memory bus that couples the north bridge chip to the memory modules; and
Situation 2: data is transferred in a computer system having at least one CPU, where the at least one CPU is connected to at least one host memory bus and at least one main memory module is coupled to that bus.
Furthermore, in the remapping method of the present invention, the prior-art tag of the cache address is divided into a tag 485 and an associative tag (Associative Tag) 495, as shown in Fig. 10.
The remapping method of the present invention can be used and implemented whenever the combined tag and associative tag bits of any cache address differ from those of every other cache address, and it applies to memory systems with different operations and structures, for example SDRAM, DDR and Rambus memory systems.
Because memory addresses are defined and represented differently in different memory systems, for convenience of description the several bits that denote where data is stored in the memory modules are collectively called the "location address". For example, when the invention is applied to a DDR memory system, the DIMM address, side address and bank index denote where data is stored in DDR memory, so the DIMM address, side address and bank index of the DDR memory address are together referred to below as the location address. Likewise, when the invention is applied to a Rambus memory system, the bank address and device address denote where data is stored in Rambus memory, so the bank address and device address of the Rambus memory address are together referred to below as the location address.
When data is written back from the cache to a target location in the memory modules using the remapping method of the present invention, a linear operator (e.g. an XOR operator, an adder or a subtractor) performs a linear operation on two inputs to obtain a linear output. One of the two inputs is several bits selected from the set index of the cache address corresponding to the data, according to the number and positions of the bits of the location address of the memory address; the other input is several bits selected from the tag and associative tag of the cache address, according to the number of bits of the location address. The resulting linear output is assigned as several bits of the target-location address of the target memory address, which denotes the target location in the memory modules.
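One property of the XOR choice is worth noting as a sketch: for any fixed tag value, XOR with that value permutes the possible set-index values one to one, so distinct set indexes with the same tag never collide in a bank, while different tags move the same set index to different banks. A 3-bit check, widths illustrative:

```python
def spread(t, n_bits=3):
    """Bank chosen for every possible set-index value under fixed tag bits t."""
    return [s ^ t for s in range(1 << n_bits)]
```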
In one embodiment of the invention, shown in Fig. 10, a cache address 470 and a DDR memory address 500 (one of several DDR memory addresses) are provided. The data at cache address 470 is to be written back to a target DDR memory address 570 among the several DDR memory addresses. The block offset 480 records other data related to cache address 470, and the page offset 510 and the high page index 515 of the page index 505 record other data related to DDR memory address 500.
In Fig. 10, an XOR operator 400 performs the XOR operation that produces the linear output. First, given that the DIMM address 520, side address 525 and bank index 530 of DDR memory address 500 together have 3 bits, several bits are selected from the set index 490 of the block address 475 of cache address 470 according to the positions and number of those bits, for example the 3 bits "1, 1, 0", as one input of XOR operator 400.
Next, again because the DIMM address 520, side address 525 and bank index 530 of DDR memory address 500 together have 3 bits, several bits are selected from the tag 485 and associative tag 495 of cache address 470, starting from the low-order end of associative tag 495, for example the 3 bits "1, 1, 1", as the other input of XOR operator 400.
XORing "1, 1, 0" with "1, 1, 1" yields, by the rules of logical operation, the linear output of XOR operator 400. To write the data back from the cache to the DDR memory modules, this linear output "0, 0, 1" is assigned to the target DIMM address 550, target side address 555 and target bank index 560 in the target page index 535 of the corresponding target DDR memory address 570, while the page offset 540 and high page index 545 of target DDR memory address 570 hold the other data of that address.
In other words, when the remapping method of the present invention is applied to a DDR memory system, several bits are selected from the set index 490 of cache address 470, according to the number and positions of the bits of the DIMM address 520, side address 525 and bank index 530 of DDR memory address 500, as one XOR input; and several bits are selected from the tag 485 and associative tag 495 of cache address 470, according to the number of those bits, as the other XOR input. Completing the XOR operation yields a linear output, which is assigned as the bits of the target-location address comprising the target DIMM address 550, target side address 555 and target bank index 560.
Fig. 11 tabulates the inputs and outputs when the embodiment of Fig. 10 is applied to a DDR memory system. The two inputs of XOR operator 400 form a first input queue of 11 first input items and a second input queue of 11 second input items, selected as described above: each of the 11 first input items is selected from tag 485 and associative tag 495, each of the 11 second input items is selected from set index 490, and since the DIMM address 520, side address 525 and bank index 530 of DDR memory address 500 have 3 bits, every first input item and every second input item consists of 3 bits.
In each XOR operation, one first input item and one second input item are fed to XOR operator 400, producing one first output item; after the 11 first input items and 11 second input items have been processed, the first output items, obtained in order, form the first output queue. For example, feeding first input item "1, 1, 1" and second input item "1, 1, 0" into XOR operator 400 yields first output item "0, 0, 1". Fig. 11 shows that most first output items differ from the others. Thus, after the XOR operation and the assignment of its result to the corresponding target DIMM address, target side address and target bank index, the data of the various cache addresses is written back to the pages of different banks of the DDR memory modules according to DDR memory addresses with different bank indexes. Moreover, since addresses within one page remain within one page after the linear operation, the locality of memory references is preserved.
However, after the XOR operation the first output queue still contains several first output items with identical bits (such as "0,0,1"). To obtain more first output items with distinct bits, the present invention provides a further preferred embodiment of the memory address remapping method.
When another preferred embodiment of the present invention is applied to a DDR memory system, as shown in Figure 12, the cache address 470 and the DDR memory address 500 are first provided. To obtain more output items with distinct bits in the first output queue after the linear operation of Figure 10, the preferred embodiment of Figure 12, applied to the DDR memory system, further performs a replacement step on each first output item produced by the linear operation.
In certain common memory access situations (such as benchmark workloads), the bits in the tag 485, the composite tag 495, and the set index 490 often exhibit the same transition behavior in the current access and in subsequent accesses. Therefore, after statistics and estimation, several bits with a frequently changing character are selected from the tag 485 and are called the key transition bits 405. Moreover, because of the characteristic of cache replacement, the lower-order bits of the tag 485 change more often, so it is preferable to select the key transition bits 405 starting from the lower-order bits of the tag 485. The key transition bits 405 are then used to replace the more stable bits of each output, so that after the replacement step more output items with distinct bits are obtained in the output queue. Consequently, after the linear operation, more target DDR memory addresses 570 with different target DIMM addresses 550, target side addresses 555, and target bank indexes 560 are obtained, achieving the purpose of distributing the data in the cache into pages of different banks.
For example, in Figure 12, the linear operation of Figure 10 is first performed to obtain the table of Figure 11, in which several identical first output items "0,0,1" exist. According to statistics, the lowest-order bit of each first output item is often "1", meaning that the lowest-order bit of each first output item is more stable than the other bits of that item. To obtain mutually different first output items in the first output queue, a key transition bit 405 can be selected, according to the foregoing description, to replace this stable bit "1".
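Assuming, as in the example above, that the lowest-order output bit is the stable one, the replacement step can be sketched like this; the choice and value of the key transition bit are hypothetical:

```python
# Illustrative sketch of the replacement step; which bit is "stable"
# and which tag bit serves as key transition bit (405) are assumptions.

def replace_stable_bit(first_output, key_transition_bit, stable_pos=-1):
    """Replace the stable bit of a first output item (here assumed to
    be the lowest-order position) with a key transition bit selected
    from the low-order, frequently changing bits of the tag (485)."""
    out = list(first_output)
    out[stable_pos] = key_transition_bit
    return tuple(out)

# Two colliding first output items "0,0,1" become distinct second
# output items when their key transition bits differ:
print(replace_stable_bit((0, 0, 1), 0))  # (0, 0, 0)
print(replace_stable_bit((0, 0, 1), 1))  # (0, 0, 1)
```

The stable bit carries little distinguishing information, so overwriting it with a frequently changing tag bit tends to break ties between colliding output items without disturbing the bits that already differ.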
Figure 13 lists each input and output when this further preferred embodiment of the present invention is applied to a DDR memory system. The two inputs of the XOR operator 400 are again a first input queue with eleven first input items and a second input queue with eleven second input items, selected as described above: each of the eleven first input items is selected from the tag 485 and the composite tag 495, and each of the eleven second input items is selected from the set index 490. When the DIMM address 520, the side address 525, and the bank index 530 of the DDR memory address 500 together contain three bits, each first input item and each second input item consists of three bits.
To obtain more mutually different output items in the output queue, a key transition bit 405 can be selected for each first input item of the first input queue to replace the stable bit in the corresponding first output item, thereby obtaining several second output items that form a second output queue. It can be seen that each second output item differs from the subsequent second output items, so the data in the cache can be effectively distributed into pages of different banks.
In addition, the memory address remapping method of the present invention is not limited to DDR memory systems; it can also be applied to other memory systems, such as a Rambus memory system.
When another preferred embodiment of the present invention is applied to a Rambus memory system, as shown in Figure 14, a cache address 470 and one Rambus memory address 600 of several Rambus memory addresses are first provided, wherein the data at the cache address 470 are to be written to a target Rambus memory address 650 among the several Rambus memory addresses. The block offset 480 records related data of the cache address 470; the row address 615 in the page index 605 and the column 630, channel 635, and offset 640 in the page offset 610 record related data of the Rambus memory address 600; and the target row address 665 in the target page index 655 and the target column 680, target channel 685, and target offset 690 in the target page offset 660 record related data of the target Rambus memory address 650.
When the memory address remapping method of the present invention is applied to a Rambus memory system, several bits are selected from the set index 490 of the cache address 470 as one XOR input, according to the total number and positions of the bits in the bank address 620 and the device address 625 of the Rambus memory address 600; likewise, according to that total bit count, several bits are selected from the tag 485 and the composite tag 495 of the cache address 470 as the other XOR input. After the XOR operation is performed, a linear output is obtained and its bits are assigned as the target location address, which comprises the target bank address 670 and the target device address 675.
To obtain more mutually different outputs with distinct bits, the key transition bits 405 can be used to perform the replacement step; the operation steps when the present invention is applied to a Rambus memory system are similar to those when it is applied to a DDR memory system. In addition, when the present invention is applied to a Rambus memory system, the replacement step can use not only the key transition bits 405 but also a replacement bit 410, selected from the tag 485 of the cache address 470, to replace the lowest-order bit of the target bank address 670, wherein the replacement bit 410 has a more stable property than the other low-order bits of the tag 485. Thus, the data in the cache can be effectively distributed into pages of different banks.
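The Rambus variant can be sketched in the same style; the widths of the bank address (620) and device address (625) fields and the bit layout below are hypothetical:

```python
# Illustrative sketch of the Rambus variant; the assumption that the
# lowest-order list position holds the lowest-order target bank
# address bit is ours, not the patent's.

def rambus_remap(set_index_bits, tag_bits, replacement_bit):
    """XOR selected set-index bits with selected tag/composite-tag
    bits to form the target bank address (670) + device address (675)
    bits, then replace the lowest-order target bank address bit with a
    replacement bit (410) taken from the tag."""
    out = [a ^ b for a, b in zip(set_index_bits, tag_bits)]
    out[-1] = replacement_bit  # lowest-order target bank address bit
    return out

# Example with 4 selected bits per input:
print(rambus_remap([1, 0, 1, 1], [0, 1, 1, 0], 0))  # [1, 1, 0, 0]
```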
Moreover, the application of the present invention is not limited to the above embodiments; the present invention can also be applied to various other memory systems to improve the page hit rate and the bandwidth.
An advantage of the present invention is that it provides a memory address remapping method, and in particular one applicable to various memory systems (such as DDR memory systems and Rambus memory systems), thereby improving the efficiency of memory access and reducing the time the processor and other components take to access the memory modules. Compared with prior art designs, the present invention provides a method of effectively distributing data into pages of different banks, so that the page hit rate is increased and the latency of memory access is reduced.
It should be understood that persons of ordinary skill in the art may make various corresponding changes and modifications according to the technical solution and technical concept of the present invention, and all such changes and modifications shall fall within the protection scope of the appended claims of the present invention.

Claims (5)

1. A memory address remapping method, applied to a memory system with several memory addresses, wherein data can be stored from a cache to a target location of a memory module in the memory system, wherein each memory address has a location address, the memory address remapping method comprising at least:
providing a cache address corresponding to the data, wherein the cache address includes at least a tag, a composite tag, a set index, and a block offset;
providing a linear operator;
performing a linear operation on a first linear operation input and a second linear operation input to obtain a first output, wherein the first linear operation input is several first input bits selected from the set index according to the corresponding positions and quantity of several bits in the location address of one memory address of the several memory addresses, and the second linear operation input is several second input bits selected from the tag and the composite tag according to the quantity of the bits in the location address;
performing a replacement step comprising at least:
selecting several key transition bits from the tag to replace a part of several first output bits in the first output, thereby obtaining a second output, wherein the part of the first output bits is more stable than the other first output bits in the first output, the key transition bits are selected from the tag according to the positions of the first output bits in the first output, and the key transition bits have the characteristic of changing more frequently than the bits of the tag that are higher-order than the key transition bits; and
assigning the second output as the target location address of a target memory address of the several memory addresses, wherein the data are stored from the cache to the target location corresponding to the target memory address.
2. The memory address remapping method according to claim 1, wherein the quantity of the bits in the location address equals the quantity of the several first output bits in the first output.
3. The memory address remapping method according to claim 1, wherein, when the memory system is a DDR memory system, the several memory addresses are several DDR memory addresses, the target memory address is a target DDR memory address, the location address in each DDR memory address includes at least a DIMM address, a side address, and a bank index, and the target location address in the target DDR memory address includes at least a target DIMM address, a target side address, and a target bank index.
4. The memory address remapping method according to claim 1, wherein, when the memory system is a Rambus memory system, the several memory addresses are several Rambus memory addresses, the target memory address is a target Rambus memory address, the location address in each Rambus memory address includes at least a bank address and a device address, and the target location address in the target Rambus memory address includes at least a target bank address and a target device address.
5. The memory address remapping method according to claim 1, wherein the linear operator is an XOR operator and the linear operation is an XOR operation.
CNB031027253A 2003-01-16 2003-01-16 Remapping method of internallystored address Expired - Fee Related CN1269043C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB031027253A CN1269043C (en) 2003-01-16 2003-01-16 Remapping method of internallystored address


Publications (2)

Publication Number Publication Date
CN1517882A true CN1517882A (en) 2004-08-04
CN1269043C CN1269043C (en) 2006-08-09

Family

ID=34281866

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB031027253A Expired - Fee Related CN1269043C (en) 2003-01-16 2003-01-16 Remapping method of internallystored address

Country Status (1)

Country Link
CN (1) CN1269043C (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100390764C (en) * 2004-11-05 2008-05-28 国际商业机器公司 Method, system, and program for transferring data directed to virtual memory addressed to a device memory
CN101237595B (en) * 2008-02-25 2011-07-13 中兴通讯股份有限公司 Data load method
CN102404652A (en) * 2010-09-10 2012-04-04 高通创锐讯通讯科技(上海)有限公司 Double data rate (DDR) buffer Ethernet network package method for optical line terminal (OLT) or optical network unit (ONU) chip in Ethernet passive optical network (EPON)
CN102331978A (en) * 2011-07-07 2012-01-25 曙光信息产业股份有限公司 DMA (Direct Memory Access) controller access implementation method for Loongson blade large-memory address devices
CN104899159A (en) * 2014-03-06 2015-09-09 华为技术有限公司 High-speed Cache address mapping processing method and apparatus
US9984003B2 (en) 2014-03-06 2018-05-29 Huawei Technologies Co., Ltd. Mapping processing method for a cache address in a processor to provide a color bit in a huge page technology
CN104899159B (en) * 2014-03-06 2019-07-23 华为技术有限公司 The mapping treatment method and device of the address cache memory Cache
CN113900966A (en) * 2021-11-16 2022-01-07 北京微核芯科技有限公司 Access method and device based on Cache

Also Published As

Publication number Publication date
CN1269043C (en) 2006-08-09


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee