TW417048B - Mapping method of distributed cache memory - Google Patents

Mapping method of distributed cache memory Download PDF

Info

Publication number
TW417048B
Authority
TW
Taiwan
Prior art keywords
memory
cacheable
bits
bit
cache
Prior art date
Application number
TW88103217A
Other languages
Chinese (zh)
Inventor
Jin Lai
Chian-Yu Chen
Original Assignee
Via Tech Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Via Tech Inc filed Critical Via Tech Inc
Priority to TW88103217A priority Critical patent/TW417048B/en
Priority to DE1999157810 priority patent/DE19957810A1/en
Priority to JP2000019307A priority patent/JP3841998B2/en
Application granted granted Critical
Publication of TW417048B publication Critical patent/TW417048B/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0888Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using selective caching, e.g. bypass

Abstract

There is disclosed a mapping method of distributed cache memory, which uses a tag mapping table to scatter the cacheable range of the cache memory. A set of address bits is selected from the address space to correspond to the tag mapping table. For each possible combination of the selected bits, the user defines whether the system memory mapped by the corresponding tag mapping table entry is a cacheable block or a non-cacheable block. The frequently accessed top and bottom portions of the system memory can therefore be defined as cacheable at the same time; alternatively, the user can define any desired blocks as the cacheable or non-cacheable range, so that the cacheable range of the memory is distributed as desired instead of being contiguous.

Description

The present invention relates to a method of reading memory, and in particular to a mapping method for a cache memory.

Fig. 1 is a block diagram of a conventional cache device. The cache device 110 consists mainly of a cache memory 111 and a cache control circuit 112. The cache control circuit 112 controls the cache memory 111 and is responsible for the operation of the entire cache device 110. The cache memory 111 in turn comprises a data memory (data RAM) 113 and a tag memory (tag RAM) 114: the data memory 113 stores the data corresponding to the main memory 140, while the tag memory 114 stores the corresponding main-memory address information, one bit of which can serve as a dirty bit to indicate whether the data in the data memory 113 has been modified.

Fig. 2A shows the correspondence between the cache memory and the main memory. Because the cache memory can hold only part of the data in the main memory, the data actually corresponding to the main memory is stored in the data memory, and the tag memory records the address of the part of the main memory from which that data comes. The index address of the data memory, combined with the corresponding entry in the tag memory, therefore equals the actual address in the main memory, as shown in Fig. 2B. If every data address in the main memory has exactly one position in the cache memory corresponding to it, the cache structure is called direct mapped, because each location of the main memory is mapped onto one fixed location of the cache memory.

Because the cache memory holds only part of the data in the main memory, and the CPU operates mainly by reading and writing the cache memory, the cache device must, when handling the CPU's read and write requests, determine whether the access hits the cache memory and whether data must be moved from the main memory into the cache again.
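As a rough sketch of the hit check just described (our own illustrative model, not the patent's circuit), a direct-mapped lookup splits the CPU address into an index that selects a line and a tag that is compared against the tag RAM entry; the 19-bit index matches the 512K cache used in the examples, and all names are ours:

```python
CACHE_SIZE = 512 * 1024   # 512K direct-mapped cache, as in the examples
INDEX_BITS = 19           # log2(512K): selects a location in the data RAM

def split_address(addr):
    """Split a physical address into (tag, index) as in Fig. 2B:
    the index addresses the data RAM, the tag is kept in the tag RAM."""
    index = addr & ((1 << INDEX_BITS) - 1)
    tag = addr >> INDEX_BITS
    return tag, index

def is_hit(tag_ram, addr):
    """Hit check: compare the tag of the CPU address with the tag RAM entry."""
    tag, index = split_address(addr)
    return tag_ram.get(index) == tag

# Fill one cache line for an arbitrary address.
tag_ram = {}
tag, index = split_address(0x00A12345)
tag_ram[index] = tag
```

Two addresses that differ by a multiple of the cache size share an index but differ in tag, so the second one misses and its line must be refilled from main memory, which is exactly the decision the cache control circuit 112 makes.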

The hit determination works as follows: when a read or write request is received from the CPU, the address sent by the CPU is compared with the contents of the tag memory; if they are the same, the access is a hit. The cache memory thus uses tag mapping to make certain memory cacheable, i.e. able to produce the aforementioned hit. However, the tag has a limited number of bits, generally only 7 or 8, so the memory is divided by the tag width into a cacheable part and a non-cacheable part.

Suppose the system memory is 256M and the cache memory is 512K with an 8-bit tag. The maximum cacheable range that can be mapped is then limited by the tag width to 2^8 times 512K, namely 128M, as shown in Fig. 3. The conventional mapping method makes the cacheable range contiguous: in Fig. 3, the first part 200a of the 256M system memory, the range from 0M to 128M, is cacheable, while the memory from 128M up to 256M is set by the cacheable controller as the non-cacheable range. The conventional tag mapping thus divides the memory into two parts, a lower cacheable memory 200a and an upper non-cacheable memory 200b. However, the operating system (OS) constantly uses the upper memory for stacking or status keeping, for example stacking downward from the 256M address or upward from the 0M address. Because the conventional tag mapping can only make the lower part 200a, 0-128M, the cacheable range, it is very inefficient for the operating system. How to make both the uppermost part 200b and the lowermost part 200a of the system memory cacheable is therefore the direction of research in cache control.

In summary, the conventional tag mapping of a cache memory can map only one contiguous cacheable region, the lower part of the system memory, so the uppermost memory, which the operating system accesses most frequently, cannot become part of the cacheable range, and the efficiency of the whole system is reduced.

It is therefore an object of the present invention to provide a mapping method of distributed cache memory in which both the uppermost and the lowermost parts of the system memory can be cacheable.

Another object of the invention is to provide a mapping method of distributed cache memory that can scatter the cacheable range of the system memory according to need, rather than confining it to a single contiguous memory block.

A further object of the invention is to provide a mapping method of distributed cache memory in which the parts of the system memory accessed most frequently by the operating system can be set as the cacheable range, so as to increase the efficiency of the system.

To achieve these and other objects, the invention proposes a mapping method of distributed cache memory, briefly described as follows. In an address space, a set of address bits is selected to correspond to a tag mapping table. For each possible combination of the selected bits, the user defines whether the system memory mapped by the corresponding tag mapping table entry is a cacheable block or a non-cacheable block. Because the cacheable and non-cacheable blocks are defined by the user on these specific address bits, the uppermost and lowermost parts of the system memory can be defined as cacheable at the same time, or the user can define whichever blocks are to be the cacheable or non-cacheable range, so that the cacheable range of the memory is no longer contiguous but scattered as required.

Thereby, using the mapping method proposed by the invention, the user can define the uppermost and lowermost blocks of the system memory, which the operating system accesses frequently, as the cacheable range, increasing the efficiency of the system.

In order that the above objects, features and advantages of the invention may be more readily understood, preferred embodiments are described in detail below in conjunction with the accompanying drawings.

Brief description of the drawings:

Fig. 1 is a block diagram of a conventional cache device;
Fig. 2A shows the correspondence between a cache memory and a main memory;
Fig. 2B shows the cache memory addressing method;
Fig. 3 is a schematic diagram of the cacheable and non-cacheable ranges of a conventional system memory;
Fig. 4 shows the correspondence between the tag mapping table and the memory in the mapping method of distributed cache memory according to the invention; and
Fig. 5 is a schematic diagram of one of the address correspondence examples in Fig. 4.

Reference numerals: 110 cache device; 111 cache memory; 112 cache control circuit; 113 data memory; 114 tag memory; 120 CPU; 140 main memory; 200a memory in the cacheable range; 200b memory in the non-cacheable range.

Embodiment

The mapping method of distributed cache memory disclosed by the invention lets the memory blocks accessed frequently by the system be defined as cacheable memory, thereby increasing the efficiency of the system's memory accesses. In an address space, a group of bits at higher bit positions is selected to correspond to the tag mapping table. For each possible combination of this group of bits, the user defines whether the system memory mapped by the corresponding tag mapping table entry is a cacheable block or a non-cacheable block. Because these definitions are made by the user on the selected bits, the uppermost and lowermost parts of the system memory can be defined as cacheable at the same time, or the user can define whichever blocks are to be cacheable or non-cacheable, so that the cacheable range of the memory is scattered as required instead of being contiguous.

Referring to Fig. 4, which shows the correspondence between the tag mapping table and the memory in the mapping method of the invention, the operation and effect of the invention are explained below.
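The arithmetic of the conventional limit above can be checked numerically (an illustrative sketch with names of our choosing): a 512K cache with an 8-bit tag covers at most 512K × 2^8 = 128M, laid out as one contiguous block starting at address 0.

```python
M = 1024 * 1024

def max_cacheable(cache_size, tag_bits):
    """Largest memory span an n-bit tag can cover with this cache size."""
    return cache_size * (1 << tag_bits)

LIMIT = max_cacheable(512 * 1024, 8)   # 128M for the example of Fig. 3

def conventionally_cacheable(addr):
    """Prior-art mapping: a single contiguous cacheable block 0..LIMIT."""
    return addr < LIMIT
```

In the 256M system of Fig. 3 this leaves the entire upper half, including the region the OS uses for its downward-growing stack, non-cacheable.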

In the address space, a group of bits at higher bit positions serves as the address bits corresponding to the tag bits, and this group of bits is passed through an encoding procedure. If the encoded bits agree with the tag bits, the memory is a cacheable part; in this way the cacheable range of the memory becomes distributed rather than continuously arranged as in the conventional scheme.

For example, let the address bits A[22:20] of the address space be the bits corresponding to the tag, so that the combinations of these three bits determine the cacheable memory parts to which the tag mapping table corresponds. Take a memory capacity of 8M as an example, with a cache memory of 512K and a three-bit tag: the size of the cacheable memory mapped through the tag is then 2^3 times 512K, i.e. 4M. In other words, 4M of the 8M memory is cacheable.

The address bits A[22:20] have eight different combinations in total, running from (000) to (111) as shown in Fig. 4, which correspond respectively to the eight memory blocks 8M-7M, 7M-6M, ..., 2M-1M and 1M-0M; in this example the memory is thus divided into eight equal parts. Because the operating system constantly uses the upper memory for stacking or status keeping, that region is accessed frequently, so the uppermost and lowermost parts are first set as the cacheable range, namely the two parts 8M-7M and 1M-0M in Fig. 4, 2M in total. The remaining 2M of cacheable range can then be assigned as required. The final cacheable range of the memory, as shown in the figure, exhibits a scattered distribution, and the uppermost and lowermost parts that the system accesses most frequently, 8M-7M and 1M-0M, are set as cacheable parts. The efficiency of the system's memory accesses can therefore be greatly improved.
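The 8M example above can be modeled directly (a minimal sketch; the table contents and function names are ours). Note that Figs. 4-5 enumerate the combinations top-down, with (000) corresponding to the top block 8M-7M and (111) to the bottom block 1M-0M, so combination k covers the addresses [(7-k)M, (8-k)M):

```python
M = 1024 * 1024

# User-defined tag mapping table: entry k tells whether the memory block
# selected by combination k of A[22:20] is cacheable.  Four entries are True
# (4M total, matching the 512K cache with a 3-bit tag): the top block 8M-7M
# (000), the bottom block 1M-0M (111), and two middle blocks chosen freely.
tag_mapping_table = [True, False, True, False, False, True, False, True]

def combination(addr):
    """A[22:20] combination for addr, in the figures' top-down order."""
    return 7 - ((addr >> 20) & 0b111)

def is_cacheable(addr):
    return tag_mapping_table[combination(addr)]
```

An address in the 5M-4M block yields combination (011), the case drawn in Fig. 5, and the scattered True entries show the cacheable range distributed across the memory instead of forming one contiguous region.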

Ja 丁 本紙浪尺度適用中國阗家標學-(CNS ) Μ規格(2ί〇Χ297公釐) 4026twf.d〇c/006 A7 B7 铿濟部中央標準局負工消費合作社印龙 五、發明説明(7) 程序將標籤編碼成(01),而此對應的記憶體範圍爲5M〜4M 的一對應圖示。 因此,當系統記憶體具有512M,且有8位元的標籤 時,其可快取記憶體的大小爲128M。以8M爲一單位,藉 由上述的方法,依據所選擇的位址與一編碼程序將標籤位 元指向記億體可快取的範圍加以打散,更可以將記憶體上 層與下層部分等範圍系統長用的範圍設定成可快取記憶 體,以提高系統的效率。 因此,本發明的特徵係將位址空間中一特定的位址做 爲與標籤位元對應的位址,將此特定位址經由一編碼程 序,若與標籤位元一致,記憶體變爲可快取部分,藉此使 得記憶體之可快取範圍爲分散式而非如傳統一般之連續分 佈的型態。 本發明的另一特徵係系統記憶體的最上層與最下層部 分均設定成爲可快取範圍。此部份係作業系統存取最頻繁 的部分,所以系統存取記憶體的效率可以大幅提高。 本發明的再一特徵係系統記憶體的可快取範圍可以任 意設定,不在偈限於傳統的連續分佈型態。 綜上所述,雖然本發明已以較佳實施例揭露如上,然 其並非用以限定本發明,任何熟習此技藝者’在不脫離本 發明之精神和範圍內,當可作各種之更動與潤飾’因此本 發明之保護範圍當視後附之申請專利範圍所界定者爲準。 9 本紙张尺度適用中圏囡家標準(CNS ) Λ4現格(公釐) (請先聞讀背面之注項再填寫本頁) .裝·Ja Ding Ben's paper scale is applicable to China's family standard- (CNS) M specifications (2ί297 × 297 mm) 4026twf.d〇c / 006 A7 B7 Yin Long, Consumer Work Cooperative of the Central Standards Bureau of the Ministry of Economic Affairs 7) The program encodes the label as (01), and the corresponding memory range is a corresponding icon of 5M ~ 4M. Therefore, when the system memory has 512M and has 8-bit tags, the size of its cache memory is 128M. With 8M as a unit, by the above method, according to the selected address and an encoding program, the tag bits are pointed to the range that can be cached by the memory, and the upper and lower parts of the memory can be further divided. The system long-term range is set to cache memory to improve the efficiency of the system. Therefore, the feature of the present invention is to use a specific address in the address space as the address corresponding to the tag bit, and pass this specific address through an encoding process. If it is consistent with the tag bit, the memory becomes available. The cache part, thereby making the cacheable range of the memory a decentralized type rather than a continuously distributed type as traditional. Another feature of the present invention is that both the uppermost layer and the lowermost portion of the system memory are set to a cacheable range. 
This part is the most frequently accessed part of the operating system, so the efficiency of system memory access can be greatly improved. Another feature of the present invention is that the cacheable range of the system memory can be arbitrarily set, and is not limited to the traditional continuous distribution mode. In summary, although the present invention has been disclosed in the preferred embodiment as above, it is not intended to limit the present invention. Any person skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention. Retouching 'Therefore, the scope of protection of the present invention shall be determined by the scope of the appended patent application. 9 This paper size is applicable to China Standard (CNS) Λ4 grid (mm) (please read the note on the back before filling this page).
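Scaling the same scheme as the text describes — a 512M system, 8-bit tag, 512K cache, 8M granularity — gives 64 selectable blocks, of which at most 16 (128M) can be marked cacheable; a sketch under those numbers (the helper names are ours):

```python
M = 1024 * 1024
SYSTEM_SIZE = 512 * M
BLOCK_SIZE = 8 * M                      # 8M granularity chosen in the text
N_BLOCKS = SYSTEM_SIZE // BLOCK_SIZE    # 64 selectable blocks
BUDGET = (128 * M) // BLOCK_SIZE        # 512K cache * 2^8 tag -> 16 blocks

def make_table(chosen):
    """Build the cacheable-block table from a set of block indices
    (0 = bottom 8M block, 63 = top 8M block)."""
    assert len(chosen) <= BUDGET, "more blocks than the tag can cover"
    return [i in chosen for i in range(N_BLOCKS)]

# Mark the bottom and top 8M (the ranges the OS uses most) plus some
# freely chosen middle blocks, staying within the 128M budget.
table = make_table({0, 63, 8, 16, 24, 32})

def is_cacheable(addr):
    return table[addr // BLOCK_SIZE]
```

The table can hold any mixture of blocks up to the budget, which is the "set arbitrarily" property claimed for the invention.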


Claims (1)

1. A mapping method of distributed cache memory for setting a memory as a cacheable range, comprising:
taking a plurality of bits among the higher address bits of an address space as the bits corresponding to a tag bit field;
encoding those bits through an encoding procedure; and
when the encoded bits agree with the tag bits, treating the memory as a cacheable part, wherein the uppermost and lowermost ranges of the memory to which the bits correspond are set in advance as the cacheable range.

2. A mapping method of distributed cache memory for setting a memory as a cacheable range, comprising:
taking a plurality of bits among the higher address bits of an address space as the bits corresponding to a tag bit field;
encoding those bits through an encoding procedure; and
when the encoded bits agree with the tag bits, treating the memory as a cacheable part.

3. The mapping method of cache memory as claimed in claim 2, wherein the agreement of the encoded bits with the tag bits is set by the system.

4. The mapping method of cache memory as claimed in claim 2, wherein the ranges of the memory to which the bits correspond that are used most frequently by the system are set in advance as the cacheable range.
TW88103217A 1999-03-03 1999-03-03 Mapping method of distributed cache memory TW417048B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
TW88103217A TW417048B (en) 1999-03-03 1999-03-03 Mapping method of distributed cache memory
DE1999157810 DE19957810A1 (en) 1999-03-03 1999-12-01 Scatter imaging method for cache memory device involves comparing encoded address tag with tags from tag imaging table, whereby tags represent cacheable memory locations
JP2000019307A JP3841998B2 (en) 1999-03-03 2000-01-27 Distributed cache memory mapping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW88103217A TW417048B (en) 1999-03-03 1999-03-03 Mapping method of distributed cache memory

Publications (1)

Publication Number Publication Date
TW417048B true TW417048B (en) 2001-01-01

Family

ID=21639846

Family Applications (1)

Application Number Title Priority Date Filing Date
TW88103217A TW417048B (en) 1999-03-03 1999-03-03 Mapping method of distributed cache memory

Country Status (3)

Country Link
JP (1) JP3841998B2 (en)
DE (1) DE19957810A1 (en)
TW (1) TW417048B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968721A (en) * 2009-07-27 2011-02-09 巴比禄股份有限公司 Method to speed up access to an external storage device and external storage device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10101552A1 (en) * 2001-01-15 2002-07-25 Infineon Technologies Ag Cache memory and addressing method
DE10158393A1 (en) 2001-11-28 2003-06-12 Infineon Technologies Ag Memory for the central unit of a computer system, computer system and method for synchronizing a memory with the main memory of a computer system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101968721A (en) * 2009-07-27 2011-02-09 巴比禄股份有限公司 Method to speed up access to an external storage device and external storage device

Also Published As

Publication number Publication date
DE19957810A1 (en) 2000-09-07
JP3841998B2 (en) 2006-11-08
JP2000259499A (en) 2000-09-22

Similar Documents

Publication Publication Date Title
US8533426B2 (en) Methods and apparatus for providing independent logical address space and access management
CN105630703B (en) Utilize the method and related cache controller of the control cache access of programmable Hash Round Robin data partition
KR930004430B1 (en) Apparatus for maintaining consistency in a multiprocessor computer system using caching
CN102893266B (en) Memory controller mapping on-the-fly
US7777752B2 (en) Method of implementing an accelerated graphics port for a multiple memory controller computer system
WO2016082196A1 (en) File access method and apparatus and storage device
TW591385B (en) Apparatus and method for determining a physical address from a virtual address by using a hierarchical mapping regulation with compressed nodes
US6252612B1 (en) Accelerated graphics port for multiple memory controller computer system
US10628303B2 (en) Storage device that maintains a plurality of layers of address mapping
WO2016082191A1 (en) File access method and apparatus
TW502164B (en) Method and apparatus for reducing power in cache memories and a data processing system having cache
CA2057494A1 (en) Translation lookaside buffer
CN110532200B (en) Memory system based on hybrid memory architecture
US5132927A (en) System for cache space allocation using selective addressing
TW417048B (en) Mapping method of distributed cache memory
RU2003136262A (en) USING A CONTEXTAL ID IN THE MEMORY CACHE
JP4022369B2 (en) Accelerated graphics port for multi-memory controller computer system
JP2007280421A (en) Data processor
CN107526528B (en) Mechanism for realizing on-chip low-delay memory
CN107155306B (en) File page management unit, processing device and file page management method
US7071946B2 (en) Accelerated graphics port for a multiple memory controller computer system
US6816943B2 (en) Scratch pad memories
TWI223145B (en) Method for detecting logical address of non-volatile storage medium
CN110309081A (en) The method of FTL read-write data page based on compression storage and address of cache list item
JP2005108262A (en) Data processor

Legal Events

Date Code Title Description
GD4A Issue of patent certificate for granted invention patent
MK4A Expiration of patent term of an invention patent