CN102360339A - Method for improving utilization efficiency of TLB (translation lookaside buffer) - Google Patents

Method for improving utilization efficiency of TLB (translation lookaside buffer)

Info

Publication number
CN102360339A
CN102360339A CN2011103012312A CN201110301231A
Authority
CN
China
Prior art keywords
tlb
block
reuse frequency
miss
reuse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011103012312A
Other languages
Chinese (zh)
Inventor
陈天洲
马建良
虞保忠
邵景程
全佰行
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN2011103012312A priority Critical patent/CN102360339A/en
Publication of CN102360339A publication Critical patent/CN102360339A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method for improving the utilization efficiency of a TLB (translation lookaside buffer). The method comprises the following steps: adding an RF (reuse frequency) predictor and a filter buffer; on a TLB hit, adding 1 to the actual reuse frequency of the hit block; on a TLB miss, predicting the reuse frequency of the missing block with the RF predictor, which stores the reuse information of evicted blocks; and presetting a filter threshold and comparing the predicted reuse frequency with it: if the predicted reuse frequency is greater than the filter threshold, the missing block is inserted into the TLB and its actual reuse frequency field in the TLB is set to zero; otherwise the missing block is inserted into the filter buffer and its actual reuse frequency field in the filter buffer is set to zero. The benefit of the method is that, through the filtering mechanism, rarely reused blocks are placed in the filter buffer rather than in the TLB, thereby improving the utilization efficiency of the TLB, indirectly lowering the TLB miss rate, and enhancing overall system performance.

Description

A method for improving TLB utilization efficiency
Technical field
The present invention relates to the microprocessor field of improving TLB utilization efficiency, and in particular to a method that uses a filtering mechanism to improve TLB utilization efficiency.
Background technology
Because virtual memory technology can partition memory and allocate it to different processes, automatically manage the memory hierarchy, and simplify program loading, essentially all modern microprocessors adopt virtual memory. In a virtual memory system, whether segmentation, paging, or a combination of both is used, the processor generates virtual addresses, and only after a series of hardware and software translations is the physical address that is actually accessed obtained. This process is called memory mapping or address translation. The microprocessor uses a memory management unit (MMU) to manage the mapping between virtual and physical addresses. Both paging and segmentation depend on a data structure indexed by page number or segment number; this data structure, which contains the physical page addresses, usually takes the form of a page table. As the capacity of physical memory grows, the page table also grows. A large page table is usually placed in main memory, which means that fetching data from memory requires at least two accesses: one access to obtain the physical address and another to obtain the data. By exploiting the principle of locality, these address translations can be kept in a special cache, reducing the need for the second memory access. This special address-translation cache is called the translation lookaside buffer (TLB).
Because data and instruction accesses behave very differently, in modern computer architectures the TLB, like the cache, is split into separate data and instruction units, i.e., an instruction TLB and a data TLB. Fig. 1 shows a TLB structure known in the art. In Fig. 1, the TLB uses fully associative mapping. Virtual address 110 is the 48-bit virtual address produced by the CPU; its upper 36 bits represent the virtual page number and are compared against every TLB entry, while its lower 12 bits represent the page offset. In Fig. 1, multiplexer 120 is a 40:1 multiplexer, so this TLB contains 40 entries. Entry 130 is one of those 40 entries and contains several fields of different lengths, each with a specific meaning. For example, the V field is 1 bit long and indicates whether entry 130 is valid: the value "1" indicates that entry 130 is valid, and "0" indicates that it is invalid. Note that the tag field of entry 130 is 36 bits long and is compared against the upper 36 bits (the virtual page number) of virtual address 110; the comparison result drives the control input of multiplexer 120. The physical address field of entry 130 is 28 bits long, holds the upper 28 bits of the physical address, and feeds the data input of multiplexer 120.
In Fig. 1, translation begins by sending virtual address 110 to all TLB entries: the upper 36 bits of the virtual address (the virtual page number) are compared with the tag field of each entry, and the comparison results drive the control input of multiplexer 120. The upper 28 bits of each TLB entry feed the data inputs of multiplexer 120, which selects the upper 28 bits of the physical address; these serve as the physical page number. The lower 12 bits of the physical address (the page offset) are simply the lower 12 bits of the virtual address, and concatenating the page offset with the physical page number yields the complete 40-bit physical address. In other words, the TLB is indexed by a 48-bit virtual address and returns a 40-bit physical address. If the requested virtual address is present in the TLB, i.e., the upper 36 bits of virtual address 110 match the tag field of exactly one entry 130, then the 40-bit physical address is obtained quickly through the TLB and can be used to access physical memory; this is called a TLB hit. Conversely, if the requested address is not present in the TLB, it is called a TLB miss; the virtual-to-physical translation must then be performed by walking the page table, which, as noted above, is usually large and kept in main memory. This is a lengthy process that introduces significant delay and severely degrades system performance.
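To make the lookup just described concrete, the following minimal C sketch models a fully associative TLB with the Fig. 1 parameters (48-bit virtual address, 36-bit virtual page number, 12-bit page offset, 40 entries, 40-bit physical address). The structure and names are illustrative only and do not reproduce the figure's hardware.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 40

typedef struct {
    bool     valid;   /* V field: entry holds a live translation      */
    uint64_t vpn;     /* tag: upper 36 bits of the virtual address    */
    uint64_t ppn;     /* upper 28 bits of the physical address        */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Fully associative lookup: returns true and the 40-bit physical
 * address on a hit; false on a miss (the caller would then walk the
 * page table). */
bool tlb_lookup(uint64_t vaddr48, uint64_t *paddr40)
{
    uint64_t vpn    = vaddr48 >> 12;     /* upper 36 bits              */
    uint64_t offset = vaddr48 & 0xFFF;   /* lower 12 bits: page offset */

    for (int i = 0; i < TLB_ENTRIES; i++) {     /* all tags compared   */
        if (tlb[i].valid && tlb[i].vpn == vpn) {/* (in hardware: in    */
            *paddr40 = (tlb[i].ppn << 12) | offset; /* parallel)       */
            return true;                 /* TLB hit                    */
        }
    }
    return false;                        /* TLB miss                   */
}
```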
Because a TLB miss causes a large delay, improving TLB efficiency has a significant impact on system performance. Earlier studies have shown that TLB handling accounts for 5-10% of system execution time, and in some extreme cases this ratio can reach 40%. TLB handling managed by software (i.e., the OS) has been observed to occupy up to 80% of kernel execution time. Existing work attempts to reduce the TLB access time and miss rate and thereby improve overall system performance. It points out that TLB characteristics such as size, associativity, and multi-level hierarchy have an important influence on TLB access time and miss cost, but how to improve the utilization efficiency of the TLB itself is rarely addressed. It is therefore necessary to develop a scheme that remedies the aforementioned drawbacks, improves TLB utilization efficiency, and thereby improves overall system performance.
Summary of the invention
The purpose of the embodiments of the invention is to provide a method for improving TLB utilization efficiency. The method uses a filtering mechanism to improve the utilization efficiency of the TLB and thereby improve overall system performance.
The embodiments of the invention are achieved as follows: a method for improving TLB utilization efficiency comprises the following steps:
Add an RF predictor and a filter buffer;
When no TLB miss occurs, whether the hit is in the TLB or in the filter buffer, add 1 to the actual reuse frequency of the hit block;
When a TLB miss occurs, use the RF predictor to predict the reuse frequency of the missing block; the RF predictor stores the block reuse information of evicted blocks;
Set a filter threshold in advance and compare the predicted reuse frequency with it. If the predicted reuse frequency is greater than the filter threshold, insert the missing block into the TLB and set its actual reuse frequency field in the TLB to 0; otherwise insert the missing block into the filter buffer and set its actual reuse frequency field in the filter buffer to 0. A sketch of this insertion decision is given below.
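A minimal sketch of the insertion decision, under assumptions: `rf_predict`, `tlb_insert`, and `filter_buffer_insert` are hypothetical helper names introduced only for illustration, and the threshold value of 2 is one of the two settings tested in the embodiment.

```c
#include <stdint.h>

#define FILTER_THRESHOLD 2             /* the embodiment tests 0 and 2  */

/* Assumed helpers (illustrative, not from the patent text): */
unsigned rf_predict(uint64_t vaddr);           /* predicted reuse count */
void     tlb_insert(uint64_t vaddr);           /* reuse counter set to 0*/
void     filter_buffer_insert(uint64_t vaddr); /* reuse counter set to 0*/

/* Core filtering decision taken on a TLB miss. */
void handle_tlb_miss(uint64_t vaddr)
{
    if (rf_predict(vaddr) > FILTER_THRESHOLD)
        tlb_insert(vaddr);            /* expected to be reused often    */
    else
        filter_buffer_insert(vaddr);  /* rarely reused: bypass the TLB  */
}
```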
Further, if the missing block is inserted into the data TLB, then according to the replacement algorithm the data TLB produces an evicted block; the actual reuse frequency of the evicted block is used to update that block's reuse information in the predictor, and the evicted block is then discarded.
Further, if the missing block is inserted into the filter buffer, the filter buffer's replacement algorithm produces an evicted block. If the predicted reuse frequency of this evicted block equals its actual reuse frequency, the block reuse information in the RF predictor is updated with this actual reuse frequency and the evicted block is discarded; otherwise the evicted block is inserted into the data TLB.
Further, the field structure of said TLB includes an actual reuse frequency field, and the field structure of said filter buffer is identical to the field structure of said TLB.
Further, said filter buffer is independent of the TLB and, in the logical hierarchy, lies between the TLB and the page table.
Further, the capacity of said filter buffer is smaller than that of said TLB.
Further, said RF predictor uses the result of XOR-ing the low 4 bits with the high 4 bits of the virtual address of said missing block as its index.
Further, when an evicted block of said filter buffer is inserted into the TLB, the actual reuse frequency of the evicted filter-buffer block is passed to the TLB along with it.
Further, the actual reuse frequency field in the field structure of said TLB is 2 bits in size and indicates the actual number of times this TLB block has been accessed.
Compared with the prior art, the present invention improves TLB utilization efficiency by adding a filtering mechanism: blocks that are rarely reused are placed in the filter buffer rather than in the TLB, improving the utilization efficiency of the TLB, indirectly reducing the TLB miss rate, and improving overall system performance. In addition, the present invention uses the LAST algorithm for prediction, which has very high accuracy.
Description of drawings
Fig. 1 is a TLB structure diagram of the prior art;
Fig. 2 is the data TLB hierarchy diagram of the embodiment of the invention;
Fig. 3 is the flow chart of the filtering mechanism of the embodiment of the invention;
Fig. 4 is the field structure diagram of the RF predictor of the embodiment of the invention;
Fig. 5 is the field structure diagram of the data TLB and the filter buffer of the embodiment of the invention.
Embodiment
To make the purpose, technical scheme, and advantages of the present invention clearer, the present invention is further described below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention and not to limit it.
The method of the present invention for improving TLB utilization efficiency adds a filtering mechanism to the TLB hierarchy, including adding an RF predictor and a filter buffer. When no TLB miss occurs, whether the hit is in the TLB or in the filter buffer, 1 is added to the actual reuse frequency of the hit block. When a TLB miss occurs, the RF predictor, which stores the block reuse information of evicted blocks, is used to predict the reuse frequency of the missing block. A filter threshold is set in advance and the predicted reuse frequency is compared with it: if the predicted reuse frequency is greater than the filter threshold, the missing block is inserted into the TLB and its actual reuse frequency field in the TLB is set to 0; otherwise the missing block is inserted into the filter buffer and its actual reuse frequency field in the filter buffer is set to 0.
If the missing block is inserted into the TLB, the TLB produces an evicted block; the actual reuse frequency of the evicted TLB block is used to update the block reuse information at the corresponding address in the RF predictor, and the evicted TLB block is discarded. If the missing block is inserted into the filter buffer, the filter buffer produces an evicted block. If the predicted reuse frequency of the evicted filter-buffer block equals its actual reuse frequency, the actual reuse frequency is used to update the block reuse information at the corresponding address in the RF predictor and the evicted filter-buffer block is discarded; otherwise the evicted filter-buffer block is inserted into the TLB. When an evicted filter-buffer block is inserted into the TLB, the TLB in turn produces another evicted block, whose actual reuse frequency is used to update the block reuse information at the corresponding address in the RF predictor before it is discarded. The capacity of the filter buffer is smaller than that of the TLB; the field structure of the TLB is the traditional TLB field structure plus an actual reuse frequency field, and the field structure of the filter buffer is identical to that of the TLB. The filter buffer is independent of the TLB and, in the logical hierarchy, lies between the TLB and the page table.
The present invention is applicable both to the data TLB and to the instruction TLB. This embodiment is described as applied to the data TLB and is implemented on the SimpleScalar 3.0d simulator. Table 1 below shows, for this embodiment, the proportion of data TLB blocks with each reuse count. Counting, for each PARSEC test program, the proportion of data TLB blocks that are reused fewer than 4 times shows that many TLB blocks are never reused or are reused only a few times, and these rarely reused blocks keep the utilization efficiency of the TLB low. Through the filtering mechanism, the embodiment of the invention places those rarely reused blocks (blocks with a reuse frequency less than 2) in the filter buffer, thereby leaving more TLB space for frequently reused blocks and improving the utilization efficiency of the TLB.
Table 1
Referring to Fig. 2, which shows the logical hierarchy of the data TLB: a filter 220 is inserted between data TLB 210 and page table 230, and the filtering mechanism of the present invention is implemented through this filter 220; page table 230 is managed by the MMU.
When no TLB miss occurs, the hit must be in either the data TLB or the filter buffer; whether the hit is in the data TLB or in the filter buffer, 1 is added to the actual reuse frequency field of the hit block. When a TLB miss occurs, referring to Fig. 3, the RF predictor uses the XOR of the low 4 bits and the high 4 bits of the missing request's virtual address as its index, predicts the reuse frequency of the missing block, and takes the block reuse information at the corresponding index address as the prediction result for the missing block. The predicted reuse frequency of the missing block is compared with the preset filter threshold: if it is greater than the filter threshold, the missing block is inserted into the data TLB and its actual reuse frequency field in the data TLB is set to 0; otherwise the missing block is inserted into the filter buffer and its actual reuse frequency field in the filter buffer is set to 0. If the missing block was inserted into the data TLB, then according to a specific replacement algorithm, such as LRU, the data TLB produces an evicted block; the actual reuse frequency of the evicted block is used to update that block's reuse information in the predictor so as to improve the accuracy of the next prediction, and, because the data TLB does not need to write back, the evicted block is simply discarded. If the missing block was inserted into the filter buffer, the filter buffer's replacement algorithm (LRU in this embodiment) produces an evicted block. If the predicted reuse frequency of this evicted block equals its actual reuse frequency, the predictor was correct and the evicted block is unlikely to be accessed again, so the block's reuse information in the RF predictor is updated with this actual reuse frequency and the evicted block is discarded. Otherwise the predictor was wrong and the evicted block will probably be accessed again, so it is inserted into the data TLB; in this case the actual reuse frequency of the evicted block is passed to the data TLB along with it and is not reset to 0. When an evicted filter-buffer block is inserted into the data TLB, the data TLB in turn produces another evicted block, whose actual reuse frequency is used to update that block's reuse information in the predictor so as to improve the accuracy of the next prediction; again, because the data TLB does not need to write back, this evicted block is simply discarded.
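The eviction handling described in this paragraph could look roughly like the following sketch; `block_t`, `tlb_evict_lru`, `fb_evict_lru`, `predictor_update`, and `tlb_insert_block` are assumed names introduced only for illustration.

```c
#include <stdint.h>

typedef struct {
    uint64_t vaddr;           /* virtual address of the block            */
    unsigned actual_reuse;    /* 2-bit actual reuse counter              */
    unsigned predicted_reuse; /* reuse frequency predicted at insertion  */
} block_t;

/* Assumed helpers (illustrative, not from the patent text): */
block_t tlb_evict_lru(void);
block_t fb_evict_lru(void);
void    predictor_update(uint64_t vaddr, unsigned reuse);
void    tlb_insert_block(block_t b);

void on_data_tlb_eviction(void)
{
    block_t victim = tlb_evict_lru();               /* LRU victim        */
    predictor_update(victim.vaddr, victim.actual_reuse);
    /* data TLB entries need no write-back: the victim is simply dropped */
}

void on_filter_buffer_eviction(void)
{
    block_t victim = fb_evict_lru();
    if (victim.predicted_reuse == victim.actual_reuse) {
        /* prediction was right: record the reuse count and drop it      */
        predictor_update(victim.vaddr, victim.actual_reuse);
    } else {
        /* prediction was wrong: promote to the data TLB, keeping the    */
        /* accumulated reuse counter (it is not reset to 0)              */
        tlb_insert_block(victim);                   /* may evict again   */
    }
}
```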
Referring to Fig. 4, which shows the field structure of the RF predictor: the first field is the index address, which identifies the location of the corresponding missed TLB block in memory. Since in this embodiment the RF predictor uses the XOR of the low 4 bits and the high 4 bits of the missing request's virtual address as its index, i.e., a 4-bit index, the RF predictor can index a total of 2^4 = 16 missing blocks. The second field is the block reuse information, which records the actual reuse frequency of the TLB block the last time and is used to predict the block's reuse frequency the next time.
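A minimal sketch of the predictor, assuming a 16-entry last-value table indexed by the XOR of the lowest and highest 4 bits of the 48-bit virtual address; the exact bit positions are an interpretation, since the text only says "low 4 bits" and "high 4 bits".

```c
#include <stdint.h>

#define RF_ENTRIES 16                 /* 4-bit index -> 2^4 = 16 entries */

typedef struct {
    unsigned last_reuse;              /* block reuse information         */
} rf_entry_t;

static rf_entry_t rf_predictor[RF_ENTRIES];

/* 4-bit predictor index: XOR of the low 4 bits and the high 4 bits of
 * the 48-bit virtual address of the missing request (bit positions are
 * an assumption). */
static unsigned rf_index(uint64_t vaddr48)
{
    unsigned low4  = (unsigned)(vaddr48 & 0xF);
    unsigned high4 = (unsigned)((vaddr48 >> 44) & 0xF);
    return low4 ^ high4;
}

/* Last-value ("LAST") prediction: the next reuse frequency is predicted
 * to equal the reuse frequency recorded at the last eviction. */
unsigned rf_predict(uint64_t vaddr48)
{
    return rf_predictor[rf_index(vaddr48)].last_reuse;
}

void predictor_update(uint64_t vaddr48, unsigned actual_reuse)
{
    rf_predictor[rf_index(vaddr48)].last_reuse = actual_reuse;
}
```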
Referring to Fig. 5, which shows the field structure of the data TLB and the filter buffer in this embodiment: the first field is the valid flag, which indicates whether the TLB block is valid; the block must be valid when addresses are compared. The second field is the read/write flag, which indicates whether the operation on this TLB block is a read or a write. The third field is the tag field, which stores the upper bits of the virtual address and is matched against the upper bits of the requested virtual address to determine whether the requested virtual address is present. The fourth field is the physical address field, which stores the upper bits of the physical address; if the tag comparison succeeds and the requested virtual address is indeed present, this field gives the upper bits of the physical address. The fifth field is the actual reuse frequency field, 2 bits in size, which indicates the actual number of times this TLB block has been accessed.
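Represented as a C record, one entry of the data TLB or the filter buffer might look like the struct below; the 36-bit tag and 28-bit physical address widths are taken from the Fig. 1 example and are assumptions here.

```c
#include <stdint.h>

/* One entry of the data TLB or of the filter buffer (identical layout).
 * Field widths for tag/ppn follow the Fig. 1 example and are assumed. */
typedef struct {
    uint8_t  valid;       /* field 1: 1-bit valid flag                   */
    uint8_t  rw;          /* field 2: 1-bit read/write flag              */
    uint64_t tag;         /* field 3: 36-bit upper virtual address bits  */
    uint32_t ppn;         /* field 4: 28-bit upper physical address bits */
    uint8_t  reuse;       /* field 5: 2-bit actual reuse frequency       */
} tlb_fb_entry_t;
```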
In this embodiment, filter thresholds of 0 and 2 are tested, and Table 2 below shows the improvement in TLB performance brought by the present scheme. With the filter threshold set to 0, only TLB blocks whose reuse frequency is 0, i.e., blocks used only once, are filtered; with the filter threshold set to 2, TLB blocks whose reuse frequency is less than or equal to 2 are filtered. As Table 2 shows, for all test programs the TLB performance improvement with a filter threshold of 2 is greater than with a filter threshold of 0. The present invention therefore not only improves the utilization efficiency of the TLB but also clearly improves TLB performance.
Table 2
The present invention adds a filtering mechanism to the TLB hierarchy and uses it to place TLB blocks predicted to be rarely used into the filter buffer instead of inserting them directly into the TLB; it uses the LAST algorithm for prediction, which has very high accuracy. This reduces the number of TLB blocks in the TLB that are used only a few times, improves the utilization efficiency of the TLB, indirectly reduces the TLB miss rate, and improves overall system performance.
The above is merely a preferred embodiment of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A method for improving TLB utilization efficiency, characterized in that it comprises the following steps:
adding an RF predictor and a filter buffer;
when no TLB miss occurs, whether the hit is in the TLB or in the filter buffer, adding 1 to the actual reuse frequency of the hit block;
when a TLB miss occurs, using the RF predictor to predict the reuse frequency of the missing block, the RF predictor storing the block reuse information of evicted blocks;
presetting a filter threshold and comparing the predicted reuse frequency with the filter threshold; if the predicted reuse frequency is greater than the filter threshold, inserting the missing block into the TLB and setting the actual reuse frequency field of the missing block in the TLB to 0; otherwise inserting the missing block into the filter buffer and setting the actual reuse frequency field of the missing block in the filter buffer to 0.
2. The method for improving TLB utilization efficiency according to claim 1, characterized in that: if the missing block is inserted into the data TLB, then according to the replacement algorithm the data TLB produces an evicted block; the block reuse information of this block in the predictor is updated with the actual reuse frequency of the evicted block, and the evicted block is discarded.
3. The method for improving TLB utilization efficiency according to claim 1, characterized in that: if the missing block is inserted into the filter buffer, the filter buffer's replacement algorithm produces an evicted block; if the predicted reuse frequency of this evicted block equals its actual reuse frequency, the block reuse information of the RF predictor is updated with this actual reuse frequency and the evicted block is discarded; otherwise the evicted block is inserted into the data TLB.
4. like the method for claim 2 or 3 said raising TLB utilization ratios, it is characterized in that: the domain structure of TLB includes an actual frequency reuse territory, and the domain structure of said filtering cache is consistent with the domain structure of said TLB.
5. like the method for the said raising of claim 4 TLB utilization ratio, it is characterized in that: said filtering cache is independent of TLB, on the logical organization between TLB and page table.
6. like the method for the said raising of claim 5 TLB utilization ratio, it is characterized in that: the capacity of said filtering cache is less than said TLB.
7. like the method for the said raising of claim 6 TLB utilization ratio, it is characterized in that: said RF fallout predictor use said disappearance piece virtual address low 4 with high 4 and exclusive disjunction result as index.
8. like the method for the said raising of claim 7 TLB utilization ratio, it is characterized in that: when said filtering cache reclaimed piece insertion TLB, the actual frequency reuse that said filtering cache reclaims piece was passed to TLB together.
9. like the method for the said raising of claim 8 TLB utilization ratio, it is characterized in that: the replacement algorithm of said filtering cache is a LRU replacement algorithm.
10. like the method for the said raising of claim 9 TLB utilization ratio, it is characterized in that: the actual frequency reuse territory size of the domain structure of said TLB is 2 bits, is used to indicate the actual access times of this TLB piece.
CN2011103012312A 2011-10-08 2011-10-08 Method for improving utilization efficiency of TLB (translation lookaside buffer) Pending CN102360339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103012312A CN102360339A (en) 2011-10-08 2011-10-08 Method for improving utilization efficiency of TLB (translation lookaside buffer)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103012312A CN102360339A (en) 2011-10-08 2011-10-08 Method for improving utilization efficiency of TLB (translation lookaside buffer)

Publications (1)

Publication Number Publication Date
CN102360339A true CN102360339A (en) 2012-02-22

Family

ID=45585668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103012312A Pending CN102360339A (en) 2011-10-08 2011-10-08 Method for improving utilization efficiency of TLB (translation lookaside buffer)

Country Status (1)

Country Link
CN (1) CN102360339A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007048134A1 (en) * 2005-10-20 2007-04-26 Qualcomm Incorporated Updating multiple levels of translation lookaside buffers (tlbs) field
CN101896892A (en) * 2007-11-07 2010-11-24 高通股份有限公司 Configurable translation lookaside buffer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIANG LINGXIANG et al.: "Less reused filter: improving L2 cache performance via filtering less reused lines", Proceedings of the 23rd International Conference on Supercomputing, ACM *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077176A (en) * 2014-06-25 2014-10-01 龙芯中科技术有限公司 Method and device for increasing virtual processor identifiers
CN104077176B (en) * 2014-06-25 2017-05-03 龙芯中科技术有限公司 Method and device for increasing virtual processor identifiers
CN107783912A (en) * 2016-08-26 2018-03-09 北京中科寒武纪科技有限公司 It is a kind of to support the TLB devices of multiple data stream and the update method of TLB module
CN110188026A (en) * 2019-05-31 2019-08-30 龙芯中科技术有限公司 The determination method and device of fast table default parameters
CN110188026B (en) * 2019-05-31 2023-05-12 龙芯中科技术股份有限公司 Method and device for determining missing parameters of fast table
CN117331854A (en) * 2023-10-11 2024-01-02 上海合芯数字科技有限公司 Cache processing method, device, electronic equipment and medium
CN117331854B (en) * 2023-10-11 2024-04-30 上海合芯数字科技有限公司 Cache processing method, device, electronic equipment and medium
CN117389630A (en) * 2023-12-11 2024-01-12 北京开源芯片研究院 Data caching method and device, electronic equipment and readable storage medium
CN117389630B (en) * 2023-12-11 2024-03-05 北京开源芯片研究院 Data caching method and device, electronic equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN108804350B (en) Memory access method and computer system
EP3204859B1 (en) Methods and systems for cache lines de-duplication
US10802987B2 (en) Computer processor employing cache memory storing backless cache lines
US8935484B2 (en) Write-absorbing buffer for non-volatile memory
US7793049B2 (en) Mechanism for data cache replacement based on region policies
CN106560798B (en) Memory access method and device and computer system
CN109582593B (en) FTL address mapping reading and writing method based on calculation
CN111858404B (en) Method and system for address translation, and computer readable medium
CN105095116A (en) Cache replacing method, cache controller and processor
CN102754086A (en) Virtual-memory system with variable-sized pages
CN102662860A (en) Translation lookaside buffer (TLB) for process switching and address matching method therein
CN102768645A (en) Solid state disk (SSD) prefetching method for mixed caching and SSD
CN104252425A (en) Management method for instruction cache and processor
JP2009512943A (en) Multi-level translation index buffer (TLBs) field updates
CN110196757A (en) TLB filling method, device and the storage medium of virtual machine
CN103744611A (en) Computer system based on solid state disc as cache and cache accelerating method
CN102360339A (en) Method for improving utilization efficiency of TLB (translation lookaside buffer)
CN109983538B (en) Memory address translation
CN111124954B (en) Management device and method for two-stage conversion bypass buffering
US11836092B2 (en) Non-volatile storage controller with partial logical-to-physical (L2P) address translation table
US7979640B2 (en) Cache line duplication in response to a way prediction conflict
CN112148639A (en) High-efficiency small-capacity cache memory replacement method and system
CN109478163B (en) System and method for identifying a pending memory access request at a cache entry
CN104156178A (en) Data access method for embedded terminal
CN104156324A (en) Program run method for embedded system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120222