CN1499382A - Method for implementing cache in high efficiency in redundancy array of inexpensive discs - Google Patents

Method for implementing cache in high efficiency in redundancy array of inexpensive discs

Info

Publication number
CN1499382A
CN1499382A CNA021466920A CN02146692A
Authority
CN
China
Prior art keywords
cache
small block
level
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA021466920A
Other languages
Chinese (zh)
Inventor
潘征宇
陈绍元
罗传藻
袁友良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNA021466920A priority Critical patent/CN1499382A/en
Publication of CN1499382A publication Critical patent/CN1499382A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The method comprises the following logical steps. A two-level cache is set up: the first-level cache adopts set-associative mapping to the RAID, and the second-level cache adopts fully associative mapping. A lookup is first performed in the first-level cache; on a hit, the requested data is returned. Otherwise the second-level cache is searched; on a hit, the requested data is returned. On a miss, it is determined whether the second-level cache is full. If not, the requested data and its adjacent data are placed into a free large block of the second-level cache, the small block containing the requested data is marked as accessed, and the requested data is returned. If full, a large block is replaced and the requested data is returned; however, if the victim large block contains small blocks marked as accessed, those small blocks are first moved into the first-level cache before the replacement.

Description

Method for implementing an efficient cache in a redundant array of inexpensive disks (RAID) system
Technical field
The present invention relates to the field of redundant arrays of inexpensive disks, and in particular to a method for implementing an efficient cache in a RAID system.
Background technology
A Redundant Array of Inexpensive Disks (RAID) system, or disk array for short, is widely used today in equipment such as web servers thanks to its low cost, low power consumption, and high transfer rate. The quality of the cache design in a RAID system is a key factor in its performance: fast reads and writes through the cache can effectively improve the input/output (I/O) performance of the system.
The capacity of the cache is very small compared with that of the next-level storage, the disk array, generally no more than 1%; the cache therefore holds only a subset of the next-level storage's contents, and content is exchanged between the cache and the next-level storage in units of blocks.
Typical mapping schemes between a cache and its lower-level storage include fully associative mapping, direct mapping, and set-associative mapping.
When a block from lower-level storage is to be placed in the cache but all the positions in the cache that may hold it are already occupied, a replacement is required. The main replacement algorithms in common use are Least Frequently Used (LFU), Least Recently Used (LRU), and random replacement.
Caches exist because of the principle of locality in data access. Locality of reference to the storage space has two main aspects:
Temporal locality: if a storage item is accessed, it is likely to be accessed again soon.
Spatial locality: if a storage item is accessed, the item and its neighbors are also likely to be accessed soon.
Therefore, by making good use of the principle of locality and adopting a suitable cache organization, mapping scheme, and replacement policy, the efficiency of a RAID system can be improved effectively.
The cache in a traditional RAID system is basically a single-level cache using the LRU replacement algorithm. When an input/output (I/O) request arrives, the cache is searched first; on a miss, the data is fetched from the disk array directly into the cache. The main benefit of this approach is ease of implementation, but with only a single level the cache cannot serve temporal and spatial locality at the same time: the two are in tension and hard to reconcile within a single-level cache. In addition, the prior art lacks a prefetch policy, so the cache miss rate is very high.
In summary, although the prior-art algorithm is simple and easy to implement, it can only be designed around temporal or spatial locality for specific applications whose access characteristics are known. It cannot accommodate both kinds of locality at once, cannot overcome their inherent conflict within a single-level cache, and lacks a prefetch policy; the cache hit rate is therefore low in the general case, which degrades overall system performance.
Summary of the invention
In view of this, the object of the present invention is to provide a method for implementing an efficient cache in a RAID system. The method overcomes the inherent conflict between temporal and spatial locality in a single-level cache, incorporates a prefetch policy, and raises the cache hit rate in the general case, thereby improving the overall performance of the system.
A method for implementing an efficient cache in a RAID system, the system comprising at least a cache and a disk array, comprises the following steps:
The cache is set up as two levels. The first-level cache is divided into one or more sets, each containing one or more small blocks, and adopts set-associative mapping to the disk array. The second-level cache is divided into one or more large blocks, each containing one or more small blocks, and adopts fully associative mapping to the disk array with the large block as the basic unit.
A data lookup is performed first in the first-level cache; on a hit the requested data is returned. Otherwise a lookup is performed in the second-level cache; on a hit the hit small block is marked as accessed and the requested data is returned. On a miss, it is judged whether the second-level cache is full. If it is not full, the data of the logical block in the disk array storing the requested data, together with its adjacent logical blocks, is placed into a free large block of the second-level cache; the small block in the second-level cache containing the requested data is marked as accessed, and the requested data is returned. If it is full, the data of that logical block and its adjacent logical blocks replaces the data of one large block in the second-level cache: if the large block being replaced contains small blocks marked as accessed, the data in those small blocks is first moved into the first-level cache, the replacement is then performed, and the requested data is returned; otherwise the replacement is performed directly and the requested data is returned.
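For concreteness, the flow just described can be sketched in code. The following Python is a minimal illustration under stated assumptions, not the patented implementation: the backing-store interface (disk.read_range), the plain-dictionary first level (the set-associative, two-pass-hash organization described later is elided here), and all names are assumptions for illustration.

```python
from collections import OrderedDict

SMALL_PER_LARGE = 32  # small blocks per large block, per the embodiment

class TwoLevelCache:
    """Illustrative sketch of the claimed two-level lookup/replacement flow."""

    def __init__(self, l2_capacity_blocks, disk):
        self.l1 = {}                    # first-level cache, keyed by (lun, lba)
        self.l2 = OrderedDict()         # fully associative second level, LRU order
        self.l2_capacity = l2_capacity_blocks
        self.disk = disk                # backing disk array (assumed interface)

    def read(self, lun, lba):
        # 1. Look up in the first-level cache.
        if (lun, lba) in self.l1:
            return self.l1[(lun, lba)]
        # 2. Look up in the second-level cache (one large block per aligned range).
        base = lba - lba % SMALL_PER_LARGE
        if (lun, base) in self.l2:
            large = self.l2[(lun, base)]
            large["flags"][lba - base] = 1      # mark the hit small block accessed
            self.l2.move_to_end((lun, base))    # refresh its LRU position
            return large["data"][lba - base]
        # 3. Miss in both levels: if L2 is full, evict the LRU large block,
        #    first moving its accessed small blocks into the first level.
        if len(self.l2) >= self.l2_capacity:
            (vlun, vbase), victim = self.l2.popitem(last=False)
            for i, flag in enumerate(victim["flags"]):
                if flag:
                    self.l1[(vlun, vbase + i)] = victim["data"][i]
        # 4. Prefetch the requested block and its neighbors as one large block.
        data = self.disk.read_range(lun, base, SMALL_PER_LARGE)
        flags = [0] * SMALL_PER_LARGE
        flags[lba - base] = 1                   # requested small block: accessed
        self.l2[(lun, base)] = {"data": data, "flags": flags}
        return data[lba - base]
```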
In this method, each logical unit in the disk array forms a set-associative mapping with the entire first-level cache.
In this method, each logical unit in the disk array is assigned a portion of the second-level cache and forms a fully associative mapping within that portion; the size of the second-level-cache portion assigned to each logical unit can be allocated dynamically according to the actual conditions of the data-reading process.
In this method, a small block is one logical block.
In this method, the data lookup in the first-level cache specifically comprises: performing a hash lookup in the first-level cache using the combination of the logical unit number and the logical block address as the key and, on a miss, performing a second hash lookup, again using the combination of the logical unit number and the logical block address as the key.
In this method, the data lookup in the second-level cache uses a balanced-binary-tree search algorithm.
In this method, data replacement in the second-level cache uses the Least Recently Used algorithm.
In this method, moving the data of the small blocks marked as accessed in the second-level cache into the first-level cache specifically comprises:
A hash is computed using the combination of the logical unit number and the logical block address as the replacement key. If the corresponding small block is empty, the data of the second-level-cache small block is moved directly into it; otherwise a second hash is computed, and if the corresponding small block is then empty the data of the second-level-cache small block is moved directly into it; otherwise the data in that small block is replaced.
The method further comprises: when the system has just started, the data in the small blocks of the second-level cache marked as accessed is periodically copied into the first-level cache, and the copied small blocks in the second-level cache are marked as not accessed; once the first-level cache is full, this process stops.
The method further comprises: a flag bit is placed in front of each small block of the second-level cache in advance. Marking a small block as accessed sets the flag in front of it to 1; marking it as not accessed clears the flag in front of it to 0.
In this method, setting the cache as two levels may be realized by dividing one cache into two parts.
Alternatively, setting the cache as two levels may be realized with two separate caches.
As can be seen from the above scheme, the present invention uses a two-level cache structure to neatly overcome the inherent conflict between temporal and spatial locality in a single-level cache: it lengthens the lifetime of temporally local data, while the prefetch policy in the second-level cache also serves spatial locality. The scheme is therefore generally applicable; it reduces unnecessary swap-in/swap-out churn, shortens data-lookup time, raises the lookup hit rate, and improves system performance.
Description of drawings
Fig. 1 is a schematic diagram of the structure of the first-level cache;
Fig. 2 is a schematic diagram of the set-associative mapping between the first-level cache and lower-level storage;
Fig. 3 is a schematic diagram of the structure of the second-level cache;
Fig. 4 is a schematic diagram of the fully associative mapping between the second-level cache and lower-level storage;
Fig. 5 is a schematic diagram of the key data structures of the second-level cache and their relationships.
Embodiment
The present invention is described below in further detail with reference to the drawings and specific embodiments.
When the total capacity of the cache is fixed, optimizing for temporal locality and optimizing for spatial locality are in fact in tension, because increasing the block size necessarily decreases the number of blocks in the cache, and vice versa. Cache designs therefore usually have to consider a suitable block size, reaching a compromise between block size and block count.
Drawing together the relevant principles, techniques, and algorithms of cache implementation, the present invention divides the cache into two levels: the first level is tied to the characteristics of temporal locality, with a small block size and a very large block count, while the second level is designed around spatial locality, with a large block size and a very small block count. The requirements on block size and block count are thus met simultaneously in each situation, nicely resolving the conflict between block size and block count and markedly improving system performance.
The two-level cache is constructed as follows. The first level is the time-related cache: it adopts set-associative mapping to lower-level storage and is configured with a small block size and a large block count; the basic logical block of 512 bytes can serve as one storage small block (SB), and these small blocks are divided into a number of sets. The second level is the space-related cache: it adopts fully associative mapping to lower-level storage and, because this level works together with prefetching, it is configured with a larger block size and a smaller block count; it is divided into a number of large blocks (LB), each comprising a certain number of logical blocks, i.e. small blocks.
This configuration reflects the following considerations. On a cache miss, the requested data and its adjacent data are assembled into one large block and moved from the storage device into the second-level, spatial cache. When a large block containing several small blocks of data is replaced in the spatial cache, only the small blocks that have actually been accessed are moved into the first-level, temporal cache, which helps reduce conflicts. In this scheme the temporal cache does not need to prefetch large blocks of data, so its block size is kept as small as possible; its number of entries, i.e. its block count, is also made as large as possible, which reduces the efficiency loss caused by frequent swap-in/swap-out. For the spatial cache, large blocks comprising several adjacent small blocks help the prefetch policy raise the cache hit rate; but the large blocks must not be too big, or data-transfer time grows, so a balance has to be sought between hit rate and transfer time.
The concrete structures of the first-level and second-level caches are described as follows:
The structure of the first-level cache is shown in Figure 1. In this embodiment its total capacity is taken as 512 MB; each small block is 512 bytes, and the cache is divided into 1024 sets of 1024 small blocks each.
The first-level cache is shared by all logical units of lower-level storage, the disk array: any logical unit of the disk array may occupy the entire first-level cache, with which it forms a set-associative mapping. This reflects the cache's comparatively small capacity and the temporal-locality principle. Taking the logical units of lower-level storage at a maximum capacity of 512 GB and treating every 1024 small blocks as one zone, each zone holds 512 KB and there can be up to 1M zones. The mapping between the first-level cache and lower-level storage is shown in Figure 2.
Suppose the first-level cache has a sets of b blocks each, and that lower-level storage is partitioned into zones of a blocks. Let k be the number of the set in which a lower-level storage block is placed in the cache, and j the block number of that lower-level block. The following relations hold:

n_c = a × b, where n_c is the total number of blocks in the cache
k = j mod a

Here a = 1024, b = 1024, and j is the logical block address (LBA, Logic Block Address) of the small block. Under this scheme, the set in which each lower-level storage block is stored is fixed; which block within that set it occupies is variable.
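Under the embodiment's parameters (a = b = 1024), the set index is simply the LBA modulo 1024, i.e. its low ten bits. A quick illustrative check in Python (the helper name is hypothetical):

```python
A_SETS = 1024             # number of sets a in the first-level cache
B_BLOCKS_PER_SET = 1024   # blocks b per set

def l1_set_index(lba: int) -> int:
    """Set k in which a lower-level block with block number j = lba must reside."""
    return lba % A_SETS   # k = j mod a; equals lba & 0x3FF when a = 1024

# Blocks 5, 1029, and 2053 all map to set 5 and compete for the 1024 slots there.
assert l1_set_index(5) == l1_set_index(1029) == l1_set_index(2053) == 5
```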
Considering the mapping scheme and temporal locality, and weighing algorithm simplicity against efficiency, the first-level cache adopts a two-pass hash method for replacement and for lookup within a set.
For replacement, a hash is first computed using the combination LUN+LBA of the logical unit number (LUN) and the logical block address as the replacement key; if the corresponding small block is empty, the data of the second-level-cache small block is moved directly into it. If that block conflicts, a second hash is computed with LUN+LBA as the key; if that also conflicts, the data in the resulting small block is replaced.
Lookup likewise uses the two-pass hash algorithm: in the first-level cache a hash lookup is performed with LUN+LBA as the key and, on a hit, the requested data is returned; on a miss a second hash lookup is performed with LUN+LBA as the key and, on a hit, the requested data is returned; if still missing, replacement proceeds as described above.
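A minimal sketch of this two-pass hash in Python follows. The two hash functions and the flat table layout are assumptions for illustration; the patent specifies only that the LUN+LBA key is hashed twice:

```python
def probe_slots(lun: int, lba: int, num_slots: int):
    """Yield the two candidate slots for a LUN+LBA key, in probe order."""
    key = (lun << 48) | lba                  # combine LUN and LBA into one key
    yield hash(("h1", key)) % num_slots      # first hash probe
    yield hash(("h2", key)) % num_slots      # second hash probe

def l1_lookup(table, lun, lba):
    """Two-pass hash lookup; returns the data, or None on a miss."""
    for slot in probe_slots(lun, lba, len(table)):
        entry = table[slot]
        if entry is not None and entry[0] == (lun, lba):
            return entry[1]
    return None

def l1_insert(table, lun, lba, data):
    """Two-pass hash replacement: take the first empty probe, else evict."""
    slots = list(probe_slots(lun, lba, len(table)))
    for slot in slots:
        if table[slot] is None:
            table[slot] = ((lun, lba), data)
            return
    table[slots[-1]] = ((lun, lba), data)    # both probes conflict: replace
```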
The benefit of this method is that, while the hit rate is preserved, complexity stays low, so performance is barely affected, and the overhead that a complicated algorithm and its control data would bring is avoided.
The second-level cache has a larger capacity, and its size is configurable: the total capacity is carved up among the logical units, each of which can be configured with its own second-level-cache share, with which it forms the mapping. The configuration can be dynamic, allocating cache to each logical unit according to the actual data-reading process; this prevents the whole second-level cache from being occupied by one logical unit for long periods to the detriment of the others. As shown in Figure 3, with a capacity of 2 GB, small blocks of 512 bytes, and 32 small blocks per large block, the cache can be divided into at most 128K large blocks. In addition, a 4-byte flag field in front of each small block identifies the state of that block; because the flag field is very small, it can be ignored when counting bytes.
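As a sanity check of this geometry, assuming the 512-byte logical block used as the small block elsewhere in the document:

```python
L2_CAPACITY = 2 * 1024**3        # 2 GB second-level cache
SMALL_BLOCK = 512                # one logical block, in bytes
SMALL_PER_LARGE = 32             # small blocks per large block

large_block_bytes = SMALL_PER_LARGE * SMALL_BLOCK     # 16 KB per large block
num_large_blocks = L2_CAPACITY // large_block_bytes   # 131072 large blocks
assert num_large_blocks == 128 * 1024                 # i.e. the 128K stated above
```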
As shown in Figure 4, the second-level cache adopts fully associative mapping to lower-level storage. Its basic operations and algorithms are summarized as follows:
Search
As shown in Figure 5, the lookup algorithm of the second-level cache applies a balanced-binary-tree search to each logical unit of the disk array: each logical unit has a balanced binary tree used for lookup, and within each logical unit the tree is keyed by LBA.
A lookup first follows the current pointer and compares against the currently prefetched data; on a miss, it then follows the root pointer and searches the whole balanced binary tree corresponding to that LUN. On a hit, the flag of the corresponding small block is set to 1, marking it accessed, and the requested data is returned. On a miss, a large block of data containing the requested small block is read from lower-level storage into the second-level cache; there are two cases, according to whether the cache space is full or not. When the space is not full, handling is simple: a large block for storing the data is taken straight from the second-level cache's free list. When the space is full, a replacement is needed; replacement uses the LRU algorithm, and the victim block is chosen according to the LRU list. Finally, the LRU list, free list, and flag information are updated according to the lookup result.
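A compact sketch of this per-LUN lookup with a current pointer. The patent specifies a balanced binary tree; for brevity a sorted list searched with bisect stands in here (the same O(log n) search, though inserts cost O(n)), and all names are assumptions:

```python
import bisect

class LunIndex:
    """Illustrative per-LUN index of resident large blocks, keyed by base LBA."""

    def __init__(self, small_per_large=32):
        self.small_per_large = small_per_large
        self.bases = []      # sorted base LBAs of resident large blocks
        self.blocks = {}     # base LBA -> large-block object
        self.current = None  # "current pointer": base of the last prefetched block

    def lookup(self, lba):
        base = lba - lba % self.small_per_large
        # 1. Compare against the currently prefetched data first.
        if self.current == base and base in self.blocks:
            return self.blocks[base]
        # 2. Fall back to searching the whole index from the root.
        i = bisect.bisect_left(self.bases, base)
        if i < len(self.bases) and self.bases[i] == base:
            self.current = base
            return self.blocks[base]
        return None          # miss: the caller fetches from lower-level storage

    def insert(self, base, block):
        bisect.insort(self.bases, base)
        self.blocks[base] = block
        self.current = base
```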
The benefit of the balanced-binary-search-tree algorithm is its higher search efficiency compared with other search algorithms. Even in the extreme case where one LUN is allocated the full configuration, i.e. all 2 GB of cache, the average number of probes is only about 8.5, so the comparison count is not large and searching is fast; and in practice the extreme case rarely occurs. At the same time, the tree makes the prefetch policy easy to implement, places few restrictions on the data mapping, and keeps the organization very flexible.
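The 8.5-probe figure is consistent with the geometry above, assuming successful searches are spread across the tree so that the average depth is roughly half the maximum:

```python
import math

num_large_blocks = 128 * 1024              # extreme case: one LUN owns all 2 GB
tree_depth = math.log2(num_large_blocks)   # 17 levels in a balanced binary tree
avg_probes = tree_depth / 2                # about 8.5 comparisons on average
print(tree_depth, avg_probes)              # 17.0 8.5
```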
Replace
The replacement algorithm of the second-level cache is LRU: the block to be replaced is chosen according to the LRU list, the small blocks in it that were recently accessed are moved into the first-level cache, the original old data is then replaced with the new data, and the balanced binary tree is readjusted according to the LBA values so that it still satisfies the definition of a balanced binary tree. Finally, the LRU list and flag information are updated.
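A sketch of this replacement step, reusing the illustrative flag convention above (1 = accessed); the function signature and the list-based LRU are assumptions:

```python
def replace_large_block(lru_list, blocks, l1_insert_fn, new_base, new_data):
    """Evict the LRU large block, first moving accessed small blocks to L1.

    lru_list: base LBAs, least recently used first (stand-in for the LRU list).
    blocks: base LBA -> {"data": [...], "flags": [...]} resident large blocks.
    l1_insert_fn: callable storing one small block into the first-level cache.
    """
    victim_base = lru_list.pop(0)          # victim chosen per the LRU list
    victim = blocks.pop(victim_base)
    for i, flag in enumerate(victim["flags"]):
        if flag:                           # flag == 1: small block was accessed
            l1_insert_fn(victim_base + i, victim["data"][i])
    blocks[new_base] = {"data": new_data, "flags": [0] * len(new_data)}
    lru_list.append(new_base)              # the new block is most recently used
```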
Mapping
The second-level cache adopts fully associative mapping: data read in can be placed at any position in the cache as needed, with no mapping-rule restriction.
Move
Moves out of the second-level cache are one-way: data can only be moved from the second-level cache into the first-level cache. According to the condition that triggers them, moves fall into two kinds:
Moves on second-level replacement: when a replacement occurs, the previously accessed small blocks in the old large block must be moved into the first-level cache. The small blocks whose flags are set to 1 are copied into the first-level cache; exactly where they are placed in the first-level cache follows the first-level replacement procedure described earlier. After the move, the flags corresponding to the moved small blocks are cleared to 0.
Timer-triggered moves: timed moves normally occur when the system has just started, when the first-level cache is often empty for quite a while. To make full use of system resources, a timed move is designed; once the first-level cache is full, the timed moves stop automatically. Each LUN of the RAID system has a timer that periodically moves recently accessed small-block data in the second-level cache into the first-level cache. The rule is the same as above: the small blocks whose flags are set to 1 are copied into the first-level cache, and after the move the flags corresponding to the moved small blocks are cleared to 0.
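A sketch of the timer-triggered move, reusing the illustrative structures above; the timer period is an assumption, since the patent does not specify one:

```python
import threading

def start_move_timer(lun_index, l1, l1_capacity, interval_s=1.0):
    """Periodically copy accessed L2 small blocks into L1 until L1 is full."""
    def tick():
        if len(l1) >= l1_capacity:
            return                                    # L1 full: timer stops
        for base, block in lun_index.blocks.items():
            for i, flag in enumerate(block["flags"]):
                if flag:
                    l1[base + i] = block["data"][i]   # copy into the first level
                    block["flags"][i] = 0             # clear the flag to 0
        threading.Timer(interval_s, tick).start()     # re-arm the timer
    threading.Timer(interval_s, tick).start()
```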
The algorithm of this embodiment is described in general terms as follows:
Hit in the first-level, set-associative cache:
If the small-block data is found in the set-associative cache, handling is simple: the requested data merely has to be returned.
Hit in the fully associative, spatial cache:
If the data is found in the fully associative cache, the small block that was actually hit within the large block must be marked, i.e. its flag is set to 1, so that when the large block is replaced in the future the accessed small blocks are moved into the first-level cache. Finally, the requested data is returned.
Miss in both cache levels:
If both cache levels miss, a large block containing the requested data is fetched from the storage device and placed into the second-level cache. The concrete handling falls into two cases:
The spatial cache is not full:
The spatial cache still has at least one free entry. The logical block in the disk array holding the requested data, i.e. that small block together with its adjacent small blocks, is fetched from the storage device as one large block of data into a free large block of the cache. Fetching the adjacent small blocks as well is what embodies the prefetching. At the same time the requested small block is marked as accessed, so that when the large block is replaced in the future that small block is moved into the first-level cache; finally, the requested data is returned.
The spatial cache is full:
The second-level spatial cache uses the LRU replacement policy. If any small-block data in the large block being replaced is marked as accessed, that small-block data is first moved into the first-level cache, the replacement is then performed, and the requested data is returned; otherwise the replacement is performed directly and the requested data is returned. This helps lengthen the lifetime of small-block data in the first-level temporal cache and avoids frequent swap-in/swap-out.
As applications deepen, a two-level cache that combines temporal and spatial locality shows its performance advantage more and more. From analysis and statistics, roughly the following conclusion can be drawn: a two-level cache needs only a quarter of the capacity of a single-level cache to achieve the same performance, saving cost while improving performance.
Compared with the prior art, the two-level cache of the present invention performs better than a single-level cache. Cache design is ordinarily constrained by factors such as cache capacity and algorithm complexity; for the cache in a RAID system, however, the situation changes: capacity and complexity are no longer the most critical factors, and their place is taken by the conflict between temporal and spatial locality. To give full play to the cache and better improve system performance, a two-level cache structure overcomes the inherent conflict between temporal and spatial locality in a single-level cache, thereby greatly lengthening the lifetime of temporally local data, reducing unnecessary swap-in/swap-out, shortening lookup time, and increasing system speed.
A performance evaluation of this cache scheme follows:
The hit ratio is an important indicator of system performance and directly reflects the efficiency of the cache. The statistics below were gathered from ordinary file operations in a file system, that is, routine operations on files of various sizes distributed across different disks; the cache hit ratios are tabulated as follows:
Table 1. Performance evaluation of the two-level cache
Times of simulation   Hit ratio of one-level cache   Hit ratio of proposed two-level cache
1                     0.21                           0.78
2                     0.24                           0.77
3                     0.22                           0.78
4                     0.22                           0.77
5                     0.23                           0.78
Average hit ratio     0.22                           0.78
The statistics show that this two-level cache method makes full use of temporal and spatial locality: compared with the single-level cache, the hit ratio rises from 0.22 to 0.78, an excellent result. Although the scheme is more complex in its management algorithms, the algorithmic complexity does not affect the response speed of the RAID system as a whole: the RAID system has its own processor to run the relevant algorithms, so host performance is unaffected, while the higher hit ratio and hit count reduce traffic on the data link and save the seek and rotation time of unnecessary disk accesses. The algorithmic complexity therefore has almost no impact on overall system performance.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of protection of the present invention.

Claims (13)

1. A method for implementing an efficient cache in a redundant array of inexpensive disks system, the system comprising at least a cache and a disk array, characterized in that the method comprises the following steps:
setting the cache as two levels, wherein the first-level cache is divided into one or more sets, each set containing one or more small blocks, and adopts set-associative mapping to the disk array, and the second-level cache is divided into one or more large blocks, each large block containing one or more small blocks, and adopts fully associative mapping to the disk array with the large block as the basic unit;
performing a data lookup first in the first-level cache and, on a hit, returning the requested data; otherwise performing a data lookup in the second-level cache and, on a hit, marking the hit small block as accessed and returning the requested data; otherwise judging whether the second-level cache is full; if it is not full, placing the data of the logical block in the disk array storing the requested data, together with its adjacent logical blocks, into a free large block of the second-level cache, marking the small block in the second-level cache containing the requested data as accessed, and returning the requested data; if it is full, replacing the data of one large block in the second-level cache with the data of the logical block storing the requested data and its adjacent logical blocks, wherein, if the large block being replaced contains small blocks marked as accessed, the data in those small blocks is first moved into the first-level cache, the replacement is then performed, and the requested data is returned, and otherwise the replacement is performed directly and the requested data is returned.
2. The method according to claim 1, characterized in that each logical unit in the disk array forms a set-associative mapping with the entire first-level cache.
3. The method according to claim 1, characterized in that each logical unit in the disk array is assigned a portion of the second-level cache and forms a fully associative mapping within that portion.
4. The method according to claim 3, characterized in that the size of the second-level-cache portion assigned to each logical unit in the disk array can be allocated dynamically according to the actual conditions of the data-reading process.
5. The method according to claim 1, characterized in that said small block is one logical block.
6. The method according to claim 1, characterized in that said data lookup in the first-level cache specifically comprises: performing a hash lookup in the first-level cache using the combination of the logical unit number and the logical block address as the key and, on a miss, performing a second hash lookup, again using the combination of the logical unit number and the logical block address as the key.
7. The method according to claim 1, characterized in that said data lookup in the second-level cache uses a balanced-binary-tree search algorithm.
8. The method according to claim 1, characterized in that said data replacement in the second-level cache uses the Least Recently Used algorithm.
9. The method according to claim 1, characterized in that said moving of the data in the small blocks of the second-level cache marked as accessed into the first-level cache specifically comprises:
computing a hash using the combination of the logical unit number and the logical block address as the replacement key and, if the corresponding small block is empty, moving the data of the second-level-cache small block directly into that small block; otherwise computing a second hash and, if the corresponding small block is empty, moving the data of the second-level-cache small block directly into that small block, and otherwise replacing the data in that small block.
10. The method according to claim 1, characterized in that the method further comprises: when the system has just started, periodically copying the data in the small blocks of the second-level cache marked as accessed into the first-level cache and marking the copied small blocks in the second-level cache as not accessed, this process stopping once the first-level cache is full.
11. The method according to claim 1 or 10, characterized in that it further comprises: placing a flag bit in front of each small block of the second-level cache in advance, wherein said marking as accessed sets the flag in front of the small block to 1 and said marking as not accessed clears the flag in front of the small block to 0.
12. The method according to claim 1, characterized in that said setting of the cache as two levels is realized by dividing one cache into two parts.
13. The method according to claim 1, characterized in that said setting of the cache as two levels is realized by using two caches.
CNA021466920A 2002-11-05 2002-11-05 Method for implementing cache in high efficiency in redundancy array of inexpensive discs Pending CN1499382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA021466920A CN1499382A (en) 2002-11-05 2002-11-05 Method for implementing cache in high efficiency in redundancy array of inexpensive discs


Publications (1)

Publication Number Publication Date
CN1499382A true CN1499382A (en) 2004-05-26

Family

ID=34232840

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA021466920A Pending CN1499382A (en) 2002-11-05 2002-11-05 Method for implementing cache in high efficiency in redundancy array of inexpensive discs

Country Status (1)

Country Link
CN (1) CN1499382A (en)


Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7627719B2 (en) 2004-06-29 2009-12-01 Deyuan Wang Cache device and method for determining LRU identifier by pointer values
WO2006000138A1 (en) * 2004-06-29 2006-01-05 Deyuan Wang A buffer apparatus and method
CN100428195C (en) * 2005-03-31 2008-10-22 国际商业机器公司 Data processing system and method
CN100430907C (en) * 2005-04-29 2008-11-05 国际商业机器公司 Methods and arrangements for reducing latency and snooping cost in non-uniform cache memory architectures
CN100375066C (en) * 2005-10-28 2008-03-12 中国人民解放军国防科学技术大学 Method realizing priority reading memory based on cache memory line shifting
CN100399299C (en) * 2005-10-28 2008-07-02 中国科学院计算技术研究所 Memory data processing method of cache failure processor
CN101046760B (en) * 2006-03-29 2011-01-26 日本电气株式会社 Storage device, data arrangement method and program
CN101464840B (en) * 2007-12-19 2012-11-21 国际商业机器公司 Processor and method for managing cache in a data processing system
CN102169464B (en) * 2010-11-30 2013-01-30 北京握奇数据系统有限公司 Caching method and device used for non-volatile memory, and intelligent card
CN102169464A (en) * 2010-11-30 2011-08-31 北京握奇数据系统有限公司 Caching method and device used for non-volatile memory, and intelligent card
WO2012109879A1 (en) * 2011-08-04 2012-08-23 华为技术有限公司 Method, device and system for caching data in multi-node system
US9223712B2 (en) 2011-08-04 2015-12-29 Huawei Technologies Co., Ltd. Data cache method, device, and system in a multi-node system
CN103038755A (en) * 2011-08-04 2013-04-10 华为技术有限公司 Method, Device And System For Caching Data In Multi-Node System
CN103038755B (en) * 2011-08-04 2015-11-25 华为技术有限公司 Method, the Apparatus and system of data buffer storage in multi-node system
WO2012109882A1 (en) * 2011-08-05 2012-08-23 华为技术有限公司 Data reading method and ddr controller
CN102763070A (en) * 2011-11-01 2012-10-31 华为技术有限公司 Method and device for managing disk cache
WO2012149815A1 (en) * 2011-11-01 2012-11-08 华为技术有限公司 Method and device for managing disk cache
CN102763070B (en) * 2011-11-01 2015-08-19 华为技术有限公司 The management method of disk buffering and device
CN102521161B (en) * 2011-11-21 2015-01-21 华为技术有限公司 Data caching method, device and server
CN102521161A (en) * 2011-11-21 2012-06-27 华为技术有限公司 Data caching method, device and server
CN104094254A (en) * 2011-12-02 2014-10-08 康佩伦特科技公司 System and method for unbalanced raid management
CN104094254B (en) * 2011-12-02 2018-01-09 康佩伦特科技公司 System and method for non-equilibrium RAID management
CN103309820A (en) * 2013-06-28 2013-09-18 曙光信息产业(北京)有限公司 Implementation method for disk array cache
CN103383666A (en) * 2013-07-16 2013-11-06 中国科学院计算技术研究所 Method and system for improving cache prefetch data locality and cache assess method
CN103383666B (en) * 2013-07-16 2016-12-28 中国科学院计算技术研究所 Improve method and system and the cache access method of cache prefetching data locality
CN104484288A (en) * 2014-12-30 2015-04-01 浪潮电子信息产业股份有限公司 Method and device for replacing contents items
CN104484288B (en) * 2014-12-30 2018-01-02 浪潮电子信息产业股份有限公司 A kind of method and device being replaced to catalogue entry
WO2016155522A1 (en) * 2015-03-30 2016-10-06 Huawei Technologies Co., Ltd. Distributed content discovery with in-network caching
US10298713B2 (en) 2015-03-30 2019-05-21 Huawei Technologies Co., Ltd. Distributed content discovery for in-network caching
CN107229575A (en) * 2016-03-23 2017-10-03 上海复旦微电子集团股份有限公司 The appraisal procedure and device of caching performance
CN105975406A (en) * 2016-04-29 2016-09-28 浪潮(北京)电子信息产业有限公司 Data access method and device
CN105975406B (en) * 2016-04-29 2019-05-10 浪潮(北京)电子信息产业有限公司 A kind of data access method and device
CN107463509A (en) * 2016-06-05 2017-12-12 华为技术有限公司 Buffer memory management method, cache controller and computer system
CN107463509B (en) * 2016-06-05 2020-12-15 华为技术有限公司 Cache management method, cache controller and computer system
CN106126440A (en) * 2016-06-22 2016-11-16 中国科学院计算技术研究所 A kind of caching method improving data spatial locality in the buffer and device
CN106126440B (en) * 2016-06-22 2019-01-25 中国科学院计算技术研究所 A kind of caching method and device improving data spatial locality in the buffer
CN108090529B (en) * 2016-11-22 2021-08-06 上海宝信软件股份有限公司 Method for storing field terminal operation process data based on radio frequency identification technology
CN108090529A (en) * 2016-11-22 2018-05-29 上海宝信软件股份有限公司 The storage method of on-site terminal operation process data based on Radio Frequency Identification Technology
CN108228649A (en) * 2016-12-21 2018-06-29 伊姆西Ip控股有限责任公司 For the method and apparatus of data access
CN108664211A (en) * 2017-03-31 2018-10-16 深圳市中兴微电子技术有限公司 A kind of method and device for realizing reading and writing data
CN111602377A (en) * 2017-12-27 2020-08-28 华为技术有限公司 Resource adjusting method in cache, data access method and device
CN109739780A (en) * 2018-11-20 2019-05-10 北京航空航天大学 Dynamic secondary based on the mapping of page grade caches flash translation layer (FTL) address mapping method
CN109918131A (en) * 2019-03-11 2019-06-21 中电海康无锡科技有限公司 A kind of instruction read method based on non-obstruction command cache
CN112069091A (en) * 2020-08-17 2020-12-11 北京科技大学 Access optimization method and device applied to molecular dynamics simulation software
CN112069091B (en) * 2020-08-17 2023-09-01 北京科技大学 Memory access optimization method and device applied to molecular dynamics simulation software
WO2022193126A1 (en) * 2021-03-16 2022-09-22 Micron Technology, Inc. Performance benchmark for host performance booster
CN114281762A (en) * 2022-03-02 2022-04-05 苏州浪潮智能科技有限公司 Log storage acceleration method, device, equipment and medium
CN114281762B (en) * 2022-03-02 2022-06-03 苏州浪潮智能科技有限公司 Log storage acceleration method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN1499382A (en) Method for implementing cache in high efficiency in redundancy array of inexpensive discs
TWI238935B (en) Reconfigurable cache controller for nonuniform memory access computer systems
CN1317644C (en) Method and apparatus for multithreaded cache with simplified implementation of cache replacement policy
CN1292371C (en) Inverted index storage method, inverted index mechanism and on-line updating method
CN1659526A (en) Method and apparatus for multithreaded cache with cache eviction based on thread identifier
CN1652092A (en) Multi-level cache having overlapping congruence groups of associativity sets in different cache levels
CN102789427A (en) Data storage device and operation method thereof
CN101510176B (en) Control method of general-purpose operating system for accessing CPU two stage caching
TW201732603A (en) Profiling cache replacement
CN102981963A (en) Implementation method for flash translation layer of solid-state disc
CN102314397B (en) Method for processing cache data block
Fevgas et al. Indexing in flash storage devices: a survey on challenges, current approaches, and future trends
CN101576856A (en) Buffer data replacement method based on access frequency within long and short cycle
CN106681668A (en) Hybrid storage system and storage method based on solid state disk caching
CN107423229A (en) A kind of buffering area improved method towards page level FTL
CN1652091A (en) Data preacquring method for use in data storage system
CN100339837C (en) System for balancing multiple memory buffer sizes and method therefor
CN106055679A (en) Multi-level cache sensitive indexing method
CN1902602A (en) Mechanism to store reordered data with compression
Xiao et al. P3Stor: A parallel, durable flash-based SSD for enterprise-scale storage systems
Park et al. A workload-aware adaptive hybrid flash translation layer with an efficient caching strategy
Wang et al. ADAPT: Efficient workload-sensitive flash management based on adaptation, prediction and aggregation
CN1607510A (en) Method and system for improving performance of a cache
CN1617095A (en) Cache system and method for managing cache system
CN105988720A (en) Data storage device and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication