CN103246616B - Globally shared cache replacement method based on long- and short-period access frequency - Google Patents

Globally shared cache replacement method based on long- and short-period access frequency

Info

Publication number
CN103246616B
CN103246616B
Authority
CN
China
Prior art keywords
data
access
buffer
long
term
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310195427.7A
Other languages
Chinese (zh)
Other versions
CN103246616A (en)
Inventor
王恩东
吕烁
文中领
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Electronic Information Industry Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd filed Critical Inspur Electronic Information Industry Co Ltd
Priority to CN201310195427.7A priority Critical patent/CN103246616B/en
Publication of CN103246616A publication Critical patent/CN103246616A/en
Application granted granted Critical
Publication of CN103246616B publication Critical patent/CN103246616B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The present invention provides a globally shared cache replacement method based on long- and short-period access frequency. A host accesses data in two main patterns: long-period access and short-term frequent access. Long-period access means that a data block is accessed repeatedly and periodically at long intervals. Traditional aging-based replacement policies cannot recognize this type of access, which makes the cache almost completely ineffective for such workloads. To solve this technical problem, the invention provides a globally shared cache replacement method based on long- and short-period access frequency that comprises five modules: mapping management, cache replacement, cache allocation, mirroring, and cache coherence.

Description

Globally shared cache replacement method based on long- and short-period access frequency
Technical field
The present invention relates to the field of computer applications, and in particular to a globally shared cache replacement method based on long- and short-period access frequency.
Background art
Digital information on networks is growing explosively; by 2020 the global volume of data will have grown 50-fold. Data also exhibits high velocity and high variety, the variety coming mainly from new semi-structured and unstructured data types such as web logs, social media, Internet search, mobile call records and sensor networks. At the same time the number of hosts is growing by orders of magnitude, and different hosts have very different I/O requirements in terms of response time, transfer bandwidth and transfer granularity. To meet these demands, multi-controller disk arrays have emerged. A multi-controller disk array uses a global cache as its core, with controllers and cache units interconnected through crossbar switches. A cache unit is an independent physical component; redundant crossbar links provide non-blocking, high-bandwidth data transfer with every controller. All cache units together form the globally shared cache, which is visible to all controllers, and every controller has equal control over it. The globally shared cache is the core component of a multi-controller disk array, and how it is managed and optimized to meet different business requirements becomes one of the critical bottlenecks of system performance.
Summary of the invention
It is an object of the present invention to provide a globally shared cache replacement method based on long- and short-period access frequency.
The object of the present invention is achieved as follows. Pages are replaced according to a per-block access frequency value that takes both long-period and short-period access patterns into account, which effectively improves the cache hit rate. The system comprises five modules: a mapping management module, a cache replacement module, a cache allocation module, a mirroring module and a cache coherence module, where:
The mapping management module handles the mapping between local memory and the shared virtual memory address space; its function is to map local memory into the shared virtual memory address space and to maintain data consistency;
The cache replacement module handles replacement in the globally shared cache. During address translation, if the page to be accessed is not found in memory, a page fault is raised; when a page fault occurs, the operating system must select a page in memory to evict in order to make room for the page being brought in;
The cache allocation module uses a two-level memory management scheme and performs address translation through multi-level page tables; to keep the distribution of shared memory as even as possible, shared memory is allocated in a round-robin fashion;
The mirroring module writes data simultaneously into two independent memory regions when data is read and written, in order to avoid a single point of failure; this effectively prevents the unexpected failure of any node from causing the loss of cached data;
The cache coherence module uses a directory-based scheme to keep the multiple copies of the same data on multiple nodes of the system consistent. A directory entry is maintained for every page in memory; it records all nodes that currently hold a copy and the state of the data block, including whether it has been written and whether it is exclusive to one node or shared by several nodes. When a node is about to write to a block and the write could make the data inconsistent, it sends, according to the directory contents, invalidate or update messages to the nodes holding copies of the block, thereby maintaining data consistency.
Operating principle of the globally shared cache system based on long- and short-period access frequency:
1) As shown in Fig. 2, all cache units are organized into a global cache pool. Cache units leave or join the pool because of failures or upgrades, and the system uses a cache resource discovery mechanism to grow or shrink the global cache capacity seamlessly, ensuring that storage services are not affected by resource changes. A cache unit actively announces the availability of part or all of its cache by periodically broadcasting resource declaration messages; when a new cache unit joins, it broadcasts a resource declaration message to all controllers, and each controller that receives it adds the unit to its index of available cache resources;
2) As shown in Fig. 2, assume the shared cache pool contains six cache units A, B, C, D, E and F. To keep data highly available, cache mirroring uses a ring high-availability strategy: every piece of cached data is written to two cache units, and the specific mirroring order follows the ring. The services carried by the storage system are isolated by volume; a volume is the basic unit of service data management, and each volume is assigned an owner cache unit. All service data of a volume is cached on its owner cache unit and on the next cache unit after it in the ring. If the owner of volume 1 is cache unit A, its data is cached on cache units A and B; if its owner is B, its cached data goes to cache units B and C, and so on, forming a ring high-availability strategy for cached data.
3) The globally shared cache has three operating modes: normal mode, degraded mode and write-through mode. Cache unit redundancy is implemented with the combined software/hardware state detection mechanism of the multi-controller redundancy scheme. To improve reliability, when a cache unit fails its cache service is taken over only by the forward node in the ring, and the cache pool enters degraded mode. In normal mode and degraded mode the global cache pool writes cached data to the back-end physical disks in write-back mode and tolerates the failure of any single cache unit at a time; in the extreme case where the system has only one cache unit left, the globally shared cache switches to write-through mode, in which a cache unit failure has no effect on data integrity or consistency;
4) A cache unit also contains a logical address mapping mechanism and a remote memory access protocol, which implement logical address mapping and data communication with the cache client module. A cache unit is divided into three parts: an index area, a data area and a mirror area. The index area is used for address translation for mirroring and data and stores the indexes of all data-area and mirror-area pages of this cache unit; the data area stores the read/write cache data of the storage services; the mirror area stores the mirror data sent over by other controllers as well as the index data of other cache units, providing full mirroring of read/write data and index data;
Steps of the replacement method based on long- and short-period access frequency:
1) A host accesses data in two patterns: long-period access and short-term frequent access. Long-period access means that a data block is accessed repeatedly and periodically at long intervals; typical applications include logs and scheduled backups. Short-term frequent access means that a data block is accessed frequently within a short period of time. To maximize the cache hit rate, the global cache should keep short-term frequently accessed data as far as possible; but if short-term frequently accessed data stops being accessed, it should be evicted from the global cache before long-period access data. This requires taking both long-term and short-term access behaviour into account and leads to a global cache replacement scheme based on long- and short-period access frequency: as shown in Fig. 1, a data access frequency value is added for each data block in the global cache.
2) The access frequency value of a data block is a sequence of bytes arranged from high to low; read from the highest byte to the lowest, the bytes record whether the block was accessed in intervals from short to long. B_n[i] denotes the i-th bit (counted from high to low) of the n-th byte. If the data block is accessed within a unit time T, B_0[0] is set to 1, otherwise B_0[0] is set to 0. Every 8n*T, the access frequency value of the data block is updated according to the following rules:
B_n[i+1] = B_n[i]            (i = 0, 1, …, 6)
B_{n+1}[i+1] = B_{n+1}[i]    (i = 0, 1, …, 6)
B_{n+1}[0] = B_n[0] | B_n[1] | … | B_n[6] | B_n[7]
These rules guarantee that, if the access pattern is long-period access, several consecutive bits in a low byte will be 1, whereas if the access pattern is short-term frequent access, several consecutive bits in the high byte will be 1 and the low bytes will only rarely contain runs of consecutive 1 bits;
3) When data in the global cache has to be replaced, the data block with the smallest access frequency value is selected as the victim each time. Short-term frequently accessed data has a large access frequency value, but if it is not accessed for a longer period its access frequency value gradually drops below that of long-period access data. This guarantees that the global cache keeps short-term frequently accessed data as far as possible, yet evicts it before long-period access data once it has not been accessed for a longer period.
Brief description of the drawings
Fig. 1 is a schematic diagram of the data access frequency value;
Fig. 2 is a topology diagram of the globally shared cache pool.
Embodiment
The technical solution of the present invention is described in detail below with reference to the accompanying drawings and examples.
A host accesses data in two main patterns: long-period access and short-term frequent access. Long-period access means that a data block is accessed repeatedly and periodically at long intervals, and traditional aging-based replacement policies cannot recognize this type of access, which makes the cache almost completely ineffective for such workloads.
To solve this technical problem, the invention provides a globally shared cache replacement method based on long- and short-period access frequency. It comprises five modules: mapping management, cache replacement, cache allocation, mirroring and cache coherence, where:
The mapping management module handles the mapping between local memory and the shared virtual memory address space; its function is to map local memory into the shared virtual memory address space and to maintain data consistency.
The cache replacement module handles replacement in the globally shared cache. During address translation, if the page to be accessed is not found in memory, a page fault is raised. When a page fault occurs, the operating system must select a page in memory to evict in order to make room for the page being brought in. This replacement processing is based on the data access type: it distinguishes long-period data from short-period frequently accessed data and performs cache replacement according to this policy.
The cache allocation module uses a two-level memory management scheme and performs address translation through multi-level page tables. To keep the distribution of shared memory as even as possible, shared memory is allocated in a round-robin fashion, as the sketch below illustrates.
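As a rough illustration of this round-robin allocation, the following C sketch cycles an allocation cursor over the cache units so that pages are spread evenly. The types and names (cache_pool_t, cache_unit_t, alloc_shared_page) and the fixed page size are assumptions made for illustration; the patent does not specify an implementation.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096   /* assumed page granularity */
#define MAX_UNITS 16

typedef struct {
    int      id;
    uint8_t *base;         /* start of this unit's shared memory region */
    size_t   pages_total;
    size_t   pages_used;
} cache_unit_t;

typedef struct {
    cache_unit_t units[MAX_UNITS];
    int          nr_units;
    int          next;     /* round-robin cursor */
} cache_pool_t;

/* Allocate one shared page, cycling over the units so that the load on
 * every unit stays roughly even. */
void *alloc_shared_page(cache_pool_t *pool)
{
    for (int tried = 0; tried < pool->nr_units; tried++) {
        cache_unit_t *u = &pool->units[pool->next];
        pool->next = (pool->next + 1) % pool->nr_units;
        if (u->pages_used < u->pages_total)
            return u->base + PAGE_SIZE * u->pages_used++;
    }
    return NULL;   /* pool exhausted */
}
```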
The mirroring module writes data simultaneously into two independent memory regions when data is read and written, in order to avoid a single point of failure; this effectively prevents the unexpected failure of any node from causing the loss of cached data.
The cache coherence module uses a directory-based scheme to keep the multiple copies of the same data on multiple nodes of the system consistent. A directory entry is maintained for every page in memory; it records all nodes that currently hold a copy and the state of the data block (for example whether it has been written, and whether it is exclusive to one node or shared by several nodes). When a node is about to write to a block and the write could make the data inconsistent, it sends, according to the directory contents, invalidate or update messages only to the nodes holding copies of the block, thereby maintaining data consistency. A minimal sketch of this directory logic follows.
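The following C sketch shows one possible shape of such a directory entry and of the invalidation step before a write. The structures and helper names (dir_entry_t, before_write, send_invalidate) are assumptions for illustration only; the patent does not describe a concrete data layout.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_NODES 64

typedef struct {
    uint64_t holders;          /* bitmap of nodes currently holding a copy */
    bool     dirty;            /* the block has been written               */
    int      exclusive_owner;  /* node id if exclusive, -1 if shared       */
} dir_entry_t;

/* Stub for the coherence message; a real system would send it over the
 * redundant crossbar links. */
void send_invalidate(int node, uint64_t page)
{
    (void)node; (void)page;
}

/* Before node `writer` modifies `page`, invalidate every other copy so the
 * multiple copies of the same data stay consistent. */
void before_write(dir_entry_t *dir, uint64_t page, int writer)
{
    for (int n = 0; n < MAX_NODES; n++) {
        if (n != writer && (dir[page].holders & (1ULL << n)))
            send_invalidate(n, page);
    }
    dir[page].holders = 1ULL << writer;   /* writer is now the only holder */
    dir[page].dirty = true;
    dir[page].exclusive_owner = writer;
}
```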
All cache units are organized into a global cache pool. Cache units leave or join the pool because of failures, upgrades or similar reasons; the system uses a cache resource discovery mechanism to grow or shrink the global cache capacity seamlessly and to ensure that storage services are not affected by resource changes. A cache unit actively announces the availability of part or all of its cache by periodically broadcasting resource declaration messages. When a new cache unit joins, it broadcasts a resource declaration message to all controllers, and each controller that receives it adds the unit to its index of available cache resources. Assume the shared cache pool contains six cache units A, B, C, D, E and F. To keep data highly available, cache mirroring uses a ring high-availability strategy: every piece of cached data is written to two cache units, and the specific mirroring order follows the ring. The services carried by the storage system are isolated by volume; a volume is the basic unit of service data management, and each volume is assigned an owner cache unit. All service data of a volume is cached on its owner cache unit and on the next cache unit after it in the ring. If the owner of volume 1 is cache unit A, its data is cached on cache units A and B; if its owner is B, its cached data goes to cache units B and C, and so on, forming a ring high-availability strategy for cached data. The sketch below spells out this owner/mirror rule.
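A minimal sketch of the ring mirroring rule for the six-unit example: the mirror location of a volume is simply the successor of its owner in the ring (A to B, B to C, ..., F back to A). The function name mirror_of is an illustrative assumption.

```c
#include <stdio.h>

#define NR_UNITS 6   /* cache units A..F from the example */

/* The successor of the owner in the ring is where the mirror copy lives. */
int mirror_of(int owner)
{
    return (owner + 1) % NR_UNITS;
}

int main(void)
{
    for (int owner = 0; owner < NR_UNITS; owner++)
        printf("volume owned by unit %c is mirrored to unit %c\n",
               'A' + owner, 'A' + mirror_of(owner));
    return 0;
}
```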
The globally shared cache has three operating modes: normal mode, degraded mode and write-through mode. Cache unit redundancy is implemented with the combined software/hardware state detection mechanism of the multi-controller redundancy scheme. To improve reliability, when a cache unit fails its cache service is taken over only by the forward node in the ring, and the cache pool enters degraded mode. In normal mode and degraded mode the global cache pool writes cached data to the back-end physical disks in write-back mode and can tolerate the failure of any single cache unit at a time. In the extreme case where the system has only one cache unit left, the globally shared cache switches to write-through mode; in write-through mode a cache unit failure has no effect on data integrity or consistency. The mode selection can be summarized as in the sketch below.
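The following sketch condenses the mode rule stated above: write-back while more than one cache unit is healthy (normal or degraded), write-through when only one remains. The enum and function names are assumptions for illustration.

```c
typedef enum {
    MODE_NORMAL,         /* all cache units healthy, write-back      */
    MODE_DEGRADED,       /* a unit has failed, still write-back      */
    MODE_WRITE_THROUGH   /* only one unit left, data written through */
} pool_mode_t;

pool_mode_t select_mode(int healthy_units, int total_units)
{
    if (healthy_units <= 1)
        return MODE_WRITE_THROUGH;
    if (healthy_units < total_units)
        return MODE_DEGRADED;
    return MODE_NORMAL;
}
```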
A cache unit also contains a logical address mapping mechanism and a remote memory access protocol, which implement logical address mapping and data communication with the cache client module. A cache unit is divided into three parts: an index area, a data area and a mirror area. The index area is used for address translation for mirroring and data and stores the indexes of all data-area and mirror-area pages of this cache unit. The data area stores the read/write cache data of the storage services. The mirror area stores the mirror data sent over by other controllers as well as the index data of other cache units, providing full mirroring of read/write data and index data.
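One possible in-memory representation of this three-area layout is sketched below; all field names and the index entry format are assumptions for illustration, since the patent only names the three areas and what they hold.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t logical_addr;   /* logical address the page is mapped to    */
    uint32_t area;           /* 0 = data-area page, 1 = mirror-area page */
    uint32_t page_index;     /* offset of the page within that area      */
} index_entry_t;

typedef struct {
    index_entry_t *index_area;   /* indexes of all data/mirror pages      */
    uint8_t       *data_area;    /* read/write cache data of the services */
    uint8_t       *mirror_area;  /* mirror data and index data from peers */
    size_t         data_pages;
    size_t         mirror_pages;
} cache_unit_layout_t;
```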
A host accesses data in two main patterns: long-period access and short-term frequent access. Long-period access means that a data block is accessed repeatedly and periodically at long intervals; typical applications include logs and scheduled backups. Short-term frequent access means that a data block is accessed frequently within a short period of time. To maximize the cache hit rate, the global cache should keep short-term frequently accessed data as far as possible; but if short-term frequently accessed data stops being accessed, it should be evicted from the global cache before long-period access data. This requires taking both long-term and short-term access behaviour into account and leads to a global cache replacement scheme based on long- and short-period access frequency: as shown in Fig. 1, a data access frequency value is added for each data block in the global cache.
The access frequency value of a data block is a sequence of bytes arranged from high to low; read from the highest byte to the lowest, the bytes record whether the block was accessed in intervals from short to long. B_n[i] denotes the i-th bit (counted from high to low) of the n-th byte. If the data block is accessed within a unit time T, B_0[0] is set to 1, otherwise B_0[0] is set to 0. Every 8n*T, the access frequency value of the data block is updated according to the following rules:
B_n[i+1] = B_n[i]            (i = 0, 1, …, 6)
B_{n+1}[i+1] = B_{n+1}[i]    (i = 0, 1, …, 6)
B_{n+1}[0] = B_n[0] | B_n[1] | … | B_n[6] | B_n[7]
These rules guarantee that, if the access pattern is long-period access, several consecutive bits in a low byte will be 1, whereas if the access pattern is short-term frequent access, several consecutive bits in the high byte will be 1 and the low bytes will only rarely contain runs of consecutive 1 bits.
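The update rules can be implemented with a few bit operations per data block. The C sketch below is one possible reading, assuming that B_n[0] is the most significant bit of byte n, that the three assignments use the values from before the update, and that the number of bytes (FREQ_BYTES) and the function names are illustrative choices; the patent gives only the rules, not an implementation.

```c
#include <stdbool.h>
#include <stdint.h>

#define FREQ_BYTES 4   /* bytes per access frequency value (assumed) */

typedef struct {
    uint8_t b[FREQ_BYTES];   /* b[0] = shortest interval, most significant */
} access_freq_t;

/* Once per unit time T: B_0[0] records whether the block was accessed. */
void record_interval(access_freq_t *f, bool accessed)
{
    if (accessed)
        f->b[0] |= 0x80;     /* set B_0[0] (the MSB of byte 0) */
    else
        f->b[0] &= 0x7f;     /* clear B_0[0]                   */
}

/* Applied "every 8n*T" for level n, as stated above.  Implements
 *   B_n[i+1]     = B_n[i],
 *   B_{n+1}[i+1] = B_{n+1}[i],
 *   B_{n+1}[0]   = B_n[0] | ... | B_n[7],
 * using the pre-update value of byte n for the OR.  After the shift,
 * B_n[0] is left clear until the lower level (or record_interval for
 * n = 0) refills it, an interpretation, since the rules leave it open. */
void apply_update(access_freq_t *f, int n)
{
    if (n + 1 >= FREQ_BYTES)
        return;
    uint8_t summary = (f->b[n] != 0) ? 0x80 : 0x00;  /* OR of bits of B_n  */
    f->b[n]     >>= 1;                               /* shift toward LSB   */
    f->b[n + 1] >>= 1;
    f->b[n + 1] |= summary;                          /* becomes B_{n+1}[0] */
}
```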
When data in the global cache has to be replaced, the data block with the smallest access frequency value is selected as the victim each time. Short-term frequently accessed data has a large access frequency value, but if it is not accessed for a longer period its access frequency value gradually drops below that of long-period access data. This guarantees that the global cache keeps short-term frequently accessed data as far as possible, yet evicts it before long-period access data once it has not been accessed for a longer period. A sketch of this victim selection follows.
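Continuing the sketch above (reusing access_freq_t and FREQ_BYTES), victim selection reduces to treating the byte array as a big-endian integer, so that recent short-term activity in byte 0 dominates, and evicting the block with the smallest value. The cache_block_t table and the pick_victim name are illustrative assumptions.

```c
#include <string.h>

typedef struct {
    access_freq_t freq;
    int           valid;
    /* ... block id, data pointer, etc. */
} cache_block_t;

int pick_victim(const cache_block_t *blocks, int nr_blocks)
{
    int victim = -1;
    for (int i = 0; i < nr_blocks; i++) {
        if (!blocks[i].valid)
            continue;
        if (victim < 0 ||
            memcmp(blocks[i].freq.b, blocks[victim].freq.b, FREQ_BYTES) < 0)
            victim = i;   /* lexicographic compare == big-endian value order */
    }
    return victim;        /* index of the block to replace, or -1 if none */
}
```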
Technical features not described in this specification are known to those skilled in the art.

Claims (1)

1. A globally shared cache replacement method based on long- and short-period access frequency, characterized in that pages are replaced according to a per-block access frequency value that takes both long-period and short-period access patterns into account, which effectively improves the cache hit rate; the system comprises five modules, namely a mapping management module, a cache replacement module, a cache allocation module, a mirroring module and a cache coherence module, wherein:
the mapping management module handles the mapping between local memory and the shared virtual memory address space; its function is to map local memory into the shared virtual memory address space and to maintain data consistency;
the cache replacement module handles replacement in the globally shared cache; during address translation, if the page to be accessed is not found in memory, a page fault is raised, and when a page fault occurs the operating system must select a page in memory to evict in order to make room for the page being brought in;
the cache allocation module uses a two-level memory management scheme and performs address translation through multi-level page tables; to keep the distribution of shared memory as even as possible, shared memory is allocated in a round-robin fashion;
the mirroring module writes data simultaneously into two independent memory regions when data is read and written, in order to avoid a single point of failure, effectively preventing the unexpected failure of any node from causing the loss of cached data;
the cache coherence module uses a directory-based scheme to keep the multiple copies of the same data on multiple nodes of the system consistent; a directory entry is maintained for every page in memory, recording all nodes that currently hold a copy and the state of the data block, including whether it has been written and whether it is exclusive to one node or shared by several nodes; when a node is about to write to a block and the write could make the data inconsistent, it sends, according to the directory contents, invalidate or update messages to the nodes holding copies of the block, thereby maintaining data consistency; the operating principle of the globally shared cache system based on long- and short-period access frequency is:
1) all cache units are organized into a global cache pool; cache units leave or join the pool because of failures or upgrades, and the system uses a cache resource discovery mechanism to grow or shrink the global cache capacity seamlessly, ensuring that storage services are not affected by resource changes; a cache unit actively announces the availability of part or all of its cache by periodically broadcasting resource declaration messages; when a new cache unit joins, it broadcasts a resource declaration message to all controllers, and each controller that receives it adds the unit to its index of available cache resources;
2) assume the shared cache pool contains six cache units A, B, C, D, E and F; to keep data highly available, cache mirroring uses a ring high-availability strategy, that is, every piece of cached data is written to two cache units and the specific mirroring order follows the ring; the services carried by the storage system are isolated by volume, a volume being the basic unit of service data management; each volume is assigned an owner cache unit, and all service data of a volume is cached on its owner cache unit and on the next cache unit after it in the ring; if the owner of volume 1 is cache unit A, its data is cached on cache units A and B; if its owner is B, its cached data goes to cache units B and C, and so on, forming a ring high-availability strategy for cached data;
3) the globally shared cache has three operating modes, namely normal mode, degraded mode and write-through mode; cache unit redundancy is implemented with the combined software/hardware state detection mechanism of the multi-controller redundancy scheme; to improve reliability, when a cache unit fails its cache service is taken over only by the forward node in the ring, and the cache pool enters degraded mode; in normal mode and degraded mode the global cache pool writes cached data to the back-end physical disks in write-back mode and tolerates the failure of any single cache unit at a time; in the extreme case where the system has only one cache unit left, the globally shared cache switches to write-through mode, in which a cache unit failure has no effect on data integrity or consistency;
4) a cache unit also contains a logical address mapping mechanism and a remote memory access protocol, which implement logical address mapping and data communication with the cache client module; a cache unit is divided into three parts, an index area, a data area and a mirror area; the index area is used for address translation for mirroring and data and stores the indexes of all data-area and mirror-area pages of this cache unit; the data area stores the read/write cache data of the storage services; the mirror area stores the mirror data sent over by other controllers as well as the index data of other cache units, providing full mirroring of read/write data and index data;
the steps of the replacement method based on long- and short-period access frequency are:
1) a host accesses data in two patterns, long-period access and short-term frequent access; long-period access means that a data block is accessed repeatedly and periodically at long intervals, typical applications including logs and scheduled backups; short-term frequent access means that a data block is accessed frequently within a short period of time; to maximize the cache hit rate, the global cache should keep short-term frequently accessed data as far as possible, but if short-term frequently accessed data stops being accessed it should be evicted from the global cache before long-period access data; this requires taking both long-term and short-term access behaviour into account and leads to a global cache replacement scheme based on long- and short-period access frequency, in which a data access frequency value is added for each data block in the global cache;
2) the access frequency value of a data block is a sequence of bytes arranged from high to low; read from the highest byte to the lowest, the bytes record whether the block was accessed in intervals from short to long; B_n[i] denotes the i-th bit (counted from high to low) of the n-th byte; if the data block is accessed within a unit time T, B_0[0] is set to 1, otherwise B_0[0] is set to 0; every 8n*T, the access frequency value of the data block is updated according to the following rules:
B_n[i+1] = B_n[i]            (i = 0, 1, …, 6)
B_{n+1}[i+1] = B_{n+1}[i]    (i = 0, 1, …, 6)
B_{n+1}[0] = B_n[0] | B_n[1] | … | B_n[6] | B_n[7]
the above rules guarantee that, if the access pattern is long-period access, several consecutive bits in a low byte will be 1, whereas if the access pattern is short-term frequent access, several consecutive bits in the high byte will be 1 and the low bytes will only rarely contain runs of consecutive 1 bits;
3) when data in the global cache has to be replaced, the data block with the smallest access frequency value is selected as the victim each time; short-term frequently accessed data has a large access frequency value, but if it is not accessed for a longer period its access frequency value gradually drops below that of long-period access data; this guarantees that the global cache keeps short-term frequently accessed data as far as possible, yet evicts it before long-period access data once it has not been accessed for a longer period.
CN201310195427.7A 2013-05-24 2013-05-24 Globally shared cache replacement method based on long- and short-period access frequency Active CN103246616B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310195427.7A CN103246616B (en) 2013-05-24 2013-05-24 Globally shared cache replacement method based on long- and short-period access frequency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310195427.7A CN103246616B (en) 2013-05-24 2013-05-24 Globally shared cache replacement method based on long- and short-period access frequency

Publications (2)

Publication Number Publication Date
CN103246616A CN103246616A (en) 2013-08-14
CN103246616B true CN103246616B (en) 2017-09-26

Family

ID=48926144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310195427.7A Active CN103246616B (en) 2013-05-24 2013-05-24 Globally shared cache replacement method based on long- and short-period access frequency

Country Status (1)

Country Link
CN (1) CN103246616B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268101B (en) * 2014-09-22 2017-12-05 无锡城市云计算中心有限公司 A kind of memory allocation method and device
CN104331352B (en) * 2014-11-19 2018-03-09 浪潮(北京)电子信息产业有限公司 Detection method and device are read outside cache uniformity chip address band
CN104461935B (en) * 2014-11-27 2018-03-13 华为技术有限公司 A kind of method, apparatus and system for carrying out data storage
US10019373B2 (en) * 2014-12-19 2018-07-10 Mediatek Inc. Memory management method for supporting shared virtual memories with hybrid page table utilization and related machine readable medium
GB2539383B (en) * 2015-06-01 2017-08-16 Advanced Risc Mach Ltd Cache coherency
CN107810490A (en) * 2015-06-18 2018-03-16 华为技术有限公司 System and method for the buffer consistency based on catalogue
WO2017117734A1 (en) * 2016-01-06 2017-07-13 华为技术有限公司 Cache management method, cache controller and computer system
CN107291635B (en) * 2017-06-16 2021-06-29 郑州云海信息技术有限公司 Cache replacement method and device
CN107329696B (en) * 2017-06-23 2019-05-14 华中科技大学 A kind of method and system guaranteeing data corruption consistency
CN109376022B (en) * 2018-09-29 2021-12-14 中国科学技术大学 Thread model implementation method for improving execution efficiency of Halide language in multi-core system
CN111158578B (en) * 2018-11-08 2022-09-06 浙江宇视科技有限公司 Storage space management method and device
CN109582895A (en) * 2018-12-04 2019-04-05 山东浪潮通软信息科技有限公司 A kind of cache implementing method
CN110750507B (en) * 2019-09-30 2022-09-20 华中科技大学 Persistent client caching method and system under global namespace facing DFS
CN111273860B (en) * 2020-01-15 2022-07-08 华东师范大学 Distributed memory management method based on network and page granularity management
WO2022021158A1 (en) * 2020-07-29 2022-02-03 华为技术有限公司 Cache system, method and chip
CN112612727B (en) * 2020-12-08 2023-07-07 成都海光微电子技术有限公司 Cache line replacement method and device and electronic equipment
CN115374046B (en) * 2022-10-21 2023-03-14 山东云海国创云计算装备产业创新中心有限公司 Multiprocessor data interaction method, device, equipment and storage medium
CN117032596B (en) * 2023-10-09 2024-01-26 苏州元脑智能科技有限公司 Data access method and device, storage medium and electronic equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7529888B2 (en) * 2004-11-19 2009-05-05 Intel Corporation Software caching with bounded-error delayed update

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1996944A (en) * 2006-11-09 2007-07-11 华中科技大学 A method for global buffer management of the cluster storage system
CN101609432A (en) * 2009-07-13 2009-12-23 中国科学院计算技术研究所 Shared buffer memory management system and method
CN102262512A (en) * 2011-07-21 2011-11-30 浪潮(北京)电子信息产业有限公司 System, device and method for realizing disk array cache partition management
CN102609362A (en) * 2012-01-30 2012-07-25 复旦大学 Method for dynamically dividing shared high-speed caches and circuit

Also Published As

Publication number Publication date
CN103246616A (en) 2013-08-14

Similar Documents

Publication Publication Date Title
CN103246616B (en) Globally shared cache replacement method based on long- and short-period access frequency
CN107526546B (en) Spark distributed computing data processing method and system
US9348527B2 (en) Storing data in persistent hybrid memory
US9361236B2 (en) Handling write requests for a data array
US10169232B2 (en) Associative and atomic write-back caching system and method for storage subsystem
CN106126112B (en) Multiple stripe memory with multiple read ports and multiple write ports per cycle
CN102117248A (en) Caching system and method for caching data in caching system
CN103106286B (en) Method and device for managing metadata
US20120102273A1 (en) Memory agent to access memory blade as part of the cache coherency domain
CN103019948A (en) Working set exchange using continuously-sorted swap files
CN100383792C (en) Buffer data base data organization method
CN106066890B (en) Distributed high-performance database all-in-one machine system
US11093410B2 (en) Cache management method, storage system and computer program product
CN104145252A (en) Adaptive cache promotions in a two level caching system
CN105138292A (en) Disk data reading method
CN110196818A (en) Data cached method, buffer memory device and storage system
CN102763091A (en) Integrating a flash cache into large storage systems
CN106446268A (en) Database lateral extension system and method
CN105897859A (en) Storage system
CN107341114A (en) A kind of method of directory management, Node Controller and system
CN110297787A (en) The method, device and equipment of I/O equipment access memory
CN104298574A (en) Data high-speed storage processing system
US10705977B2 (en) Method of dirty cache line eviction
US20140297966A1 (en) Operation processing apparatus, information processing apparatus and method of controlling information processing apparatus
CN106126434B (en) The replacement method and its device of the cache lines of the buffer area of central processing unit

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant