CN108984130A - Caching read method and device for distributed storage - Google Patents

Caching read method and device for distributed storage (Download PDF)

Info

Publication number
CN108984130A
CN108984130A (application CN201810825942.1A)
Authority
CN
China
Prior art keywords
caching
operation data
access frequency
data
read
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810825942.1A
Other languages
Chinese (zh)
Inventor
方兰春 (Fang Lanchun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Inspur Smart Computing Technology Co Ltd
Original Assignee
Guangdong Inspur Big Data Research Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Inspur Big Data Research Co Ltd filed Critical Guangdong Inspur Big Data Research Co Ltd
Priority to CN201810825942.1A
Publication of CN108984130A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 Data buffering arrangements
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Abstract

The invention discloses a caching read method for distributed storage, comprising: when a read operation request is received from a client, judging whether the data to be operated on exists in the cache; if the data to be operated on does not exist in the cache, reading it from disk and judging whether its access frequency is higher than a preset access frequency; and if the access frequency of the data to be operated on is higher than the preset access frequency, adding the data to the cache. The invention preferentially caches the data that clients access most often under a limited cache capacity, so that the cache hit rate increases and the cache read performance of the distributed storage system improves. The invention also discloses a caching read system, a device, and a computer-readable storage medium for distributed storage, which have the same beneficial effects as the above method.

Description

Caching read method and device for distributed storage
Technical field
The present invention relates to the technical field of storage, and in particular to a caching read method for distributed storage. The present invention also relates to a system, a device, and a computer-readable storage medium comprising the above caching read method for distributed storage.
Background technique
In recent years, the development of the Internet has produced ever more scenarios involving the transmission and storage of massive amounts of information, and against this background data storage technology has also developed rapidly.
In the cloud computing era, users place ever higher performance requirements on distributed storage systems: data must not only be stored quickly but also read quickly. To make effective use of capacity, a distributed storage system generally adopts an erasure-coding redundancy rule. Under erasure coding, an object file is divided into K source data blocks, from which M redundant data blocks are computed by the erasure-coding algorithm; the K+M data blocks are then stored on the hard disks of storage nodes at different locations, and when the data is read, any K of the K+M blocks suffice to recompute the source data. In a practical scenario, to read the data of one file the distributed storage system must read data from at least K hard disks at the back end, assemble the blocks read from those K disks into the object data, and then obtain the required file data from it; the read performance of such a distributed storage system is therefore low.
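The K-of-(K+M) read property described above can be illustrated with a toy XOR-parity code (K=2, M=1), under which any 2 of the 3 stored blocks recover the source data. This is a sketch only; real distributed storage systems typically use Reed-Solomon codes, and all names here are illustrative, not from the patent.

```python
def encode(d0: bytes, d1: bytes) -> bytes:
    """Compute one parity block from two equal-length source blocks (M=1)."""
    return bytes(a ^ b for a, b in zip(d0, d1))

def reconstruct(blocks: dict) -> tuple:
    """Recover (d0, d1) from any 2 of the 3 blocks keyed 'd0', 'd1', 'p'."""
    if 'd0' in blocks and 'd1' in blocks:
        return blocks['d0'], blocks['d1']
    if 'd0' in blocks:  # d1 lost: d1 = d0 xor p
        return blocks['d0'], bytes(a ^ b for a, b in zip(blocks['d0'], blocks['p']))
    # d0 lost: d0 = d1 xor p
    return bytes(a ^ b for a, b in zip(blocks['d1'], blocks['p'])), blocks['d1']

d0, d1 = b'hell', b'o-wd'
p = encode(d0, d1)
# Read succeeds with one of the three blocks missing, i.e. from any K=2 blocks.
assert reconstruct({'d1': d1, 'p': p}) == (d0, d1)
```

The cost the background section points out is visible here: even a cache-less read must gather K blocks from K disks before the object can be assembled.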
At present, the main method for improving the read performance of a distributed storage system is to cache the complete data of an object in the primary placement group; the object data is cached without any selectivity, and on a cache hit the required data is read directly from the cache. However, because clients read files randomly, with a fixed cache capacity the cache hit rate is low and the resulting improvement in read performance is very limited.
Therefore, how to provide a scheme that solves the above technical problem is a problem that those skilled in the art currently need to solve.
Summary of the invention
In view of this, the purpose of the present invention is to provide a caching read method for distributed storage that can improve the cache hit rate and thereby the cache read rate of a distributed storage system. A further object of the present invention is to provide a system, a device, and a computer-readable storage medium comprising the steps of the above caching read method, improving the cache read performance of distributed storage.
In order to solve the above technical problems, the present invention provides a caching read method for distributed storage, comprising:
when a read operation request is received from a client, judging whether the data to be operated on exists in the cache;
if the data to be operated on does not exist in the cache, reading the data from disk and judging whether its access frequency is higher than a preset access frequency; and
if the access frequency of the data to be operated on is higher than the preset access frequency, adding the data to the cache.
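The three claimed steps can be sketched as a small Python function. The dictionary-based cache, backing store, and frequency table are illustrative assumptions for the sketch, not structures defined by the patent.

```python
def read(key, cache: dict, disk: dict, freq: dict, threshold: int):
    """Sketch of the claimed read path: on a hit, return cached data; on a
    miss, read from disk and admit into the cache only when the counted
    access frequency exceeds the preset threshold."""
    freq[key] = freq.get(key, 0) + 1    # count this access
    if key in cache:                    # step 1: data exists in the cache
        return cache[key]
    data = disk[key]                    # step 2: miss, read from disk
    if freq[key] > threshold:           # step 3: admit only hot data
        cache[key] = data
    return data
```

For example, with `threshold=1` the first read of a key is served from disk without polluting the cache, and only the second read admits the data, which is the selectivity the method relies on.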
Preferably, if the access frequency of the data to be operated on is higher than the preset access frequency, adding the data to the cache comprises:
if the access frequency of the data to be operated on is higher than the preset access frequency, judging whether a free block is available in the cache to store the data;
if there is no free block, judging whether the access frequency of the data to be operated on is higher than the lowest access frequency in the cache;
if it is higher than the lowest frequency, deleting the data corresponding to the lowest frequency; and
adding the data to be operated on to the cache.
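The admission steps above (free-block check, then comparison against the lowest in-cache frequency, then deletion) can be sketched as follows; the capacity counted in entries and the dictionary structures are assumptions of this sketch.

```python
def admit(key, data, cache: dict, freq: dict, capacity: int) -> bool:
    """Admit `key` into the cache, evicting the lowest-frequency entry only
    when the incoming item's access frequency is higher, per the steps above."""
    if key in cache:
        return True
    if len(cache) < capacity:           # a free block is available
        cache[key] = data
        return True
    victim = min(cache, key=lambda k: freq.get(k, 0))
    if freq.get(key, 0) > freq.get(victim, 0):
        del cache[victim]               # delete the lowest-frequency data
        cache[key] = data
        return True
    return False                        # colder than everything cached
```

Note the method deliberately refuses admission when the candidate is no hotter than the coldest cached entry, so a full cache is never churned by cold data.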
Preferably, if the access frequency of the data to be operated on is higher than the preset access frequency, adding the data to the cache comprises:
if the access frequency of the data to be operated on is higher than the preset access frequency, judging whether a free block is available in the cache to store the data;
if there is no free block, selecting a block for eviction, and deleting the data of the evicted block; and
adding the data to be operated on to the cache.
Preferably, selecting a block for eviction further comprises: selecting the evicted block by the LRU algorithm.
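The LRU policy named as the preferred eviction algorithm can be sketched with Python's `collections.OrderedDict`; the class and method names are illustrative, not from the patent.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used eviction, one of the candidate
    replacement policies (LFU, LRU, ARC, FIFO, MRU) mentioned here."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()      # insertion order tracks recency

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)     # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
```

Under this policy the evicted block is simply the one untouched for the longest time, which approximates low access frequency without keeping explicit counters.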
Preferably, the method further comprises: if the data to be operated on exists in the cache, reading the data from the cache.
The present invention also provides a caching read system for distributed storage, comprising:
a judgment module, configured to judge, when a read operation request is received from a client, whether the data to be operated on exists in the cache;
a heat module, configured to count the access frequency of the data to be operated on and judge whether the access frequency is higher than a preset access frequency; and
a cache module, configured to add the data whose access frequency is higher than the preset access frequency to the cache.
Preferably, the judgment module comprises:
a first processing unit, which judges whether the data to be operated on exists in the cache; and
a second processing unit, which reads the data from the cache if it exists, and performs the judgment of its access frequency if it does not.
Preferably, the heat module comprises:
an access frequency judging unit, configured to judge whether the access frequency of the data to be operated on is higher than the preset access frequency;
a cache judging unit, configured to judge whether there is spare space in the cache; and
a cache processing unit, configured to, if there is no spare space in the cache, select data for eviction according to the computed access frequencies, delete the evicted data using the eviction algorithm, and add the data to be operated on to the cache.
The present invention also provides a caching read device for distributed storage, comprising:
a memory for storing a computer program; and
a processor configured to implement, when executing the computer program, the steps of the caching read method for distributed storage described in any of the above.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the caching read method for distributed storage.
With the caching read method for distributed storage provided by the invention, when a read operation request is received from a client and the data to be operated on does not exist in the cache, the data is read from disk; based on the counted access frequency of the data, it is first judged whether the access frequency of the data to be operated on is higher than a preset access frequency. If so, the data is added to the cache and the read result is returned to the client. By admitting into the cache only data whose access frequency exceeds the preset access frequency, the data that clients access most often is cached preferentially under a limited cache capacity; serving required data from the cache thus speeds up reads of the data to be operated on, increases the cache hit rate, improves the read performance of the distributed storage system, and improves the competitiveness of the product. The present invention also provides a caching read system, a device, and a computer-readable storage medium for distributed storage, which have the same beneficial effects as the above method and are not described again here.
Detailed description of the invention
In order to explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the invention; those of ordinary skill in the art can obtain other drawings from the drawings provided without creative effort.
Fig. 1 is a process flow chart of a caching read method for distributed storage provided by the invention;
Fig. 2 is a process flow chart of another caching read method for distributed storage provided by the invention;
Fig. 3 is a process flow chart of yet another caching read method for distributed storage provided by the invention;
Fig. 4 is a structural schematic diagram of a caching read system for distributed storage provided by the invention.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
In the prior art, the main method for improving the read performance of a distributed storage system is to read the required data directly from the cache on a cache hit; the cached data is not selected, so when clients read files with a fixed cache capacity the cache hit rate is low. To read data faster, this embodiment selectively caches data whose access frequency is higher than a preset access frequency, which solves the above problem, improves the cache hit rate, and in turn improves the data read speed and the competitiveness of the product. Specifically, referring to Fig. 1, which is a process flow chart of a caching read method for distributed storage provided by the invention, the method comprises:
S101: when a read operation request is received from a client, judging whether the data to be operated on exists in the cache.
Specifically, the user reads the required data, i.e. the data to be operated on, by inputting a data read instruction to the server; the instruction carries information about the data to be read, such as access frequency information, data size information, and the data storage address. To read the data to be operated on from the cache, the data must first exist in the cache; this embodiment therefore further parses the data to be operated on to determine whether it exists in the cache, in preparation for deciding whether the data can be read directly from the cache.
S102: if the data to be operated on does not exist in the cache, reading the data from disk, and judging whether its access frequency is higher than a preset access frequency.
Specifically, this embodiment parses the data to be operated on to determine whether it exists in the cache. Absence means that the data to be operated on is partly or entirely missing from the cache, so it cannot be read from the cache and must be read from disk to allow the client to read the complete data. To improve the cache hit rate, the access frequency of the data to be operated on must then be judged. The access frequency is the frequency with which clients access the data; the user can set its window according to the actual situation, for example the access frequency over the past month or over the past year. Other cases are also possible and depend on the user's own settings; they are not enumerated one by one here.
Further, in order to distinguish how frequently data is accessed, this embodiment judges whether the access frequency of the data to be operated on is higher than the preset access frequency, and thereby determines whether the data is accessed often. If the access frequency of the data to be operated on is higher than the preset access frequency, the data is accessed often; if it is added to the cache, it can be accessed faster, improving data read efficiency. If the access frequency is lower than the preset access frequency, the data is accessed only occasionally. This embodiment does not limit the preset access frequency; the user can set it according to the actual situation. Of course, as conditions change, the user can modify the preset frequency. For example, when the data to be operated on is attracting great attention, its access frequency may still be low but rising rapidly, and the user can adjust the preset frequency accordingly.
S103: if the access frequency of the data to be operated on is higher than the preset access frequency, adding the data to the cache.
Specifically, if the access frequency of the data to be operated on is higher than the preset access frequency, the data is accessed frequently; adding it to the cache allows it to be accessed faster, improving data read efficiency. In general, clients are more likely to read data with a high access frequency, so when a client reads data, the probability that the data to be operated on is high-access-frequency data is large. If high-access-frequency data is in the cache, the data to be operated on can be read directly from it; the data that clients access often is thus cached under a limited cache capacity, improving the cache hit rate and the data read speed.
Caching technology buffers frequently accessed data in a place where it can be accessed faster, improving access efficiency. In any case, the size of the cache is fixed, so selectivity in what is cached is essential. This embodiment does not limit the remaining space in the cache. It is judged whether the cache has enough spare space to store the data to be operated on: when there is enough space, the data is stored directly into the cache; when there is not, the access frequency of the data to be operated on must be further judged in some way, and the data is ultimately stored in the cache. For example, enough space can be obtained by compressing data in the cache, or by deleting data in the cache. Whichever method is used, the data to be operated on is finally stored in the cache, so that on the next read operation instruction it can be accessed faster, improving data read efficiency; the data is added to the cache and the read result is returned to the client, keeping the cache up to date and improving the cache hit rate when the client issues an operation.
Based on the above technical solution, with the caching read method for distributed storage provided by this embodiment of the invention, when a read operation request is received from a client and the data to be operated on does not exist in the cache, the data is read from disk; based on the counted access frequency, it is first judged whether the access frequency of the data to be operated on is higher than a preset access frequency. If so, the data is shown to be accessed frequently; adding it to the cache allows it to be accessed faster and improves data read efficiency. The cache is therefore updated in time and the read result is returned to the client, improving the cache hit rate on subsequent read operations. By counting the access frequency of the data to be operated on and admitting into the cache only data whose access frequency exceeds the preset access frequency, the data that clients access most often is cached preferentially under a limited cache capacity, improving the cache hit rate and the read performance of the distributed storage system.
Based on the above embodiment, when the access frequency of the data to be operated on is higher than the preset access frequency and the data is to be added to the cache, the space in the cache is further judged: only when there is spare space in the cache can the data be stored into it. Specifically, referring to Fig. 2, which is a process flow chart of another caching read method for distributed storage provided by the invention, a preferred embodiment, the method may comprise:
S201: when a read operation request is received from a client, judging whether the data to be operated on exists in the cache.
S202: if the data to be operated on does not exist in the cache, reading the data from disk, and judging whether its access frequency is higher than a preset access frequency.
S203: if the access frequency of the data to be operated on is higher than the preset access frequency, judging whether a free block is available in the cache to store the data.
The size of a free block fully meets the storage requirement of the data to be cached. For example, if there are two free blocks, each free block can store one piece of data to be operated on.
Specifically, if the access frequency of the data to be operated on is higher than the preset access frequency, a condition judgment is performed on the free blocks in the cache to determine whether a free block is available to store the data; if the cache has a free block, the data to be operated on is stored into it.
S204: if there is no free block, judging whether the access frequency of the data to be operated on is higher than the lowest access frequency in the cache.
Specifically, when there is no free block in the cache, it must further be judged whether the access frequency of the data to be operated on is higher than the lowest access frequency in the cache, thereby judging how hot the data to be operated on is.
The server counts the access frequencies of all data in the cache, and the system sorts all the data according to a preset condition; this preset condition may or may not be the same as the setting of the preset access frequency, can be set according to the actual demand of the user, and yields the lowest frequency.
The access frequency of the data to be operated on is compared with the lowest frequency. If the access frequency of the data to be operated on is lower than the lowest frequency, it should be understood that other methods may still be used, while ensuring that the existing data remains in the cache, to ensure that the data to be operated on can be stored in the cache; for example, the data to be operated on and part of the data in the cache can be compressed so that a free block becomes available and the data can be saved in the cache.
S205: if it is higher than the lowest frequency, deleting the data with the lowest frequency.
Specifically, when the access frequency of the data to be operated on is higher than the lowest frequency in the cache, the data corresponding to the lowest frequency is deleted, so that the cache has a free block for the data to be operated on.
S206: adding the data to be operated on to the cache.
In this embodiment, when there is no free block in the cache, the access frequency of the data to be operated on is compared with the lowest frequency in the cache; when the access frequency of the data to be operated on is higher than the lowest frequency, data in the cache is selectively deleted, and with a free block thus ensured the data to be operated on is added to the cache. The cached data is updated in time, improving the cache hit rate when the client issues a read operation request; without exceeding the cache limit, the hit rate and read performance are improved.
Likewise, as another preferred embodiment, referring specifically to Fig. 3, which is a process flow chart of yet another caching read method for distributed storage provided by the invention, the method may comprise:
S301: when a read operation request is received from a client, judging whether the data to be operated on exists in the cache.
S302: if the data to be operated on does not exist in the cache, reading the data from disk, and judging whether its access frequency is higher than a preset access frequency.
S303: if the access frequency of the data to be operated on is higher than the preset access frequency, judging whether a free block is available in the cache to store the data.
The specifics are the same as in the preceding preferred embodiment and are not repeated here.
S304: if there is no free block, selecting a block for eviction, and deleting the data of the evicted block.
Specifically, if there is no free block, data for eviction is selected from the cache, and the block corresponding to the evicted data is deleted, producing a free block to store the data to be operated on.
It should be noted that the algorithm for selecting the evicted block from the cache may be LFU, LRU, ARC, FIFO, MRU, or the like; whichever it is, it should be an algorithm that achieves the object of the invention.
S305: adding the data to be operated on to the cache.
The evicted block is preferably selected by the LRU algorithm.
In this embodiment, when there is no free block in the cache, the block to be evicted is found in the cache by the algorithm and deleted, and with a free block thus ensured the data to be operated on is added to the cache. The cached data is updated in time, improving the cache hit rate when the client issues a read operation request; without exceeding the cache limit, the hit rate and read performance are improved.
Based on the above technical solution, the caching read method for distributed storage provided by this embodiment of the invention can improve the cache hit rate when data is read, thereby improving the data read speed and the competitiveness of the product.
Further, if the data to be operated on exists in the cache, the data is read from the cache.
Specifically, when the data to be operated on exists in the cache, it is read from the cache, which speeds up the read. In the above manner, data whose access frequency is higher than the preset access frequency is selectively stored in the cache, improving the read performance of the distributed storage device and the competitiveness of the product.
The caching read system, device, and computer-readable storage medium for distributed storage provided by the embodiments of the invention are introduced below; the caching read system, device, and computer-readable storage medium described below and the caching read method for distributed storage described above may be cross-referenced.
Referring to Fig. 4, which is a structural schematic diagram of a caching read system 100 for distributed storage provided by the invention, the caching read system 100 comprises:
a judgment module 200, configured to judge, when a read operation request is received from a client, whether the data to be operated on exists in the cache;
a heat module 300, configured to count the access frequency of the data to be operated on, judge whether the access frequency is higher than a preset access frequency, and judge whether the access frequency is higher than the lowest frequency; and
a cache module 400, configured to add the data whose access frequency is higher than the preset access frequency to the cache.
As a specific embodiment, the judgment module 200 comprises:
a first processing unit, which judges whether the data to be operated on exists in the cache; and
a second processing unit, which reads the data from the cache if it exists, and performs the judgment of its access frequency if it does not.
As a specific embodiment, the heat module 300 comprises:
an access frequency judging unit, configured to judge whether the access frequency of the data to be operated on is higher than the preset access frequency, and whether it is higher than the lowest frequency;
a cache judging unit, configured to judge whether there is spare space in the cache; and
a cache processing unit, configured to, if there is no spare space in the cache, select data for eviction, delete the evicted data, and add the data to be operated on to the cache.
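The cooperation of the three modules described above can be sketched in one small class; the class name, method names, and dictionary-based structures are illustrative assumptions, not the patent's implementation.

```python
class CacheReadSystem:
    """Hedged sketch of the judgment, heat, and cache modules working together."""
    def __init__(self, disk: dict, capacity: int, threshold: int):
        self.disk, self.capacity, self.threshold = disk, capacity, threshold
        self.cache, self.freq = {}, {}

    def judge(self, key) -> bool:           # judgment module
        return key in self.cache

    def heat(self, key) -> bool:            # heat module: count and compare
        self.freq[key] = self.freq.get(key, 0) + 1
        return self.freq[key] > self.threshold

    def admit(self, key, data):             # cache module: evict coldest if full
        if len(self.cache) >= self.capacity:
            victim = min(self.cache, key=lambda k: self.freq.get(k, 0))
            del self.cache[victim]
        self.cache[key] = data

    def read(self, key):
        hot = self.heat(key)
        if self.judge(key):
            return self.cache[key]          # cache hit
        data = self.disk[key]               # cache miss: read from disk
        if hot:
            self.admit(key, data)
        return data
```

Each method maps to one module of Fig. 4: `judge` to module 200, `heat` to module 300, and `admit` to module 400.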
For the introduction of the caching read system for distributed storage provided by the invention, please refer to the above embodiments; it is not repeated here.
The present invention also provides a caching read device for distributed storage, comprising:
a memory for storing a computer program; and
a processor configured to implement, when executing the computer program, the steps of the above caching read method for distributed storage.
For the introduction of the caching read device for distributed storage provided by the invention, please refer to the above embodiments; it is not repeated here.
The present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above caching read method for distributed storage.
The computer-readable storage medium may include various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
For the introduction of the computer-readable storage medium provided by the present invention, please refer to the above embodiments; details are not repeated here.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to each other. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and relevant details can be found in the description of the method.
Those skilled in the art may further appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The caching read method, system, and apparatus for distributed storage, and the computer-readable storage medium, provided by the present invention have been described above in detail. Specific examples are used herein to explain the principles and embodiments of the present invention; the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. It should be noted that those of ordinary skill in the art may make several improvements and modifications to the present invention without departing from the principles of the present invention, and these improvements and modifications also fall within the protection scope of the claims of the present invention.

Claims (10)

1. A caching read method for distributed storage, characterized by comprising:
when a read operation request from a client is received, judging whether to-be-operated data exists in a cache;
if the to-be-operated data does not exist in the cache, reading the to-be-operated data from a disk, and judging whether an access frequency of the to-be-operated data is higher than a preset access frequency;
if the access frequency of the to-be-operated data is higher than the preset access frequency, adding the to-be-operated data into the cache.
2. The caching read method according to claim 1, characterized in that, if the access frequency of the to-be-operated data is higher than the preset access frequency, adding the to-be-operated data into the cache comprises:
if the access frequency of the to-be-operated data is higher than the preset access frequency, judging whether the cache has a free block for storing the to-be-operated data;
if the cache has no free block, judging whether the access frequency of the to-be-operated data is higher than the lowest access frequency in the cache;
if it is higher than the lowest access frequency, deleting the data corresponding to the lowest access frequency;
adding the to-be-operated data into the cache.
3. The caching read method according to claim 1, characterized in that, if the access frequency of the to-be-operated data is higher than the preset access frequency, adding the to-be-operated data into the cache comprises:
if the access frequency of the to-be-operated data is higher than the preset access frequency, judging whether the cache has a free block for storing the to-be-operated data;
if the cache has no free block, screening out a block to be evicted, and deleting the data of the evicted block;
adding the to-be-operated data into the cache.
4. The caching read method according to claim 3, characterized in that screening out the block to be evicted comprises: screening out the block to be evicted by an LRU algorithm.
5. The caching read method according to claim 1, characterized by further comprising:
if the to-be-operated data exists in the cache, reading the to-be-operated data from the cache.
6. A caching read system for distributed storage, characterized by comprising:
a judgment module, configured to judge, when a read operation request from a client is received, whether to-be-operated data exists in a cache;
a temperature module, configured to count an access frequency of the to-be-operated data, judge whether the access frequency of the to-be-operated data is higher than a preset access frequency, and judge whether the access frequency of the to-be-operated data is higher than a lowest frequency;
a cache module, configured to add the to-be-operated data whose access frequency is higher than the preset access frequency into the cache.
7. The caching read system according to claim 6, characterized in that the judgment module comprises:
a first processing unit, configured to judge whether the to-be-operated data exists in the cache;
a second processing unit, configured to read the to-be-operated data from the cache if the to-be-operated data exists, and to perform the judgment of the access frequency of the to-be-operated data if the to-be-operated data does not exist.
8. The caching read system according to claim 6, characterized in that the temperature module comprises:
an access frequency judging unit, configured to judge whether the access frequency of the to-be-operated data is higher than the preset access frequency, and judge whether the access frequency of the to-be-operated data is higher than the lowest frequency;
a cache judging unit, configured to judge whether there is spare space in the cache;
a cache processing unit, configured to, if there is no spare space in the cache, select data to be evicted according to the access frequency calculation result, delete the evicted data using an elimination algorithm, and add the to-be-operated data into the cache.
9. A caching read apparatus for distributed storage, characterized by comprising:
a memory, configured to store a computer program;
a processor, configured to implement, when executing the computer program, the steps of the caching read method for distributed storage according to any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the caching read method for distributed storage according to any one of claims 1 to 5 are implemented.
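Claims 3 and 4 leave the choice of the block to be evicted to an elimination algorithm such as LRU. As a generic illustration of LRU screening (not the patent's implementation; all names here are assumptions), a minimal least-recently-used cache can be built on Python's `OrderedDict`:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: the least recently used block is the eviction candidate."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # iteration order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            # Screen out the superseded block: the first entry is the
            # least recently used one.
            self.data.popitem(last=False)
        self.data[key] = value
```

In the context of claim 3, `popitem(last=False)` plays the role of "screening out the block to be evicted and deleting its data" before the new data is added.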
CN201810825942.1A 2018-07-25 2018-07-25 A kind of the caching read method and its device of distributed storage Pending CN108984130A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810825942.1A CN108984130A (en) 2018-07-25 2018-07-25 A kind of the caching read method and its device of distributed storage

Publications (1)

Publication Number Publication Date
CN108984130A true CN108984130A (en) 2018-12-11

Family

ID=64550491

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810825942.1A Pending CN108984130A (en) 2018-07-25 2018-07-25 A kind of the caching read method and its device of distributed storage

Country Status (1)

Country Link
CN (1) CN108984130A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102298506A (en) * 2010-06-24 2011-12-28 国际商业机器公司 Storage system and method for implementing the same
CN102870100A (en) * 2012-06-30 2013-01-09 华为技术有限公司 Data buffer device, data storage system and method
US20180024877A1 (en) * 2016-07-22 2018-01-25 Pure Storage, Inc. Optimize data protection layouts based on distributed flash wear leveling
CN107632784A (en) * 2017-09-14 2018-01-26 郑州云海信息技术有限公司 The caching method of a kind of storage medium and distributed memory system, device and equipment
CN108183947A (en) * 2017-12-27 2018-06-19 深圳天源迪科信息技术股份有限公司 Distributed caching method and system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110389837A (en) * 2019-07-23 2019-10-29 中国工商银行股份有限公司 Data access method, device, equipment, storage medium and system
CN110389837B (en) * 2019-07-23 2023-04-18 中国工商银行股份有限公司 Data access method, device, equipment, storage medium and system
CN112417058A (en) * 2019-08-23 2021-02-26 华为技术有限公司 Data processing method, storage system and storage medium
CN110674169A (en) * 2019-08-30 2020-01-10 北京浪潮数据技术有限公司 Website database protection method and related device
CN110674169B (en) * 2019-08-30 2022-06-10 北京浪潮数据技术有限公司 Website database protection method and related device
CN110764708A (en) * 2019-10-25 2020-02-07 北京浪潮数据技术有限公司 Data reading method, device, equipment and storage medium
CN111026761A (en) * 2019-12-11 2020-04-17 上海鲸骞金融信息服务有限公司 Financial data storage system, processing method and device
CN111026761B (en) * 2019-12-11 2024-04-02 上海鲸骞金融信息服务有限公司 Financial data storage system, processing method and device
CN113220211A (en) * 2020-01-21 2021-08-06 上海商汤智能科技有限公司 Data storage system, data access method and related device
CN113495678A (en) * 2020-04-01 2021-10-12 荣耀终端有限公司 DM cache allocation method and device
CN114237518A (en) * 2022-02-22 2022-03-25 苏州浪潮智能科技有限公司 Data reading method, system, device and terminal
CN114237518B (en) * 2022-02-22 2022-05-24 苏州浪潮智能科技有限公司 Data reading method, system, device and terminal

Similar Documents

Publication Publication Date Title
CN108984130A (en) A kind of the caching read method and its device of distributed storage
CN103019962B (en) Data buffer storage disposal route, device and system
TWI684099B (en) Profiling cache replacement
KR100577384B1 (en) Method for page replacement using information on page
CN103098014B (en) Storage system
KR101361945B1 (en) Mapping of computer threads onto heterogeneous resources
CN108829344A (en) Date storage method, device and storage medium
JP4317531B2 (en) System and method for balancing multiple memory buffer sizes
CN111159436B (en) Method, device and computing equipment for recommending multimedia content
EP2843570B1 (en) File reading method, storage device and reading system
CN105677580A (en) Method and device for accessing cache
US6654855B1 (en) Method and apparatus for improving the efficiency of cache memories using chained metrics
CN105373487B (en) The scrap cleaning method and system of a kind of storage program area
CN104077242A (en) Cache management method and device
US10146783B2 (en) Using file element accesses to select file elements in a file system to defragment
CN105468541B (en) A kind of buffer memory management method towards lucidification disposal intelligent terminal
WO2021062982A1 (en) Method and apparatus for managing hmb memory, and computer device and storage medium
CN109086462A (en) The management method of metadata in a kind of distributed file system
CN112148736A (en) Method, device and storage medium for caching data
JP2017162194A (en) Data management program, data management device, and data management method
US9858204B2 (en) Cache device, cache system, and cache method
CN110658999B (en) Information updating method, device, equipment and computer readable storage medium
CN111522512B (en) Optimized cold and hot data separation method, device, computer equipment and storage medium
CN110825652B (en) Method, device and equipment for eliminating cache data on disk block
JP6112193B2 (en) Access control program, disk device, and access control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20181211)