CN105426126B - Construction method and device for a multi-channel constant-rate IO cache of a cloud storage client - Google Patents

Construction method and device for a multi-channel constant-rate IO cache of a cloud storage client

Info

Publication number
CN105426126B
CN105426126B (application CN201510766088.2A)
Authority
CN
China
Prior art keywords
file
cache
prestores
threshold
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510766088.2A
Other languages
Chinese (zh)
Other versions
CN105426126A (en)
Inventor
李杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CN201510766088.2A priority Critical patent/CN105426126B/en
Publication of CN105426126A publication Critical patent/CN105426126A/en
Application granted granted Critical
Publication of CN105426126B publication Critical patent/CN105426126B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 - Improving I/O performance
    • G06F3/0613 - Improving I/O performance in relation to throughput
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 - Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 - Data buffering arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the invention disclose a construction method and device for a multi-channel constant-rate IO cache of a cloud storage client, including: writing multiple files in parallel into corresponding target cache nodes; judging whether the cache usage of each file exceeds a first cache threshold; if so, determining the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and releasing a first pre-stored file in the first cache node according to a first predetermined replacement rule; judging whether the total cache usage of the multiple files exceeds a second cache threshold; and if so, releasing a second pre-stored file in a second cache node according to a second predetermined rule. It can be seen that, by writing multiple files in parallel into corresponding target cache nodes and replacing the pre-stored files in the cache nodes according to predetermined rules, the embodiments construct an efficient cache and can perform concurrency control and caching on the IO of each file.

Description

Construction method and device for a multi-channel constant-rate IO cache of a cloud storage client
Technical field
The present invention relates to the field of computer storage, and more specifically to a construction method and device for a multi-channel constant-rate IO cache of a cloud storage client.
Background technology
With the flourishing of the internet and the digitalization of society, the explosive growth of data poses a huge challenge to storage systems. Traditional direct-attached storage can hardly meet increasingly demanding data storage requirements, which has driven people to keep exploring new types of storage system; cloud storage technology came into being in this context. Cloud storage is a new concept extended and developed from the concept of cloud computing and an emerging network storage technology: through cluster applications, network technology, or functions such as distributed file systems, application software gathers a large number of storage devices of various types in a network to work collaboratively, jointly providing data storage and business access services to the outside. At present, the mainstream storage vendors have all invested enormous manpower and material resources in this field.
As the main software approach to solving IO bottleneck problems, caching technology has always been a research hotspot in storage systems. By temporarily storing data in relatively fast memory, caching can speed up subsequent repeated accesses. Because the capacity of a cache is generally much smaller than that of the next-level storage on the IO path, once the cache is full, appropriate data must be evicted at an appropriate time to reclaim cache space for new data. The basis of this selection, the cache replacement policy, is a key factor affecting cache performance. However, streaming media, video surveillance, and broadcast television often require a huge number of concurrent IO streams, with each IO stream needing a constant bit rate, and traditional cloud storage systems cannot support this special demand.
Therefore, how to build an efficient cache, so as to perform concurrency control and caching on the IO of each file, is a problem that now needs to be solved.
Summary of the invention
An object of the present invention is to provide a construction method and device for a multi-channel constant-rate IO cache of a cloud storage client, so as to build an efficient cache and thereby perform concurrency control and caching on the IO of each file.
To achieve the above object, the embodiments of the present invention provide the following technical solutions:
A construction method for a multi-channel constant-rate IO cache of a cloud storage client, including:
writing multiple files in parallel into corresponding target cache nodes;
judging whether the cache usage of each file exceeds a first cache threshold, where the first cache threshold is a dynamically obtained cache threshold of each node;
if so, determining the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and releasing a first pre-stored file in the first cache node according to a first predetermined replacement rule;
judging whether the total cache usage of the multiple files exceeds a second cache threshold, where the second cache threshold is the total cache threshold of the cloud storage client;
if so, releasing a second pre-stored file in a second cache node according to a second predetermined rule.
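A minimal, hedged sketch of the two-level threshold check listed above: when a single file's cache exceeds the first threshold a per-file replacement fires, and when the client's total cache exceeds the second threshold a client-wide replacement fires. All class names, the callback shape, and the concrete threshold values are illustrative assumptions, not taken from the patent.

```python
PER_FILE_THRESHOLD = 4    # first cache threshold, per file (illustrative, in blocks)
TOTAL_THRESHOLD = 10      # second cache threshold for the whole client

class Client:
    """Tracks per-file cached block counts; eviction policies are injected."""
    def __init__(self, evict_from_file, evict_global):
        self.per_file = {}
        self.evict_from_file = evict_from_file
        self.evict_global = evict_global

    def total(self):
        return sum(self.per_file.values())

    def write_block(self, file_id):
        # write the block into the file's target cache node
        self.per_file[file_id] = self.per_file.get(file_id, 0) + 1
        # first check: this file's cache exceeds the first threshold ->
        # replace a pre-stored file inside that file's cache node
        if self.per_file[file_id] > PER_FILE_THRESHOLD:
            self.evict_from_file(file_id)
        # second check: total client cache exceeds the second threshold ->
        # replace a pre-stored file chosen across all cache nodes
        if self.total() > TOTAL_THRESHOLD:
            self.evict_global()

evictions = []

def evict_from_file(fid):
    client.per_file[fid] -= 1             # free one block of that file
    evictions.append(("file", fid))

def evict_global():
    evictions.append(("global", None))    # placeholder for the client-wide rule

client = Client(evict_from_file, evict_global)
for _ in range(6):                        # six block writes arriving on one stream
    client.write_block("stream-1")
```

The point of the two separate checks is that a single runaway stream is trimmed locally before it forces evictions on the other streams' cache nodes.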
Preferably, determining the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and releasing the first pre-stored file in the first cache node according to the first predetermined replacement rule, includes:
according to the LRU algorithm, determining that the pre-stored file at the tail of the doubly-linked circular list of the first cache node is the first pre-stored file, and releasing the data blocks of the first pre-stored file; and/or
calculating the retention degree of the pre-stored files in the first cache node, determining that the pre-stored file with the lowest retention degree is the first pre-stored file, and releasing the data blocks of the first pre-stored file.
Preferably, releasing the second pre-stored file in the second cache node according to the second predetermined rule includes:
according to the LRU algorithm, determining that the pre-stored file at the tail of the doubly-linked circular list over all cache nodes is the second pre-stored file, and releasing the data blocks of the second pre-stored file; and/or
calculating the retention degree of the pre-stored files in all cache nodes, determining that the pre-stored file with the lowest retention degree is the second pre-stored file, and releasing the data blocks of the second pre-stored file.
Preferably, the method for calculating the retention degree includes:
calculating a first parameter according to the last access time and access frequency of a pre-stored file;
calculating a second parameter according to the memory occupied by the pre-stored file, where the second parameter is inversely proportional to the memory occupied by the pre-stored file;
calculating the retention degree according to the first parameter and the second parameter.
A construction device for a multi-channel constant-rate IO cache of a cloud storage client, including:
a writing module, configured to write multiple files in parallel into corresponding target cache nodes;
a first judgment module, configured to judge whether the cache usage of each file exceeds a first cache threshold, where the first cache threshold is a dynamically obtained cache threshold of each node;
if so, triggering a first release module;
the first release module, configured to determine the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and to release a first pre-stored file in the first cache node according to a first predetermined replacement rule;
a second judgment module, configured to judge whether the total cache usage of the multiple files exceeds a second cache threshold, where the second cache threshold is the total cache threshold of the cloud storage client;
if so, triggering a second release module;
the second release module, configured to release a second pre-stored file in a second cache node according to a second predetermined rule.
Preferably, the first release module includes:
a first releasing unit, configured to determine, according to the LRU algorithm, that the pre-stored file at the tail of the doubly-linked circular list of the first cache node is the first pre-stored file, and to release the data blocks of the first pre-stored file;
a second releasing unit, configured to calculate the retention degree of the pre-stored files in the first cache node, determine that the pre-stored file with the lowest retention degree is the first pre-stored file, and release the data blocks of the first pre-stored file.
Preferably, the second release module includes:
a third releasing unit, configured to determine, according to the LRU algorithm, that the pre-stored file at the tail of the doubly-linked circular list over all cache nodes is the second pre-stored file, and to release the data blocks of the second pre-stored file;
a fourth releasing unit, configured to calculate the retention degree of the pre-stored files in all cache nodes, determine that the pre-stored file with the lowest retention degree is the second pre-stored file, and release the data blocks of the second pre-stored file.
Preferably, the construction device includes a retention degree computing module, where the retention degree computing module includes:
a first computing unit, configured to calculate a first parameter according to the last access time and access frequency of a pre-stored file;
a second computing unit, configured to calculate a second parameter according to the memory occupied by the pre-stored file, where the second parameter is inversely proportional to the memory occupied by the pre-stored file;
a third computing unit, configured to calculate the retention degree according to the first parameter and the second parameter.
Through the above solutions, an embodiment of the present invention provides a construction method and device for a multi-channel constant-rate IO cache of a cloud storage client, including: writing multiple files in parallel into corresponding target cache nodes; judging whether the cache usage of each file exceeds a first cache threshold, where the first cache threshold is a dynamically obtained cache threshold of each node; if so, determining the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and releasing a first pre-stored file in the first cache node according to a first predetermined replacement rule; judging whether the total cache usage of the multiple files exceeds a second cache threshold, where the second cache threshold is the total cache threshold of the cloud storage client; and if so, releasing a second pre-stored file in a second cache node according to a second predetermined rule. It can be seen that, by writing multiple files in parallel into corresponding target cache nodes and replacing the pre-stored files in the cache nodes according to predetermined rules, this embodiment constructs an efficient cache and can perform concurrency control and caching on the IO of each file.
Description of the drawings
To illustrate the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a construction method for a multi-channel constant-rate IO cache of a cloud storage client disclosed by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the doubly-linked circular list of a cache node disclosed by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the doubly-linked circular lists of the cache nodes in a client disclosed by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a construction device for a multi-channel constant-rate IO cache of a cloud storage client disclosed by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The embodiments of the present invention disclose a construction method and device for a multi-channel constant-rate IO cache of a cloud storage client, so as to build an efficient cache and thereby perform concurrency control and caching on the IO of each file.
Referring to Fig. 1, an embodiment of the present invention provides a construction method for a multi-channel constant-rate IO cache of a cloud storage client, including:
S101: writing multiple files in parallel into corresponding target cache nodes;
S102: judging whether the cache usage of each file exceeds a first cache threshold, where the first cache threshold is a dynamically obtained cache threshold of each node;
Specifically, in this embodiment, when multiple file streams are being written simultaneously, each stream is allocated a cache of fixed size; when the cache occupied by a single file exceeds the first cache threshold, cache replacement is triggered for that file.
If so, performing S103: determining the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and releasing a first pre-stored file in the first cache node according to a first predetermined replacement rule;
Preferably, determining the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and releasing the first pre-stored file in the first cache node according to the first predetermined replacement rule, includes:
according to the LRU algorithm, determining that the pre-stored file at the tail of the doubly-linked circular list of the first cache node is the first pre-stored file, and releasing the data blocks of the first pre-stored file; and/or
calculating the retention degree of the pre-stored files in the first cache node, determining that the pre-stored file with the lowest retention degree is the first pre-stored file, and releasing the data blocks of the first pre-stored file.
Specifically, when the file cache is constructed in this embodiment, a file is mapped, in the manner of object storage, to the block data structure of the back end; the cached data blocks are organized in a doubly-linked circular list, and previously accessed blocks are replaced following the idea of the LRU algorithm. It should be noted that all cache blocks of the client are also hung on a single doubly-linked circular list. If an access to some block misses, the block is loaded from the back end and inserted at the head of the LRU list, i.e. into the doubly-linked circular list; if it hits, the block is moved to the head. Each block contains pointers into the circular LRU list it belongs to, and all block movements are pointer operations, so no auxiliary data structure is required and the adjustment can be completed by the operated node alone. Fig. 2 is a schematic diagram of the doubly-linked circular list of a cache node provided by this embodiment; Fig. 3 is a schematic diagram of the doubly-linked circular lists of the cache nodes in the client provided by this embodiment.
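A minimal sketch of the block organization described above, assuming a sentinel-based circular doubly-linked list (class and field names are illustrative, not from the patent): a hit moves the block to the head, a miss loads it and inserts it at the head, and when the cache is full the tail block is evicted.

```python
class Block:
    """A cached data block; prev/next pointers place it on a circular list."""
    def __init__(self, key):
        self.key = key
        self.prev = self.next = self

class BlockLRU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.head = Block(None)   # sentinel: head.next is MRU, head.prev is the LRU tail
        self.index = {}           # key -> Block, for O(1) hit lookup

    def _unlink(self, b):
        b.prev.next = b.next
        b.next.prev = b.prev

    def _push_front(self, b):
        b.next = self.head.next
        b.prev = self.head
        self.head.next.prev = b
        self.head.next = b

    def access(self, key):
        b = self.index.get(key)
        if b is not None:                 # hit: pointer moves only, no extra structure
            self._unlink(b)
            self._push_front(b)
            return b
        if len(self.index) >= self.capacity:
            tail = self.head.prev         # least recently used block
            self._unlink(tail)
            del self.index[tail.key]
        b = Block(key)                    # miss: "load from the back end"
        self._push_front(b)
        self.index[key] = b
        return b

cache = BlockLRU(capacity=2)
for key in ["b1", "b2", "b1", "b3"]:      # "b1" hits once; "b3" evicts the tail "b2"
    cache.access(key)
```

Because every block carries its own prev/next pointers, moving it to the head on a hit is a constant-time local operation, matching the text's claim that no additional secondary data structure is needed.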
In this embodiment, which file is to be replaced can be determined by the LRU algorithm and by the retention degree.
LRU (Least Recently Used) is a page replacement algorithm used in memory management: data blocks (memory blocks) that are in memory but have not been used recently are called LRU, and the operating system can move out the data belonging to LRU to free space for loading other data. When the LRU algorithm decides which file is replaced, the data blocks at the tail of the doubly-linked circular list, i.e. those with low access frequency, are therefore released.
In general, the access frequency of a file has no obvious correlation with its size, but the same memory capacity can hold more small files than large files; that is, placing small files in the same cache yields a higher hit rate. We therefore give small files a higher priority, making the access hit rate higher, so the retention degree should be inversely proportional to the size of the file.
Preferably, the method for calculating the retention degree includes:
calculating a first parameter according to the last access time and access frequency of a pre-stored file;
calculating a second parameter according to the memory occupied by the pre-stored file, where the second parameter is inversely proportional to the memory occupied by the pre-stored file;
calculating the retention degree according to the first parameter and the second parameter.
Specifically, in this embodiment, the calculation formula of the retention degree is expressed as:
S = g(FAGE) * u(FLEN);
where S denotes the retention degree of each file;
g(FAGE) denotes a coefficient obtained from the file's last access time and its access frequency over a period of time, i.e. the first parameter in this embodiment. It is computed as follows: each file is associated with a 64-bit long integer (denoted K); at every specified period (configurable), whether the file was accessed during that period is recorded (flg = 1 if it was accessed, otherwise flg = 0), the K value of the file is shifted right by one bit, and flg is OR-ed into the highest bit. The calculation of the first parameter in this embodiment can thus be expressed as: g(FAGE) = (K >> 1) | (flg << 63);
u(FLEN) denotes a coefficient calculated from the file size, i.e. the second parameter in this embodiment. The coefficient is inversely proportional to the file size; the simplest form is u(FLEN) = 1/FLEN, in which case the calculation formula of the retention degree is S = g(FAGE)/FLEN.
However, comparing a 1 KB file with a 1 MB file, 1 KB is then equivalent to shifting g(FAGE) right by 10 bits and 1 MB to shifting it right by 20 bits; the resulting aging speed is excessive, and the 10-bit gap between 1 KB and 1 MB is too large. u(FLEN) is therefore redefined as a more slowly decaying function of the file size, so that 1 KB is equivalent to a right shift of 4 bits, 1 MB to a right shift of 7 bits, and 1 GB to a right shift of 9 bits, which is appropriate both within an order of magnitude and across orders of magnitude; the retention degree in this embodiment is calculated with this redefined u(FLEN).
The retention degree of each pre-stored file is calculated according to this method, and the pre-stored file with the lowest retention degree can then be selected for release.
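A rough sketch of the retention-degree bookkeeping described above. It uses the simple variant u(FLEN) = 1/FLEN stated in the text; the redefined, slower-decaying u(FLEN) is not modeled, and all names (age_step, retention, the period count) are illustrative assumptions.

```python
AGE_BITS = 64

def age_step(k, accessed):
    """One aging period for g(FAGE): shift the 64-bit access history right
    by one bit and OR the access flag (1 if the file was touched in this
    period) into the highest bit."""
    flg = 1 if accessed else 0
    return ((k >> 1) | (flg << (AGE_BITS - 1))) & ((1 << AGE_BITS) - 1)

def retention(k, flen):
    """Retention degree S = g(FAGE) * u(FLEN), with u(FLEN) = 1/FLEN."""
    return k / flen

# a file touched every period vs. a file touched once, eight periods ago
k_hot = k_cold = 0
for period in range(8):
    k_hot = age_step(k_hot, accessed=True)
    k_cold = age_step(k_cold, accessed=(period == 0))
```

Under this scheme a recently and frequently accessed file keeps high bits set in K, and dividing by the file size gives small files the higher priority that the text motivates.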
S104: judging whether the total cache usage of the multiple files exceeds a second cache threshold, where the second cache threshold is the total cache threshold of the cloud storage client;
Specifically, in this embodiment an upper limit, namely the second cache threshold, can also be set on the cache capacity of the entire client; when the cache capacity of the entire client exceeds the second cache threshold, file replacement is likewise triggered.
If so, performing S105: releasing a second pre-stored file in a second cache node according to a second predetermined rule.
Preferably, releasing the second pre-stored file in the second cache node according to the second predetermined rule includes:
according to the LRU algorithm, determining that the pre-stored file at the tail of the doubly-linked circular list over all cache nodes is the second pre-stored file, and releasing the data blocks of the second pre-stored file; and/or
calculating the retention degree of the pre-stored files in all cache nodes, determining that the pre-stored file with the lowest retention degree is the second pre-stored file, and releasing the data blocks of the second pre-stored file.
Specifically, in this embodiment, if the total cache usage of the multiple cached files exceeds the second cache threshold, a second pre-stored file is released according to the second predetermined rule: for example, the corresponding pre-stored file is released by the LRU algorithm, or the retention degrees of the pre-stored files are calculated and the one with the lowest retention degree is released. How to release a pre-stored file according to the LRU algorithm or the retention degree has been described above and is not repeated here.
From the above, the client's replacement rules for pre-stored data can be summarized as follows:
1. If a single file reaches its cache upper-limit threshold, the block pointed to by the tail of that file's doubly-linked circular list is released;
2. If the cache capacity of the entire client exceeds its upper-limit threshold, a file whose reference count is 0 is sought, and all data blocks of the unreferenced file with the lowest retention degree are released;
3. If the client cache is full and all files are referenced, the tail node of the LRU list of the entire client cache is released.
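The three rules above can be read as a selection cascade. The sketch below is an illustrative assumption of how such a cascade might look; the dict-based file-entry shape, field names, and return tags are invented for the example (the real structures are the per-file and client-wide circular LRU lists described earlier).

```python
def choose_victim(files, file_over_limit=None, client_full=False):
    """files: per-file entries with 'tail_block', 'ref_count', and
    'retention' fields (illustrative shape)."""
    # rule 1: one file exceeded its own cache limit -> free the block at
    # the tail of that file's doubly-linked circular list
    if file_over_limit is not None:
        return ("file_tail_block", file_over_limit["tail_block"])
    # rule 2: the whole client is over its limit -> free all data blocks
    # of an unreferenced (ref_count == 0) file with the lowest retention
    unreferenced = [f for f in files if f["ref_count"] == 0]
    if unreferenced:
        return ("whole_file", min(unreferenced, key=lambda f: f["retention"]))
    # rule 3: cache full and every file still referenced -> free the tail
    # node of the client-wide LRU list
    if client_full:
        return ("client_lru_tail", None)
    return None

# example state: two pre-stored files, only the second one unreferenced
files = [
    {"name": "f1", "tail_block": "blkA", "ref_count": 1, "retention": 5.0},
    {"name": "f2", "tail_block": "blkB", "ref_count": 0, "retention": 1.0},
]
```

Note how the cascade prefers the cheapest local action (one block of one file) and only escalates to whole-file and then client-wide eviction when the broader limits are hit.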
It should be noted that this embodiment builds the cache structure along two dimensions, the blocks within a file and the files themselves (note: two structures are built over a single copy of data, not two copies; all data are data blocks, and there is only one copy of them), and the insertion, deletion, and adjustment of all cache nodes are constant-time operations.
An embodiment of the present invention provides a construction method for a multi-channel constant-rate IO cache of a cloud storage client, including: writing multiple files in parallel into corresponding target cache nodes; judging whether the cache usage of each file exceeds a first cache threshold, where the first cache threshold is a dynamically obtained cache threshold of each node; if so, determining the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and releasing a first pre-stored file in the first cache node according to a first predetermined replacement rule; judging whether the total cache usage of the multiple files exceeds a second cache threshold, where the second cache threshold is the total cache threshold of the cloud storage client; and if so, releasing a second pre-stored file in a second cache node according to a second predetermined rule. It can be seen that, by writing multiple files in parallel into corresponding target cache nodes and replacing the pre-stored files in the cache nodes according to predetermined rules, this embodiment constructs an efficient cache and can perform concurrency control and caching on the IO of each file.
The construction device for a multi-channel constant-rate IO cache of a cloud storage client provided by the embodiments of the present invention is introduced below; the construction device described below and the construction method described above may be referred to each other.
Referring to Fig. 4, an embodiment of the present invention provides a construction device for a multi-channel constant-rate IO cache of a cloud storage client, including:
a writing module 100, configured to write multiple files in parallel into corresponding target cache nodes;
a first judgment module 200, configured to judge whether the cache usage of each file exceeds a first cache threshold, where the first cache threshold is a dynamically obtained cache threshold of each node;
if so, triggering a first release module 300;
the first release module 300, configured to determine the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and to release a first pre-stored file in the first cache node according to a first predetermined replacement rule;
a second judgment module 400, configured to judge whether the total cache usage of the multiple files exceeds a second cache threshold, where the second cache threshold is the total cache threshold of the cloud storage client;
if so, triggering a second release module 500;
the second release module 500, configured to release a second pre-stored file in a second cache node according to a second predetermined rule.
An embodiment of the present invention provides a construction device for a multi-channel constant-rate IO cache of a cloud storage client, including: a writing module 100, configured to write multiple files in parallel into corresponding target cache nodes; a first judgment module 200, configured to judge whether the cache usage of each file exceeds a first cache threshold, where the first cache threshold is a dynamically obtained cache threshold of each node, and if so, to trigger a first release module 300; the first release module 300, configured to determine the first cache node corresponding to the file whose cache usage exceeds the first cache threshold, and to release a first pre-stored file in the first cache node according to a first predetermined replacement rule; a second judgment module 400, configured to judge whether the total cache usage of the multiple files exceeds a second cache threshold, where the second cache threshold is the total cache threshold of the cloud storage client, and if so, to trigger a second release module 500; and the second release module 500, configured to release a second pre-stored file in a second cache node according to a second predetermined rule. It can be seen that, by writing multiple files in parallel into corresponding target cache nodes and replacing the pre-stored files in the cache nodes according to predetermined rules, this embodiment constructs an efficient cache and can perform concurrency control and caching on the IO of each file.
Preferably, first release module 300, including:
First releasing unit 310, for according to lru algorithm, in the double-linked circular list for determining first cache node The file that prestores of afterbody prestores file for described first, and discharges the described first data block for prestoring file;
Second releasing unit 320 for calculating the reservation degree for the file that prestores in first cache node, determines reservation degree The minimum file that prestores prestores file for described first, and discharges the described first data block for prestoring file.
Preferably, the second release module 500 includes:
a third releasing unit 510, configured to determine, according to the LRU algorithm, that the pre-stored file at the tail of the doubly-linked circular list across all cache nodes is the second pre-stored file, and to release the data blocks of the second pre-stored file; and
a fourth releasing unit 520, configured to calculate the reservation degree of each pre-stored file in all cache nodes, determine that the pre-stored file with the lowest reservation degree is the second pre-stored file, and release the data blocks of the second pre-stored file.
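The LRU-tail selection used by the first and third releasing units can be sketched with an ordered mapping standing in for the doubly-linked circular list (a minimal illustration under that assumption; the class and method names are hypothetical):

```python
from collections import OrderedDict

class NodeLRU:
    """Stand-in for a cache node's doubly-linked circular list:
    most recently accessed file at the head, LRU victim at the tail."""

    def __init__(self):
        self.files = OrderedDict()           # file_id -> list of data block ids

    def touch(self, file_id, blocks=None):
        # On access (or first write), move the file to the head of the list.
        if file_id not in self.files:
            self.files[file_id] = blocks or []
        self.files.move_to_end(file_id, last=False)

    def release_tail(self):
        # The tail of the list is the least recently used pre-stored file;
        # releasing it frees that file's data blocks.
        file_id, blocks = self.files.popitem(last=True)
        blocks.clear()                       # release the file's data blocks
        return file_id
```

An `OrderedDict` gives the same O(1) head-insert and tail-remove behavior as the doubly-linked circular list the patent describes, which is why it serves as a convenient stand-in here.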
Preferably, the construction device includes a reservation degree computing module, wherein the reservation degree computing module includes:
a first computing unit, configured to calculate a first parameter according to the last access time and access frequency of a pre-stored file;
a second computing unit, configured to calculate a second parameter according to the memory occupied by the pre-stored file, wherein the second parameter is inversely proportional to the memory occupied by the pre-stored file; and
a third computing unit, configured to calculate the reservation degree according to the first parameter and the second parameter.
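The patent specifies only the inputs of the reservation degree (recency and frequency for the first parameter; occupied memory, inversely, for the second), not a closed-form formula. The following is therefore only one plausible reading, with hypothetical weights and names:

```python
import time

def reservation_degree(last_access, access_count, mem_bytes,
                       now=None, w1=1.0, w2=1.0):
    """Hypothetical formula consistent with the text: the degree is higher
    for recently and frequently accessed files (first parameter) and lower
    for files occupying more memory (second parameter)."""
    now = time.time() if now is None else now
    age = max(now - last_access, 1e-9)   # seconds since last access
    p1 = access_count / age              # first parameter: frequency over recency
    p2 = 1.0 / max(mem_bytes, 1)         # second parameter: inverse of memory used
    return w1 * p1 + w2 * p2

def pick_victim(files, now):
    """The pre-stored file with the LOWEST reservation degree is released.
    files: {file_id: (last_access, access_count, mem_bytes)}"""
    return min(files, key=lambda f: reservation_degree(*files[f], now=now))
```

Under this reading, a large file that has gone cold is evicted before a small, hot one, which matches the stated intent of keeping frequently used, low-footprint files cached.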
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A construction method for cloud storage client multichannel constant-rate IO caching, characterized by comprising:
writing multiple files in parallel into corresponding target cache nodes;
judging whether the cache size of each file exceeds a first cache threshold, wherein the first cache threshold is a dynamically obtained cache threshold of each node;
if so, determining the first cache node corresponding to the file whose cache size exceeds the first cache threshold, and releasing a first pre-stored file in the first cache node according to a first predetermined substitution rule;
judging whether the total cache size of the multiple files exceeds a second cache threshold, wherein the second cache threshold is a total cache threshold of the cloud storage client; and
if so, releasing a second pre-stored file in a second cache node according to a second predetermined rule.
2. The construction method according to claim 1, characterized in that determining the first cache node corresponding to the file whose cache size exceeds the first cache threshold, and releasing the first pre-stored file in the first cache node according to the first predetermined substitution rule, comprises:
determining, according to the LRU algorithm, that the pre-stored file at the tail of the doubly-linked circular list of the first cache node is the first pre-stored file, and releasing the data blocks of the first pre-stored file; and/or
calculating the reservation degree of each pre-stored file in the first cache node, determining that the pre-stored file with the lowest reservation degree is the first pre-stored file, and releasing the data blocks of the first pre-stored file.
3. The construction method according to claim 1, characterized in that releasing the second pre-stored file in the second cache node according to the second predetermined rule comprises:
determining, according to the LRU algorithm, that the pre-stored file at the tail of the doubly-linked circular list across all cache nodes is the second pre-stored file, and releasing the data blocks of the second pre-stored file; and/or
calculating the reservation degree of each pre-stored file in all cache nodes, determining that the pre-stored file with the lowest reservation degree is the second pre-stored file, and releasing the data blocks of the second pre-stored file.
4. The construction method according to any one of claims 2-3, characterized in that the computation of the reservation degree comprises:
calculating a first parameter according to the last access time and access frequency of a pre-stored file;
calculating a second parameter according to the memory occupied by the pre-stored file, wherein the second parameter is inversely proportional to the memory occupied by the pre-stored file; and
calculating the reservation degree according to the first parameter and the second parameter.
5. A construction device for cloud storage client multichannel constant-rate IO caching, characterized by comprising:
a writing module, configured to write multiple files in parallel into corresponding target cache nodes;
a first judgment module, configured to judge whether the cache size of each file exceeds a first cache threshold, wherein the first cache threshold is a dynamically obtained cache threshold of each node;
if so, triggering a first release module;
the first release module, configured to determine the first cache node corresponding to the file whose cache size exceeds the first cache threshold, and to release a first pre-stored file in the first cache node according to a first predetermined substitution rule;
a second judgment module, configured to judge whether the total cache size of the multiple files exceeds a second cache threshold, wherein the second cache threshold is a total cache threshold of the cloud storage client;
if so, triggering a second release module; and
the second release module, configured to release a second pre-stored file in a second cache node according to a second predetermined rule.
6. The construction device according to claim 5, characterized in that the first release module comprises:
a first releasing unit, configured to determine, according to the LRU algorithm, that the pre-stored file at the tail of the doubly-linked circular list of the first cache node is the first pre-stored file, and to release the data blocks of the first pre-stored file; and
a second releasing unit, configured to calculate the reservation degree of each pre-stored file in the first cache node, determine that the pre-stored file with the lowest reservation degree is the first pre-stored file, and release the data blocks of the first pre-stored file.
7. The construction device according to claim 5, characterized in that the second release module comprises:
a third releasing unit, configured to determine, according to the LRU algorithm, that the pre-stored file at the tail of the doubly-linked circular list across all cache nodes is the second pre-stored file, and to release the data blocks of the second pre-stored file; and
a fourth releasing unit, configured to calculate the reservation degree of each pre-stored file in all cache nodes, determine that the pre-stored file with the lowest reservation degree is the second pre-stored file, and release the data blocks of the second pre-stored file.
8. The construction device according to any one of claims 6-7, characterized in that the construction device comprises a reservation degree computing module, wherein the reservation degree computing module comprises:
a first computing unit, configured to calculate a first parameter according to the last access time and access frequency of a pre-stored file;
a second computing unit, configured to calculate a second parameter according to the memory occupied by the pre-stored file, wherein the second parameter is inversely proportional to the memory occupied by the pre-stored file; and
a third computing unit, configured to calculate the reservation degree according to the first parameter and the second parameter.
CN201510766088.2A 2015-11-11 2015-11-11 The construction method and device of cloud storage client multichannel constant rate of speed IO cachings Active CN105426126B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510766088.2A CN105426126B (en) 2015-11-11 2015-11-11 The construction method and device of cloud storage client multichannel constant rate of speed IO cachings

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510766088.2A CN105426126B (en) 2015-11-11 2015-11-11 The construction method and device of cloud storage client multichannel constant rate of speed IO cachings

Publications (2)

Publication Number Publication Date
CN105426126A CN105426126A (en) 2016-03-23
CN105426126B true CN105426126B (en) 2018-06-05

Family

ID=55504359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510766088.2A Active CN105426126B (en) 2015-11-11 2015-11-11 The construction method and device of cloud storage client multichannel constant rate of speed IO cachings

Country Status (1)

Country Link
CN (1) CN105426126B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106888262A (en) * 2017-02-28 2017-06-23 北京邮电大学 A kind of buffer replacing method and device
CN110602238B (en) * 2019-09-23 2021-09-24 Oppo广东移动通信有限公司 TCP session management method and related product
CN113391766A (en) * 2021-06-28 2021-09-14 苏州浪潮智能科技有限公司 Method, device, equipment and medium for eliminating cache pages

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103353892A (en) * 2013-07-05 2013-10-16 北京东方网信科技股份有限公司 Method and system for data cleaning suitable for mass storage
CN103714016A (en) * 2014-01-14 2014-04-09 贝壳网际(北京)安全技术有限公司 Cache cleaning method, device and client side
CN103761275A (en) * 2014-01-09 2014-04-30 浪潮电子信息产业股份有限公司 Management method for metadata in distributed file system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103353892A (en) * 2013-07-05 2013-10-16 北京东方网信科技股份有限公司 Method and system for data cleaning suitable for mass storage
CN103761275A (en) * 2014-01-09 2014-04-30 浪潮电子信息产业股份有限公司 Management method for metadata in distributed file system
CN103714016A (en) * 2014-01-14 2014-04-09 贝壳网际(北京)安全技术有限公司 Cache cleaning method, device and client side

Also Published As

Publication number Publication date
CN105426126A (en) 2016-03-23

Similar Documents

Publication Publication Date Title
US9396030B2 (en) Quota-based adaptive resource balancing in a scalable heap allocator for multithreaded applications
US10198363B2 (en) Reducing data I/O using in-memory data structures
EP3089039B1 (en) Cache management method and device
US8935478B2 (en) Variable cache line size management
US9348752B1 (en) Cached data replication for cache recovery
US8443149B2 (en) Evicting data from a cache via a batch file
CN102819586B (en) A kind of URL sorting technique based on high-speed cache and equipment
US9501419B2 (en) Apparatus, systems, and methods for providing a memory efficient cache
GB2509755A (en) Partitioning a shared cache using masks associated with threads to avoiding thrashing
CN105426126B (en) The construction method and device of cloud storage client multichannel constant rate of speed IO cachings
CN105339885B (en) The small efficient storage changed at random of data on disk
CN109684231A (en) The system and method for dsc data in solid-state disk and stream for identification
CN110147331A (en) Caching data processing method, system and readable storage medium storing program for executing
CN109582213A (en) Data reconstruction method and device, data-storage system
CN103778069B (en) The cacheline length regulating method and device of cache memory
US9699254B2 (en) Computer system, cache management method, and computer
US20130086325A1 (en) Dynamic cache system and method of formation
CN113138851B (en) Data management method, related device and system
US20210141738A1 (en) Method and Apparatus for Characterizing Workload Sequentiality for Cache Policy Optimization
Wang et al. Hash table design and optimization for software virtual switches
US20070101064A1 (en) Cache controller and method
Ghandeharizadeh et al. Cache replacement with memory allocation
JP6112193B2 (en) Access control program, disk device, and access control method
Owda et al. A comparison of page replacement algorithms in Linux memory management
Gokhale et al. {KVZone} and the Search for a {Write-Optimized}{Key-Value} Store

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant