CN105573669A - IO read-acceleration cache method and system for a storage system - Google Patents

IO read-acceleration cache method and system for a storage system

Info

Publication number
CN105573669A
CN105573669A (application CN201510922595.0A)
Authority
CN
China
Prior art keywords
data
solid-state disk
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510922595.0A
Other languages
Chinese (zh)
Inventor
方钰翔 (Fang Yuxiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eisoo Information Technology Co Ltd
Original Assignee
Shanghai Eisoo Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eisoo Information Technology Co Ltd filed Critical Shanghai Eisoo Information Technology Co Ltd
Priority to CN201510922595.0A priority Critical patent/CN105573669A/en
Publication of CN105573669A publication Critical patent/CN105573669A/en
Pending legal-status Critical Current

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/061 — Improving I/O performance
    • G06F3/0611 — Improving I/O performance in relation to response time
    • G06F3/0656 — Data buffering arrangements
    • G06F3/0679 — Non-volatile semiconductor memory device, e.g. flash memory, one-time programmable memory [OTP]
    • G06F12/0804 — Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with main memory updating
    • G06F12/0811 — Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention provides an IO read-acceleration cache method and system for a storage system. The method comprises the following steps: searching the memory for the required data; if the data is found in the memory, reading it and returning; if not, searching the solid-state disk; if the data is found in the solid-state disk, loading it into the memory and deleting it from the solid-state disk; and if the data is not found in the solid-state disk, reading it from the magnetic disk and loading it into the memory. The method and system avoid the data-loss defect of page-cache write operations while fully exploiting the high-speed access performance of the memory. The memory cache and the solid-state-disk cache are mutually exclusive tiers: a data block never resides in both at the same time, so no space is wasted on duplicates. Data evicted from the memory is written to the solid-state disk, so the hottest data stays in memory while warm data stays on the solid-state disk, avoiding the situation in which warm data, once evicted from memory, misses on its next access.

Description

IO read-acceleration cache method and system for a storage system
Technical field
The invention belongs to the technical field of computer storage, and specifically relates to an IO read-acceleration cache method and system for a storage system.
Background art
A cache is a temporary storage area established to allow fast access to data. When we browse a web page, the page files are stored in a cache directory under the browser directory; when the page is visited again, the browser fetches the files from the local cache instead of from the remote server, shortening the access time and lightening the network load. Corresponding to the cache is the primary store holding the original data: in the web example, the local directory is the cache and the remote server is the primary store. Caches are used widely in computer systems: a web browser caches pages locally; a database server caches in memory the information needed for lookups and updates; an operating system caches recently accessed file data in memory; a CPU caches recently accessed memory data in its L1/L2 caches. These various caching techniques improve overall system performance. Data enters a cache through read or write operations: a read operation writes the data read from the primary store into the cache for subsequent accesses, while a write operation may write new data into the cache or update data already cached.
Within a storage system, the mechanical hard disk, being cheap, remains the most widely used data storage medium, but its access speed is comparatively slow and has improved little over the years, making it the key factor limiting IO performance. Against this background, various IO caching techniques have emerged.
At present, the caching mechanism in the standard kernel stack is the page cache, which provides caching for files (including device files). The page cache is essentially a write-back (writeback) cache; because it has no persistence mechanism, it may lose data.
When a cache suite is used, the application must bypass the page cache with direct IO (DirectIO), and IO requests are intercepted and processed by the cache suite before they reach the target disk. Clearly, write operations cannot use memory as a cache, or the same problem as with the page cache would arise; read operations, however, can. Yet the existing cache suites have abandoned the use of memory entirely. Although a solid-state disk has good read performance, for hot data its access speed is still well below that of memory.
Whether in a read or a write scenario, the cache device is of limited size, its storage space generally being much smaller than that of the main device. How to make reasonable use of memory within a cache suite to improve read performance is therefore a problem well worth studying.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the present invention is to provide an IO read-acceleration cache method and system for a storage system, to solve the prior-art problems of limited cache device size, easy data loss, unreasonable use of memory, slow caching, and low read performance.
To achieve the above object, the present invention adopts the following technical scheme: an IO read-acceleration cache method for a storage system, comprising the following steps: searching the memory for the required data and, on a hit, reading the data and returning; on a miss, searching the solid-state disk and, on a hit, loading the data into the memory and deleting it from the solid-state disk; and on a miss, reading the required data from the magnetic disk and loading it into the memory.
In an embodiment of the invention, when the required data misses in the memory, it is judged whether the memory has a free block; if so, the required data is read from the solid-state disk and loaded into the memory; if not, the memory's replacement algorithm selects an eviction block and writes it to the solid-state disk.
In an embodiment of the invention, before the memory's replacement algorithm writes the selected eviction block to the solid-state disk, it is judged whether the solid-state disk has a free block; if so, the eviction block selected by the memory's algorithm is written to the solid-state disk; if not, the solid-state disk's replacement algorithm selects an eviction block and discards it.
In an embodiment of the invention, both the memory's algorithm and the solid-state disk's algorithm are the LRU algorithm.
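The claimed read path and two-level eviction can be sketched as a minimal in-memory model (the class name, block-keyed dictionaries, and capacities are illustrative, not from the patent):

```python
from collections import OrderedDict

class TwoLevelReadCache:
    """Sketch of the claimed read path: a memory-resident L1 cache backed by
    an SSD-resident L2 cache, with mutually exclusive contents."""

    def __init__(self, l1_capacity, l2_capacity, disk):
        self.l1 = OrderedDict()          # memory cache, LRU order (oldest first)
        self.l2 = OrderedDict()          # SSD cache, LRU order (oldest first)
        self.l1_capacity = l1_capacity
        self.l2_capacity = l2_capacity
        self.disk = disk                 # primary store: dict of block -> data

    def read(self, block):
        if block in self.l1:             # L1 hit: read and return
            self.l1.move_to_end(block)
            return self.l1[block]
        if block in self.l2:             # L2 hit: promote to L1, delete from L2
            data = self.l2.pop(block)
        else:                            # miss everywhere: read from the disk
            data = self.disk[block]
        self._load_into_l1(block, data)
        return data

    def _load_into_l1(self, block, data):
        if len(self.l1) >= self.l1_capacity:     # no free L1 block: evict
            victim, vdata = self.l1.popitem(last=False)
            if len(self.l2) >= self.l2_capacity: # no free L2 block: discard one
                self.l2.popitem(last=False)
            self.l2[victim] = vdata              # evicted L1 block goes to SSD
        self.l1[block] = data
```

Because a block promoted from L2 is deleted there, the two tiers stay mutually exclusive, as the text requires.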
The present invention also provides an IO read-acceleration cache system for a storage system, installed on a host and mainly comprising an operational module, a Linux kernel module, a solid-state disk, and a magnetic disk. The Linux kernel module comprises an IO cache suite module, a memory, and a block-layer interface module. The memory serves as the first-level cache of data; the solid-state disk serves as the second-level cache of data. The block-layer interface module provides services for the operational module; it accesses the memory directly with ordinary instructions, and accesses the solid-state disk and the magnetic disk through interfaces provided by their drivers. The IO cache suite module is responsible for looking up, loading, and caching data during read operations. The memory, the solid-state disk, and the magnetic disk all deliver data to the operational module through the block-layer interface module; the solid-state disk and the magnetic disk load data into the memory through the IO cache suite module; the memory writes data to the solid-state disk; and the magnetic disk may also deliver data directly to the operational module through the block-layer interface module.
In an embodiment of the invention, the IO cache suite module queries the memory when the operational module accesses the disk, queries the solid-state disk when the data misses in the memory, and queries the magnetic disk when the data also misses in the solid-state disk.
In an embodiment of the invention, the IO cache suite module reads the data and returns on a hit in the memory, loads the data into the memory on a hit in the solid-state disk, and loads the data into the memory when it is read from the magnetic disk.
In an embodiment of the invention, the IO cache suite module also judges, when the data misses in the memory, whether the memory has a free block; when the memory has no free block, the memory's LRU algorithm selects an eviction block and writes it to the solid-state disk.
In an embodiment of the invention, the IO cache suite module also judges, before the eviction block is written to the solid-state disk, whether the solid-state disk has a free block, and writes the eviction block to the solid-state disk when it has one; when the solid-state disk has no free block, the solid-state disk's LRU algorithm selects an eviction block and discards it from the solid-state disk.
As described above, the IO read-acceleration cache method and system of the present invention have the following beneficial effects:
1. The data-loss defect of page-cache write operations is avoided, while the high-speed access performance of the memory is fully exploited.
2. The memory and solid-state-disk caches are mutually exclusive tiers: a data block never resides in both at the same time, so no space is wasted on duplicates. Data evicted from the memory is written to the solid-state disk, so the hottest data stays in memory while warm data stays on the solid-state disk, avoiding the situation in which warm data, once evicted from memory, misses on its next access.
Brief description of the drawings
Fig. 1 is a schematic diagram of the standard Linux kernel stack.
Fig. 2 is a schematic diagram of the stack when a solid-state disk alone is used as the cache.
Fig. 3 is a schematic diagram of the architecture of the IO read-acceleration cache system of the present invention.
Fig. 4 is a schematic diagram of the stack of the IO read-acceleration cache system of the present invention.
Fig. 5 is a flow chart of a read operation of the IO read-acceleration cache method of the present invention in an embodiment.
Fig. 6 is a deployment diagram of the IO read-acceleration cache system of the present invention in an embodiment.
Description of element numbers
1 Host
11 Linux kernel module
12 Operational module
13 Solid-state disk
14 Magnetic disk
111 IO cache suite module
112 Memory
113 Block-layer interface module
S01–S12 Steps
Detailed description of the embodiments
The embodiments of the present invention are described below by way of specific examples; those skilled in the art can readily understand the further advantages and effects of the invention from the content disclosed in this specification. The invention may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various ways from different viewpoints and for different applications without departing from the spirit of the invention. It should be noted that, where they do not conflict, the features of the following embodiments may be combined with one another.
It should be noted that the drawings provided with the following embodiments illustrate the basic concept of the invention only schematically: they show only the components related to the invention rather than the actual number, shapes, and sizes of components in an implementation. In practice the form, quantity, and proportions of the components may vary arbitrarily, and the component layout may be more complex.
Under a caching technique, a piece of data may have one or more copies, normally two: the copy in the cache is called the cached copy, and the corresponding copy in the primary store is called the primary copy. As described above, the basic principle of caching is to store data likely to be accessed frequently in a place that can be accessed faster, thereby improving access efficiency. A cache can also be used to accelerate writes: in that case data is first written to the cached copy and synchronized to the primary copy later, so it must be guaranteed that after the cached copy is updated, the primary copy eventually becomes consistent with it. Conversely, if only the primary copy is updated and the cached copy is not, the cached copy must be marked invalid so that subsequent reads do not read it. Finally, and crucially, caching pays off only in scenarios where data is accessed repeatedly; it is of almost no help when data is rarely accessed more than once.
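The cached-copy/primary-copy invalidation rule described above can be shown in a toy sketch (the class and method names are hypothetical, chosen only for illustration):

```python
class InvalidatingCache:
    """Toy illustration of the invalidation rule: if the primary copy is
    updated without the cached copy, the cached copy must be marked invalid
    so that later reads do not return stale data."""

    def __init__(self, primary):
        self.primary = primary   # primary store: dict of key -> data
        self.cache = {}          # key -> (data, valid flag)

    def read(self, key):
        entry = self.cache.get(key)
        if entry is not None and entry[1]:   # valid cached copy: use it
            return entry[0]
        data = self.primary[key]             # fetch the primary copy
        self.cache[key] = (data, True)       # cache it for subsequent access
        return data

    def write_primary_only(self, key, data):
        self.primary[key] = data
        if key in self.cache:                # cached copy is now stale
            self.cache[key] = (self.cache[key][0], False)
```

After `write_primary_only`, the next `read` misses the invalid cached copy and refetches the primary copy, as the rule requires.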
Regardless of the scenario, the cache device is of limited size, its storage space generally being much smaller than that of the main device. When cache space runs out, the cache software reclaims storage space to hold new data by some mechanism; such mechanisms are called cache replacement algorithms, common examples being FIFO, LRU, and MRU.
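As a concrete illustration of one of the replacement algorithms just named, a minimal LRU cache might look like this (the capacity and key types are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU replacement policy: on overflow, the least recently
    used entry is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()       # least recently used entry first

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)      # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        elif len(self.items) >= self.capacity:
            self.items.popitem(last=False)   # evict least recently used
        self.items[key] = value
```

FIFO would differ only in not reordering on `get`; MRU would evict from the opposite end.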
Referring to Fig. 1: in a Linux system, the caching mechanism in the standard kernel stack is the page cache, which provides caching for files (including device files). The page cache is essentially a write-back cache; because it has no persistence mechanism, it may lose data.
The industry currently tends not to use the page cache, i.e., read and write operations use direct mode, and some cache suites, such as flashcache, enhanceio, and bcache, use a solid-state disk (SSD) to provide a cache for the disk and thereby improve read/write performance. Fig. 2 shows the stack when a solid-state disk alone is used as the cache. As can be seen from Fig. 2, when a cache suite is used, the application must bypass the page cache with direct reads and writes, and read/write requests are intercepted and processed by the cache suite before they reach the target disk. Clearly, write operations cannot use memory as a cache, or the same problem as with the page cache would arise; read operations, however, can. Yet the existing cache suites have abandoned the use of memory entirely. Although a solid-state disk has good read performance, for hot data the access speed of memory is far higher than that of a solid-state disk. Therefore, if memory can be used reasonably within a cache suite, read performance will improve accordingly.
Fig. 3 is a schematic diagram of the architecture of the IO read-acceleration cache system of the present invention. The system is installed on a host 1 or a virtual machine and mainly comprises an operational module 12, a Linux kernel module 11, a solid-state disk 13, and a magnetic disk 14. The Linux kernel module 11 comprises an IO cache suite module 111, a memory 112, and a block-layer interface module 113. The memory 112 serves as the first-level cache of data; the solid-state disk 13 serves as the second-level cache of data. The block-layer interface module 113 provides services for the operational module 12; it accesses the memory 112 directly with ordinary instructions, and accesses the solid-state disk 13 and the magnetic disk 14 through interfaces provided by their drivers. The IO cache suite module 111 is responsible for looking up, loading, and caching data during read operations. The memory 112, the solid-state disk 13, and the magnetic disk 14 all deliver data to the operational module 12 through the block-layer interface module 113; the solid-state disk 13 and the magnetic disk 14 load data into the memory 112 through the IO cache suite module 111; the memory 112 writes data to the solid-state disk 13; and the magnetic disk 14 may also deliver data directly to the operational module 12 through the block-layer interface module 113.
In an embodiment of the invention, the IO cache suite module 111 queries the memory 112 when the operational module 12 accesses the disk 14, queries the solid-state disk 13 when the data misses in the memory 112, and queries the magnetic disk 14 when the data also misses in the solid-state disk 13.
In an embodiment of the invention, the IO cache suite module 111 reads the data and returns on a hit in the memory 112, loads the data into the memory 112 on a hit in the solid-state disk 13, and loads the data into the memory 112 when it is read from the magnetic disk 14.
In an embodiment of the invention, the IO cache suite module 111 also judges, when the data misses in the memory 112, whether the memory 112 has a free block; when the memory 112 has no free block, its LRU algorithm selects an eviction block and writes it to the solid-state disk 13.
In an embodiment of the invention, the IO cache suite module 111 also judges, before the eviction block is written to the solid-state disk 13, whether the solid-state disk 13 has a free block, and writes the eviction block to the solid-state disk 13 when it has one; when the solid-state disk 13 has no free block, its LRU algorithm selects an eviction block and discards it from the solid-state disk 13.
Referring to Fig. 4, a schematic diagram of the stack of the IO read-acceleration cache system of the present invention: viewed as a whole, the memory and the solid-state disk together serve as the cache of the disk; hierarchically, the memory is the first-level cache and the solid-state disk the second-level cache. To use memory fully and effectively, the data in the memory and the data in the solid-state disk are mutually exclusive, i.e., a cached data block never resides in the memory and the solid-state disk at the same time.
The present invention also provides an IO read-acceleration cache method for a storage system, comprising the following steps: searching the memory for the required data and, on a hit, reading the data and returning; on a miss, searching the solid-state disk and, on a hit, loading the data into the memory and deleting it from the solid-state disk; and on a miss, reading the required data from the magnetic disk and loading it into the memory.
In an embodiment of the invention, when the required data misses in the memory, it is judged whether the memory has a free block; if so, the required data is read from the solid-state disk and loaded into the memory; if not, the memory's replacement algorithm selects an eviction block and writes it to the solid-state disk.
In an embodiment of the invention, before the memory's replacement algorithm writes the selected eviction block to the solid-state disk, it is judged whether the solid-state disk has a free block; if so, the eviction block selected by the memory's algorithm is written to the solid-state disk; if not, the solid-state disk's replacement algorithm selects an eviction block and discards it.
In an embodiment of the invention, both the memory's algorithm and the solid-state disk's algorithm are the LRU algorithm.
Fig. 5 is a flow chart of a read operation of the IO read-acceleration cache method of the present invention in another embodiment; the dotted lines in the figure denote asynchronous execution. The specific steps are as follows:
Step S01: the IO cache suite searches the memory for the required data;
Step S02: judge whether the data is hit; if so, read the data and return; if not, proceed to step S03;
Step S03: judge whether the memory serving as the first-level cache has a free block; if not, proceed to step S04; if so, proceed to step S08;
Step S04: the first-level cache selects an eviction block by the LRU algorithm, to be placed into the second-level-cache solid-state disk;
Step S05: judge whether the second-level-cache solid-state disk is full; if so, proceed to step S06; if not, proceed to step S07;
Step S06: the second-level cache selects an eviction block by the LRU algorithm and discards it;
Step S07: write the first-level cache's eviction block into the second-level-cache solid-state disk;
Step S08: the IO cache suite searches the solid-state disk for the required data;
Step S09: judge whether the data is hit; if not, proceed to step S10; if so, proceed to step S11;
Step S10: the IO cache suite reads the required data from the magnetic disk, and proceed to step S12;
Step S11: delete the hit data from the solid-state disk;
Step S12: write the data into the memory, and finish.
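Under the assumption that the two cache levels are modeled as ordered dictionaries kept in LRU order, steps S01–S12 above can be sketched as a single function (the names and signature are illustrative, not from the patent):

```python
from collections import OrderedDict

def read_with_cache(block, l1, l2, disk, l1_cap, l2_cap):
    """Sketch of the Fig. 5 flow; l1 and l2 are OrderedDicts kept in LRU
    order (oldest first), and disk is the primary store."""
    # S01/S02: search the L1 memory cache; on a hit, read and return
    if block in l1:
        l1.move_to_end(block)
        return l1[block]
    # S03: does L1 have a free block? If not, make room first
    if len(l1) >= l1_cap:
        # S04: the L1 LRU algorithm selects an eviction block for L2
        victim, vdata = l1.popitem(last=False)
        # S05/S06: if L2 is full, its LRU algorithm discards a block
        if len(l2) >= l2_cap:
            l2.popitem(last=False)
        # S07: write L1's eviction block into the L2 SSD
        l2[victim] = vdata
    # S08/S09: search the L2 SSD
    if block in l2:
        # S11: delete the hit data from the SSD (mutual exclusion)
        data = l2.pop(block)
    else:
        # S10: read the required data from the magnetic disk
        data = disk[block]
    # S12: write the data into memory, and finish
    l1[block] = data
    return data
```

Note that eviction (S03–S07) happens before the L2 lookup, so a free L1 slot is always available for the block about to be loaded.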
Fig. 6 is a deployment diagram of the IO read-acceleration cache system of the present invention in an embodiment. After the cache suite module is inserted into the kernel, the configuration commands provided by the suite are used to configure a cache for a given hard disk. As shown in Fig. 6, when an application reads data for the first time, the data is read from the hard disk and simultaneously written into the cache; subsequent reads then come from the cache. In the processing flow of the cache suite, the memory (first-level cache) is searched first; on a hit the data is read and returned, on a miss the second-level cache is searched, and only as a last resort is the disk read. If the second-level cache hits, the data block is reloaded into the first-level cache and deleted from the second-level cache. If the second-level cache misses, the data is read from the disk and loaded into the first-level cache. If the first-level cache has no free block to write into, its LRU algorithm selects an eviction block and writes it to the second-level cache; and if the second-level cache has no free block, its LRU algorithm selects an eviction block and discards it.
With this memory + solid-state-disk two-level cache design, the hottest data (the data accessed most frequently) is guaranteed to be served from memory, while warm data, after being deleted from memory, is written to the solid-state disk for subsequent access. This both fully exploits the high-speed access performance of memory and improves the hit rate of data accesses.
In summary, the IO read-acceleration cache method and system of the present invention avoid the data-loss defect of page-cache write operations while fully exploiting the high-speed access performance of the memory. The memory and solid-state-disk caches are mutually exclusive tiers: a data block never resides in both at the same time, so no space is wasted on duplicates. Data evicted from the memory is written to the solid-state disk, so the hottest data stays in memory while warm data stays on the solid-state disk, avoiding the situation in which warm data, once evicted from memory, misses on its next access. The present invention thus effectively overcomes various shortcomings of the prior art and has high industrial value.
The above embodiments merely illustrate the principle and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (9)

1. An IO read-acceleration caching method for a storage system, characterized in that it comprises the following steps:
Step S1: searching memory for the requested data; on a hit, reading the data and returning; on a miss, proceeding to step S2;
Step S2: searching the solid state drive for the requested data; on a hit, loading the data into memory and deleting the data from the solid state drive; on a miss, proceeding to step S3;
Step S3: reading the requested data from the disk and loading the data into memory.
2. The IO read-acceleration caching method for a storage system according to claim 1, characterized in that step S1 further comprises: when the requested data misses in memory, judging whether the memory has a free block; if so, reading the requested data from the solid state drive and loading the data into memory; if not, writing an eviction block selected by the memory's algorithm to the solid state drive.
3. The IO read-acceleration caching method for a storage system according to claim 2, characterized in that, before the memory's algorithm selects an eviction block to write to the solid state drive, it is judged whether the solid state drive has a free block; if so, the eviction block selected by the memory's algorithm is written to the solid state drive; if not, the solid state drive's algorithm selects an eviction block and discards it.
4. The IO read-acceleration caching method for a storage system according to claim 3, characterized in that the memory's algorithm and the solid state drive's algorithm are both LRU algorithms.
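The eviction chain of claims 2 through 4 can be sketched as a small helper function. This is an illustrative simulation under stated assumptions: Python `OrderedDict` objects stand in for the two LRU lists, and the function name and parameters are hypothetical, not taken from the patent.

```python
from collections import OrderedDict


def evict_from_memory(mem_lru: OrderedDict, ssd_lru: OrderedDict,
                      ssd_capacity: int):
    """Sketch of claims 2-4: when memory has no free block, its LRU
    victim is written to the SSD; if the SSD also has no free block,
    the SSD's own LRU victim is discarded first."""
    block, data = mem_lru.popitem(last=False)   # memory's LRU eviction block
    if len(ssd_lru) >= ssd_capacity:            # SSD has no free block:
        ssd_lru.popitem(last=False)             # discard the SSD's LRU victim
    ssd_lru[block] = data                       # write the memory victim to SSD
    return block
```

Note the asymmetry the claims describe: a block evicted from memory gets a second chance on the SSD, whereas a block evicted from the SSD is dropped outright, since the authoritative copy still exists on disk.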
5. An IO read-acceleration caching system for a storage system, installed on a host, characterized in that it mainly comprises an operation module, a Linux kernel module, a solid state drive and a disk; the Linux kernel module comprises an IO cache suite module, memory and a block-layer interface module;
The memory serves as the first-level cache for data;
The solid state drive serves as the second-level cache for data;
The block-layer interface module provides services for the operation module; the block-layer interface module accesses the memory directly using general instructions, and accesses the solid state drive and the disk through the interfaces provided by their drivers;
The IO cache suite module is used for searching, loading and caching data during read operations;
The memory, the solid state drive and the disk all send data to the operation module through the block-layer interface module; the solid state drive and the disk load data into the memory through the IO cache suite module; the memory writes data to the solid state drive; the disk writes data directly to the operation module through the block-layer interface module.
6. The IO read-acceleration caching system for a storage system according to claim 5, characterized in that the IO cache suite module is used for querying the memory when the operation module accesses the disk, for querying the solid state drive when the data misses in the memory, and for querying the disk when the data misses in the solid state drive.
7. The IO read-acceleration caching system for a storage system according to claim 6, characterized in that the IO cache suite module is further used for reading and returning the data when it hits in the memory, for loading the data into the memory when it hits in the solid state drive, and for loading the data into the memory when it hits in the disk.
8. The IO read-acceleration caching system for a storage system according to claim 7, characterized in that the IO cache suite module is further used for judging whether the memory has a free block when data misses in the memory; the memory's LRU algorithm is used for selecting an eviction block to write to the solid state drive when the memory has no free block.
9. The IO read-acceleration caching system for a storage system according to claim 8, characterized in that the IO cache suite module is further used for judging whether the solid state drive has a free block before the eviction block is written to the solid state drive, and for writing the eviction block to the solid state drive when the solid state drive has a free block; the solid state drive's LRU algorithm is used for selecting an eviction block from the solid state drive and discarding it when the solid state drive has no free block.
CN201510922595.0A 2015-12-11 2015-12-11 IO read speeding cache method and system of storage system Pending CN105573669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510922595.0A CN105573669A (en) 2015-12-11 2015-12-11 IO read speeding cache method and system of storage system


Publications (1)

Publication Number Publication Date
CN105573669A true CN105573669A (en) 2016-05-11

Family

ID=55883863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510922595.0A Pending CN105573669A (en) 2015-12-11 2015-12-11 IO read speeding cache method and system of storage system

Country Status (1)

Country Link
CN (1) CN105573669A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102760101A (en) * 2012-05-22 2012-10-31 中国科学院计算技术研究所 SSD-based (Solid State Disk) cache management method and system
CN102945207A (en) * 2012-10-26 2013-02-27 浪潮(北京)电子信息产业有限公司 Cache management method and system for block-level data
US20150149709A1 (en) * 2013-11-27 2015-05-28 Alibaba Group Holding Limited Hybrid storage
CN105138292A (en) * 2015-09-07 2015-12-09 四川神琥科技有限公司 Disk data reading method


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451152A (en) * 2016-05-31 2017-12-08 阿里巴巴集团控股有限公司 Computing device, data buffer storage and the method and device of lookup
CN108540367A (en) * 2017-03-06 2018-09-14 中国移动通信有限公司研究院 A kind of message treatment method and system
CN107422994A (en) * 2017-08-02 2017-12-01 郑州云海信息技术有限公司 A kind of method for improving reading and writing data performance
CN107402819A (en) * 2017-08-04 2017-11-28 郑州云海信息技术有限公司 The management method and system of a kind of client-cache
CN107992271A (en) * 2017-12-21 2018-05-04 郑州云海信息技术有限公司 Data pre-head method, device, equipment and computer-readable recording medium
CN109446222A (en) * 2018-08-28 2019-03-08 厦门快商通信息技术有限公司 A kind of date storage method of Double buffer, device and storage medium
CN109582246A (en) * 2018-12-06 2019-04-05 深圳市网心科技有限公司 Data access method, device, system and readable storage medium storing program for executing based on mine machine
WO2020113941A1 (en) * 2018-12-06 2020-06-11 深圳市网心科技有限公司 Data access method, apparatus and system based on mining computer, and readable storage medium
CN110472004A (en) * 2019-08-23 2019-11-19 国网山东省电力公司电力科学研究院 A kind of method and system of scientific and technological information data multilevel cache management
CN111124279A (en) * 2019-11-29 2020-05-08 苏州浪潮智能科技有限公司 Storage deduplication processing method and device based on host
CN112579630A (en) * 2020-12-28 2021-03-30 北京思特奇信息技术股份有限公司 Commodity retrieval method and system and electronic equipment
CN112732190A (en) * 2021-01-07 2021-04-30 苏州浪潮智能科技有限公司 Method, system and medium for optimizing data storage structure
CN112732190B (en) * 2021-01-07 2023-01-10 苏州浪潮智能科技有限公司 Method, system and medium for optimizing data storage structure
CN113326214A (en) * 2021-06-16 2021-08-31 统信软件技术有限公司 Page cache management method, computing device and readable storage medium
CN113326214B (en) * 2021-06-16 2023-06-16 统信软件技术有限公司 Page cache management method, computing device and readable storage medium
CN113721846A (en) * 2021-07-30 2021-11-30 苏州浪潮智能科技有限公司 Method, system, device and medium for optimizing input and output performance of mechanical hard disk
CN113721846B (en) * 2021-07-30 2023-08-25 苏州浪潮智能科技有限公司 Method, system, equipment and medium for optimizing mechanical hard disk input and output performance
WO2023050488A1 (en) * 2021-09-30 2023-04-06 福建极存数据科技有限公司 Disk performance improvement method and terminal
CN117389958A (en) * 2023-12-08 2024-01-12 中汽研汽车检验中心(广州)有限公司 Searching and processing method for mo file
CN117389958B (en) * 2023-12-08 2024-04-09 中汽研汽车检验中心(广州)有限公司 Searching and processing method for mo file

Similar Documents

Publication Publication Date Title
CN105573669A (en) IO read speeding cache method and system of storage system
US10176057B2 (en) Multi-lock caches
US10042779B2 (en) Selective space reclamation of data storage memory employing heat and relocation metrics
Zhou et al. Spitfire: A three-tier buffer manager for volatile and non-volatile memory
US11061572B2 (en) Memory object tagged memory monitoring method and system
US20140115261A1 (en) Apparatus, system and method for managing a level-two cache of a storage appliance
US10019377B2 (en) Managing cache coherence using information in a page table
US9229869B1 (en) Multi-lock caches
US20150149742A1 (en) Memory unit and method
US9552301B2 (en) Method and apparatus related to cache memory
US10185498B2 (en) Write buffer design for high-latency memories
CN108153682B (en) Method for mapping addresses of flash translation layer by utilizing internal parallelism of flash memory
KR100678913B1 (en) Apparatus and method for decreasing page fault ratio in virtual memory system
CN109478164A (en) For storing the system and method for being used for the requested information of cache entries transmission
CN109002400B (en) Content-aware computer cache management system and method
US8732404B2 (en) Method and apparatus for managing buffer cache to perform page replacement by using reference time information regarding time at which page is referred to
CN109983538B (en) Memory address translation
KR20220154612A (en) Method of cache management based on file attributes, and cache management device operating based on file attributes
KR101976320B1 (en) Last level cache memory and data management method thereof
JP4792065B2 (en) Data storage method
CN107423232B (en) FTL quick access method and device
US20240061786A1 (en) Systems, methods, and apparatus for accessing data in versions of memory pages
WO2022161619A1 (en) A controller, a computing arrangement, and a method for increasing readhits in a multi-queue cache
JP2017062597A (en) Information processing system and information processing method
CN103294613A (en) Memory access method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160511