CN102043731A - Cache system of storage system - Google Patents

Cache system of storage system

Info

Publication number
CN102043731A
CN102043731A (application CN2010105985270A / CN201010598527A)
Authority
CN
China
Prior art keywords
equipment
layer
caching
block device
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010105985270A
Other languages
Chinese (zh)
Inventor
许建卫
骆志军
邵宗有
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TIANJIN SUGON COMPUTER INDUSTRY Co Ltd
Original Assignee
TIANJIN SUGON COMPUTER INDUSTRY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TIANJIN SUGON COMPUTER INDUSTRY Co Ltd filed Critical TIANJIN SUGON COMPUTER INDUSTRY Co Ltd
Priority to CN2010105985270A priority Critical patent/CN102043731A/en
Publication of CN102043731A publication Critical patent/CN102043731A/en
Pending legal-status Critical Current

Abstract

The invention provides a cache system for a storage system, comprising an external interface layer, a cache management module, and a block device hardware layer. The external interface layer comprises a user interface and a standard block device interface; the cache management module comprises a virtual device mapping layer and a core management layer; and the block device hardware layer comprises a RAM-based (Random Access Memory) high-speed block device and a conventional standard block device. The invention offers low system price, high performance, and large capacity.

Description

A cache system for a storage system
Technical field
The present invention relates to the field of storage system performance optimization, and in particular to a cache system for a storage system.
Background technology
Over the course of computing's evolution, CPU speed has grown rapidly, by roughly 60% per year in line with Moore's Law. But the access speed of disk systems, the main storage devices, has improved far more slowly, by only about 7% per year. The performance gap between CPU and disk therefore keeps widening, and the "I/O wall" has become the new bottleneck of computer systems, after the "CPU wall" and the "memory wall".
Externally, this speed gap means that the server side cannot provide stable and reliable service, and clients cannot obtain timely, high-quality I/O responses. The common solution at present is to increase the number of servers, and thereby the number of storage components, to address the I/O problem, but this wastes computational resources. Even if storage performance is improved simply by increasing the number of disk devices, much of the additional disk space is wasted; in addition, power, machine-room, and environmental costs keep rising as the number of servers or disk devices grows.
To make up for the performance deficiencies of disk-based storage devices, many new storage media have emerged, such as Flash and PCM. However, the advantages of disks, such as large capacity and low price, mean that their dominant position in storage systems will not change in the short term. How to improve the performance of existing disk systems is therefore very important.
The concept of tiered storage has been widely adopted in computer systems. The cache layer mainly performs two tasks. On one hand, it retains previously accessed data: by the temporal-locality principle of data access, such data is likely to be accessed again within a short time, so it can be read directly from the cache layer without accessing the primary storage medium again. On the other hand, the cache layer predicts the data an application is about to access and reads the predicted content into the cache medium in advance. When the prediction is correct, subsequent accesses hit directly in the cache medium, likewise avoiding the slower primary medium.
The cache medium therefore needs to be wear-resistant, fast to read and write, balanced between reads and writes, and low in system interference. Existing FLASH-based storage devices have limited write endurance, and their write bandwidth is markedly lower than their read bandwidth, which makes them unsuitable as cache media. Using host memory as the cache medium is also problematic: data migration within memory requires CPU participation, so a host-memory cache interferes with system performance.
Summary of the invention
To address the above shortcomings, the present invention starts from the caching and prefetching techniques of storage systems and adds a cache layer between the disk devices and the memory system, forming a hierarchical storage structure and improving the performance of the underlying devices and of the storage system as a whole.
A cache system for a storage system comprises an external interface layer, a cache management module, and a block device hardware layer;
the external interface layer comprises a user interface and a standard block device interface;
the cache management module comprises a virtual device mapping layer and a core management layer;
the block device hardware layer comprises a RAM-based high-speed block device and a conventional standard block device.
In a first preferred technical scheme of the present invention, applications perform read, write, and ioctl operations on devices through the standard block device interface, and users construct, manage, and configure the cache system through the user interface.
In a second preferred technical scheme, the virtual device mapping layer is used to establish the mapping and address translation between the cache device and the standard storage device.
In a third preferred technical scheme, the core management layer manages and organizes the underlying storage devices, handles concrete I/O requests, and implements and switches among the various policies.
In a fourth preferred technical scheme, the RAM-based high-speed block device is used as the cache for the standard block device, and the standard block device may be a disk, a disk array, a RAID, or a virtual block device.
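The layered structure claimed above can be sketched as plain data types. This is a minimal illustrative model, not the patent's implementation; all class and attribute names are assumptions.

```python
# Illustrative model of the three layers; all names are assumptions
# for illustration, not taken from the patent.

class RamBlockDevice:
    """Block device hardware layer: RAM-based high-speed block device."""
    def __init__(self, size):
        self.data = bytearray(size)

class StandardBlockDevice:
    """Block device hardware layer: disk, disk array, RAID, or virtual block device."""
    def __init__(self, size):
        self.data = bytearray(size)

class CacheManagementModule:
    """Cache management module: virtual device mapping layer + core management layer."""
    def __init__(self, cache_dev, source_dev):
        self.cache_dev = cache_dev      # RAM-based cache device
        self.source_dev = source_dev    # standard block device being cached

class ExternalInterfaceLayer:
    """External interface layer: standard block device interface + user interface."""
    def __init__(self, mgmt):
        self.mgmt = mgmt

# Wire the layers top to bottom: a small RAM cache in front of a larger device.
system = ExternalInterfaceLayer(
    CacheManagementModule(RamBlockDevice(64 * 1024),
                          StandardBlockDevice(1024 * 1024)))
```

The point of the wiring is that upper-layer applications only ever see the external interface layer; the two concrete devices are reachable only through the cache management module.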
The beneficial effects of the present invention are as follows:
Low price: because the back-end storage still uses a disk-based storage system, the price of the whole system is lower than that of an all-FLASH system.
High performance: under conditions of a high hit rate, most accesses hit in the high-speed cache medium, so the system can deliver performance close to that of the cache medium.
Large capacity: because the data is ultimately stored on external storage, the final system capacity is the disk-based storage capacity.
Description of drawings
Fig. 1 is the architecture diagram of the cache system of the present invention
Fig. 2 is the data access flow in the system of the present invention
Fig. 3 is the read request handling flow of the present invention
Fig. 4 is the write request handling flow of the present invention
Embodiment
As shown in Fig. 1, the top layer of the cache system is the external interface layer, which provides the user interface and the block device operation interface. Below the external interface layer is the cache management module, the main part realized by the present design, comprising the virtual device mapping layer and the core management layer. The bottom is the block device hardware layer, composed of a RAM-based high-speed block device and a conventional standard block device, where the RAM-based high-speed block device serves as the cache for the standard block device.
Within the cache management module, the virtual device mapping layer mainly implements the address mapping and encapsulation between the virtual cache device presented by the cache system and the raw data-source device. The core management layer maps cache units between the underlying cache device and the standard data-source device, establishes the cache mapping relationships, and manages the cache through the associated data structures; by managing cache-unit states, it dispatches and controls I/O requests.
The functions of each module are divided as follows:
1. Standard block device interface and user interface layer
The standard block device interface presents applications with a standard block device view, so that upper-layer applications can conveniently use the storage device exposed by the cache module. Common block device operations such as read, write, and ioctl can be performed on the device through the standard block device interface.
The user interface layer provides upper-layer applications with an interface for constructing, managing, and configuring cache devices. Through this interface the user supplies the sizes and relative addresses of the cache device and the data-source device, the name required for the cache system to be generated, and so on.
The management and configuration interfaces provided by the user interface layer are mainly used to manage and configure the attributes of the cache system, for example the policies of the core management layer, the size of the cache device, and the size of the cache unit.
2. Device mapping layer
The device mapping layer is mainly used to establish the mapping and address translation between the cache device and the standard storage device. It integrates the original addresses of the two devices into a new logical address space; accesses to this logical address space automatically fall into the cache system.
The device mapping layer is also responsible for translating accesses to logical addresses into physical addresses on the physical devices, so that accesses to the virtual device actually land on the physical devices.
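The logical-to-physical translation described above can be sketched as follows. The cache-unit size, the dictionary-based mapping, and all names are assumptions for illustration; the patent does not specify these details.

```python
UNIT = 4096  # assumed cache-unit size; the patent does not fix a value

class DeviceMap:
    """Resolve a logical address to (device, physical offset).

    The logical address space mirrors the source device; any unit
    present in `cache_map` is redirected to the cache device."""
    def __init__(self):
        self.cache_map = {}  # logical unit index -> cache-device unit index

    def resolve(self, logical_off):
        unit, within = divmod(logical_off, UNIT)
        if unit in self.cache_map:
            # access falls into the cache system: translate to the cache device
            return ("cache", self.cache_map[unit] * UNIT + within)
        # not cached: the access lands on the source device unchanged
        return ("source", logical_off)

m = DeviceMap()
m.cache_map[2] = 0               # logical unit 2 lives in cache unit 0
hit = m.resolve(2 * UNIT + 100)  # redirected to the cache device
miss = m.resolve(5 * UNIT)       # passed through to the source device
```

The key property is that the caller addresses one virtual device only; which physical device actually serves the access is decided entirely inside the mapping layer.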
3. Core management layer
The core management layer is the core module of the whole cache system. It is mainly responsible for managing and organizing the underlying storage devices, handling concrete I/O requests, and implementing and switching among the various policies.
The core management layer contains various data structures: structures for device management and for the two underlying block devices, structures for data transfer and for managing the basic data blocks, the worker threads that perform data processing and cache management and control the I/O processing flow, and the data structures of the various policies.
The policies of the core management layer are all designed to keep system overhead as low as possible; the specific policies can be divided into the address mapping scheme, the write policy, the reclamation policy, and the prefetch policy.
4. Physical device layer
The physical device layer comprises the concrete block devices and sits at the bottom of the cache system. Each generated cache system contains two concrete devices: a high-speed block device and a standard block device. The standard block device may be a disk, a disk array, a RAID system, or a virtual block device; the high-speed block device is a high-performance RAM-based storage device.
According to the I/O request information passed down from the upper layers, each device in the physical device layer processes the I/O requests sent to it and performs the corresponding data transfers.
The working process of the cache system of the present invention is described below:
As shown in Fig. 2, handling a request in the cache system is in fact a process of analyzing the request according to its information, then converting, forwarding, expanding, and finally completing it according to the results of the analysis. Request handling is the main line running through the whole module. The handling path splits into two routes according to whether the request is a read or a write. Cache state is the deciding factor during handling: the data structure of each cache unit records that unit's cache state and controls the request-handling flow.
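The per-unit state recorded in each cache-unit descriptor can be sketched as a small state machine. The three state names used here (INVALID, FILLING, VALID) are assumptions chosen to be consistent with the read and write flows described below; the patent does not name its states.

```python
from enum import Enum, auto

class UnitState(Enum):
    INVALID = auto()   # no usable data in the unit
    FILLING = auto()   # update request sent; data being read from the source
    VALID   = auto()   # the unit holds the latest data

class CacheUnit:
    """Descriptor recording one cache unit's state, which drives request handling."""
    def __init__(self):
        self.state = UnitState.INVALID

    def start_fill(self):
        assert self.state is UnitState.INVALID
        self.state = UnitState.FILLING

    def finish_fill(self):
        assert self.state is UnitState.FILLING
        self.state = UnitState.VALID

    def invalidate(self):  # e.g. on a write under the write-through policy
        self.state = UnitState.INVALID

u = CacheUnit()
u.start_fill()
u.finish_fill()   # the unit now holds the latest data
```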
For read requests, what the cache space mainly does is cache the data of the source data device and prefetch subsequent data according to the information in the read request.
The read request handling flow is shown in Fig. 3.
When the storage system receives a read request, it first analyzes the requested data address in combination with the prefetch policy and the history of past requests. If it decides to prefetch, the prefetched data and the original data request can be processed simultaneously.
If the requested content is to be cached, a cache unit is first allocated for the cache-unit address range covering the read request, and the read request is handed to the source data device. When the source data device has copied the requested data into the requesting host memory region, a new request is constructed to write the data in that memory region into the cache unit on the cache device.
If the address range of the request has already been mapped by the cache device, that is, the request's address space is contained in some cache unit, the state of that cache unit is queried and handling proceeds according to the state. If the data has been placed into the cache device from the source storage device, or the cache device holds the latest version of the data, the data can be obtained directly from the cache device: the request is converted and handed directly to the cache device.
If the state in the cache-unit descriptor is "data invalid, but update request sent", the cache device is in the middle of reading the data in from the source data device. At this moment, reading from the cache device could return stale and inconsistent data, while reading from the source data device would cause the data to be read twice; both are inappropriate. In this case, the cache system converts the request into a request on the cache device, waits for the cache unit to finish filling, and hands the request to the cache device once the state becomes valid.
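The read path just described (hit, fill-in-progress, and miss-with-fill) can be sketched as a single dispatch function. All names, the dict-based unit descriptor, and the synchronous stand-in for waiting on a fill are illustrative assumptions, not the patent's implementation.

```python
def handle_read(unit, cache_read, source_read, fill_cache):
    """Dispatch a read by cache-unit state, loosely following Fig. 3.

    `unit` is a dict with a 'state' key; the callables stand in for
    requests issued to the cache device and the source data device."""
    if unit["state"] == "valid":
        return cache_read()          # hit: serve directly from the cache device
    if unit["state"] == "filling":
        # unit is being filled from the source: wait for the fill to
        # complete, then serve from the cache device
        unit["state"] = "valid"      # synchronous stand-in for that wait
        return cache_read()
    data = source_read()             # miss: read from the source data device
    fill_cache(data)                 # a new request writes the data into the cache unit
    unit["state"] = "valid"
    return data

filled = []
u = {"state": "invalid"}
out = handle_read(u, lambda: b"cached", lambda: b"from-source", filled.append)
```

On this miss path the caller gets the source data, and as a side effect the cache unit is filled and marked valid, so a repeat of the same read would hit in the cache branch.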
As shown in Fig. 4, the handling of write requests depends on the write policy adopted by the cache. Under the write-through policy, the data in the cache is directly marked invalid, the request is converted into a request on the source data device, and the converted request is forwarded to the source data device. Under the write-back policy, the write request is handled as shown in Fig. 4: if the request's address space is not cached, the request is directly converted and handed to the source data device; otherwise, the cache-unit state is queried. If the data in the cache is valid, the cached data is the latest and the request is handed to the cache device; if the cache unit is in none of the above states, the request is handed to the source data device.
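The two write policies can be sketched in one dispatch function. The 'dirty' marking under write-back is a standard write-back detail assumed here for illustration; the patent only says the request is handed to the cache device. All names are assumptions.

```python
def handle_write(policy, unit, cache_write, source_write):
    """Dispatch a write by write policy, loosely following Fig. 4.

    `unit` is a dict with a 'state' key; the callables stand in for
    requests issued to the cache device and the source data device."""
    if policy == "write-through":
        unit["state"] = "invalid"    # directly mark the cached copy invalid
        return source_write()        # convert into a source-device request
    # write-back policy:
    if unit["state"] == "valid":     # cached data is the latest
        unit["state"] = "dirty"      # assumed detail: defer the source update
        return cache_write()
    return source_write()            # not cached, or any other state

u = {"state": "valid"}
target = handle_write("write-back", u, lambda: "cache", lambda: "source")
```

Under write-back a valid unit absorbs the write in the cache device; under write-through the same write would invalidate the unit and go straight to the source device.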

Claims (5)

1. A cache system for a storage system, characterized by comprising an external interface layer, a cache management module, and a block device hardware layer;
wherein the external interface layer comprises a user interface and a standard block device interface;
the cache management module comprises a virtual device mapping layer and a core management layer;
and the block device hardware layer comprises a RAM-based high-speed block device and a conventional standard block device.
2. The cache system of a storage system according to claim 1, characterized in that applications perform read, write, and ioctl operations on devices through the standard block device interface, and users construct, manage, and configure the cache system through the user interface.
3. The cache system of a storage system according to claim 1, characterized in that the virtual device mapping layer is used to establish the mapping and address translation between the cache device and the standard storage device.
4. The cache system of a storage system according to claim 1, characterized in that the core management layer manages and organizes the underlying storage devices, handles concrete I/O requests, and implements and switches among the various policies.
5. The cache system of a storage system according to claim 1, characterized in that the RAM-based high-speed block device is used as the cache for the standard block device, and the standard block device may be a disk, a disk array, a RAID, or a virtual block device.
CN2010105985270A 2010-12-17 2010-12-17 Cache system of storage system Pending CN102043731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105985270A CN102043731A (en) 2010-12-17 2010-12-17 Cache system of storage system

Publications (1)

Publication Number Publication Date
CN102043731A (en) 2011-05-04

Family

ID=43909882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105985270A Pending CN102043731A (en) 2010-12-17 2010-12-17 Cache system of storage system

Country Status (1)

Country Link
CN (1) CN102043731A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1506843A (en) * 2002-12-12 2004-06-23 国际商业机器公司 Data processing system capable of using virtual memory processing mode
CN1506849A (en) * 2002-12-12 2004-06-23 国际商业机器公司 Data processing system capable of managing virtual memory processing conception
CN1506850A (en) * 2002-12-12 2004-06-23 国际商业机器公司 Data processing system without system memory
CN1602499A (en) * 2002-10-04 2005-03-30 索尼株式会社 Data management system, data management method, virtual memory device, virtual memory control method, reader/writer device, I C module access device, and I C module access control method
CN1722092A (en) * 2004-04-30 2006-01-18 微软公司 VEX - virtual extension framework
CN101727976A (en) * 2008-10-15 2010-06-09 晶天电子(深圳)有限公司 Multi-layer flash-memory device, a solid hard disk and a truncation non-volatile memory system
CN101763226A (en) * 2010-01-19 2010-06-30 北京航空航天大学 Cache method for virtual storage devices

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657298A (en) * 2015-02-11 2015-05-27 昆腾微电子股份有限公司 Reading control device and method
CN105573682A (en) * 2016-02-25 2016-05-11 浪潮(北京)电子信息产业有限公司 SAN storage system and data read-write method thereof
CN105573682B (en) * 2016-02-25 2018-10-30 浪潮(北京)电子信息产业有限公司 A kind of SAN storage system and its data read-write method
CN106528001A (en) * 2016-12-05 2017-03-22 北京航空航天大学 Cache system based on nonvolatile memory and software RAID
CN106528001B (en) * 2016-12-05 2019-08-23 北京航空航天大学 A kind of caching system based on nonvolatile memory and software RAID
CN106951182A (en) * 2017-02-24 2017-07-14 深圳市中博睿存信息技术有限公司 A kind of block device caching method and device
CN107577492A (en) * 2017-08-10 2018-01-12 上海交通大学 The NVM block device drives method and system of accelerating file system read-write


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110504