CN111352575A - Read cache acceleration design of distributed storage cluster system - Google Patents
- Publication number
- CN111352575A (application CN201811576597.9A)
- Authority
- CN
- China
- Prior art keywords
- cache
- read
- ssd
- node
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements › G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers › G06F3/0601—Interfaces specially adapted for storage systems
  - G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect › G06F3/061—Improving I/O performance › G06F3/0611—Improving I/O performance in relation to response time
  - G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique › G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices › G06F3/0656—Data buffering arrangements
  - G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure › G06F3/0671—In-line storage system › G06F3/0673—Single storage device › G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
Abstract
The invention relates to a read cache acceleration design for a distributed storage cluster system, in which an SSD (solid-state disk) is configured as each node's global acceleration cache. The SSD's ability to sustain highly concurrent, low-latency read/write IO is used to meet the demands that service application scenarios place on IO requests with higher concurrency and lower latency. The design improves the hit rate of cached data on the SSD by combining a non-partitioned read/write cache with the LRU and LFU algorithms, covering the data the service has accessed most recently and most frequently to achieve read acceleration: because the read and write caches are not partitioned, recently written data remains on the SSD, and a matching read request is served directly from the SSD cache; the combined LRU/LFU policy keeps the data read most often in the recent period cached on the SSD, so a matching read request is likewise served directly from the cache.
Description
Technical Field
The invention relates to the field of computer storage, in particular to a read cache acceleration design of a distributed storage cluster system.
Background
With the continuous development of information technology, information systems place ever higher demands on software and hardware. High performance, high speed, and high manageability have become basic requirements for a storage system serving as the underlying data layer, and distributed cluster storage systems have been widely applied.
A distributed cluster storage system is an open storage architecture. Running a distributed operating system, it aggregates the storage space of multiple physical storage devices into a single storage pool, that is, a unified namespace, which provides a unified access interface and a management interface to application servers. Through the access interface, an application can easily manage all the disks on the physical storage devices behind the pool, fully exploiting the performance and utilization of the storage devices' disks, and data can be stored on and read from multiple storage devices according to a load-balancing strategy to obtain higher storage performance.
The distributed cluster storage system consists of a server deployed on a management node and proxy servers deployed on the data nodes. The management node's server communicates with each proxy server to collect state information from each data storage node and to manage its related services, providing users with management of the cluster's performance, services, and hardware information. In the prior art, the server requests data from the proxy servers at regular intervals, and each proxy server responds to the request and returns the various requested data.
When the server requests data from a proxy server continuously, the proxy server itself needs a long time to respond, and blocking occurs. When some interfaces take a long time, the proxy server cannot return the data requested by the server on time, and these interface timeouts become more and more pronounced as the system's running time grows, affecting the normal operation of the management system. If the timeout is set too long, this phenomenon is alleviated, but the management system then cannot obtain the proxy server's latest state in time, which also degrades the management system's functionality and user experience.
Summary of the Invention
The invention relates to a read cache acceleration design for a distributed storage cluster system, in which an SSD (solid-state disk) is configured as each node's global acceleration cache. The SSD's ability to sustain highly concurrent, low-latency read/write IO is used to meet the demands that service application scenarios place on IO requests with higher concurrency and lower latency. The design improves the hit rate of cached data on the SSD by combining a non-partitioned read/write cache with the LRU and LFU algorithms, covering the data the service has accessed most recently and most frequently to achieve read acceleration: because the read and write caches are not partitioned, recently written data remains on the SSD, and a matching read request is served directly from the SSD cache; the combined LRU/LFU policy keeps the data read most often in the recent period cached on the SSD, so a matching read request is likewise served directly from the cache.
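The combined LRU/LFU eviction idea described above can be sketched as follows. This is a minimal illustrative model only, not the patented implementation: the class name `LruLfuCache`, the scoring weights, and the in-memory dictionaries are assumptions made for the example (a real node would hold the cached blocks on the SSD solid-state disk).

```python
from collections import OrderedDict

class LruLfuCache:
    """Illustrative cache combining LRU recency with LFU frequency.

    Eviction removes the entry with the lowest combined score, so data
    that was both recently and frequently read tends to stay cached.
    The 50/50 weighting is an assumption for the sketch.
    """

    def __init__(self, capacity, recency_weight=0.5):
        self.capacity = capacity
        self.recency_weight = recency_weight
        self.entries = OrderedDict()   # key -> value, kept in recency order
        self.hits = {}                 # key -> access count

    def get(self, key):
        if key not in self.entries:
            return None                    # miss: caller falls back to the HDD tier
        self.entries.move_to_end(key)      # refresh recency (the LRU signal)
        self.hits[key] += 1                # bump frequency (the LFU signal)
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self._evict()
        self.entries[key] = value
        self.hits.setdefault(key, 1)

    def _evict(self):
        # Normalise both signals to [0, 1]; recency rank 0 = least recent.
        n = len(self.entries)
        max_hits = max(self.hits[k] for k in self.entries)

        def score(item):
            rank, key = item
            recency = rank / max(n - 1, 1)
            frequency = self.hits[key] / max_hits
            return self.recency_weight * recency + (1 - self.recency_weight) * frequency

        victim = min(enumerate(self.entries), key=score)[1]
        del self.entries[victim]
        del self.hits[victim]
```

With a capacity-2 cache, a block read twice survives the insertion of a new block, while a block never re-read is evicted first.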
Drawings
Fig. 1 is a schematic view of a read cache acceleration design structure of a distributed storage cluster system according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
Referring to fig. 1, fig. 1 is a schematic view of a read cache acceleration design structure of a distributed storage cluster system according to the present invention.
The read cache acceleration design of the distributed storage cluster system comprises a 2+1 distributed cluster of node 1 (12a), node 2 (12b), and node 3 (12c). Each node is configured with a volume or file system: the volume/file system (11a) of node 1, the volume/file system (11b) of node 2, and the volume/file system (11c) of node 3. Each node is also configured with an SSD solid-state disk serving as a global cache disk: the SSD solid-state disk (10a) of node 1, the SSD solid-state disk (10b) of node 2, and the SSD solid-state disk (10c) of node 3. The figure further shows a read request (13) and the SSD cache data of node 1 (14a), node 2 (14b), and node 3 (14c).
In the read cache acceleration design of the distributed storage cluster system, the SSD solid-state disk (10a/10b/10c) configured on each node in the cluster serves as a global cache disk.
In the read cache acceleration design of the distributed storage cluster system, the SSD cache data (14a/14b/14c) of a node (12a/12b/12c) is obtained by sharing the SSD solid-state disk (10a/10b/10c) between the write cache and the read cache, and by combining the LRU (Least Recently Used) and LFU (Least Frequently Used) algorithms to cache the data accessed most frequently in the recent period.
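The non-partitioned read/write cache behaviour described here can be illustrated with a small sketch. The names `Node`, `hdd`, and `ssd_cache` are hypothetical, and plain dictionaries stand in for the node's backing volume/file system and its SSD solid-state disk; the point is only that a write populates the same cache that reads consult, so a read following a recent write hits the SSD directly.

```python
class Node:
    """Sketch of one node's IO path with a shared (non-partitioned)
    read/write SSD cache. Dictionaries model the two storage tiers."""

    def __init__(self):
        self.hdd = {}        # backing volume/file system (slow tier)
        self.ssd_cache = {}  # global SSD acceleration cache (fast tier)

    def write(self, block, data):
        self.hdd[block] = data
        # Shared cache: a write also populates the read cache, so a
        # matching read request is served straight from the SSD.
        self.ssd_cache[block] = data

    def read(self, block):
        if block in self.ssd_cache:          # cache hit: SSD latency
            return self.ssd_cache[block], "ssd"
        data = self.hdd[block]               # cache miss: HDD latency
        self.ssd_cache[block] = data         # promote for future reads
        return data, "hdd"
```

In this model, reading a freshly written block returns from the `"ssd"` tier immediately, while a cold block is fetched from `"hdd"` once and then served from the cache thereafter.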
In the read cache acceleration design of the distributed storage cluster system, the SSD solid-state disk (10a/10b/10c) serves as the read acceleration cache, and emerging technology products such as NVMe disks and NVDIMMs can be adopted by extension to replace the SSD solid-state disk (10a/10b/10c).
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the present invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (4)
1. A read cache acceleration design of a distributed storage cluster system, characterized by comprising a 2+1 distributed cluster of node 1 (12a), node 2 (12b), and node 3 (12c), wherein each node is configured with a volume or file system, namely the volume/file system (11a) of node 1, the volume/file system (11b) of node 2, and the volume/file system (11c) of node 3; each node is configured with an SSD solid-state disk serving as a global cache disk, namely the SSD solid-state disk (10a) of node 1, the SSD solid-state disk (10b) of node 2, and the SSD solid-state disk (10c) of node 3; and a read request (13), SSD cache data (14a) of node 1, SSD cache data (14b) of node 2, and SSD cache data (14c) of node 3.
2. The read cache acceleration design of a distributed storage cluster system according to claim 1, characterized in that the SSD solid-state disk (10a/10b/10c) configured on each node within the cluster is a global cache disk.
3. The read cache acceleration design of a distributed storage cluster system according to claim 1, characterized in that the SSD cache data (14a/14b/14c) of the nodes (12a/12b/12c) is obtained by sharing the SSD solid-state disk (10a/10b/10c) between the write cache and the read cache, and by combining the LRU and LFU algorithms to cache recently written data and the data frequently accessed during the recent period.
4. The read cache acceleration design of a distributed storage cluster system according to claim 1, characterized in that the SSD solid-state disk (10a/10b/10c) is used as the read acceleration cache, and emerging technology products such as NVMe disks and NVDIMMs can be adopted by extension to replace the SSD solid-state disk (10a/10b/10c).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811576597.9A CN111352575A (en) | 2018-12-23 | 2018-12-23 | Read cache acceleration design of distributed storage cluster system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111352575A true CN111352575A (en) | 2020-06-30 |
Family
ID=71192504
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811576597.9A Pending CN111352575A (en) | 2018-12-23 | 2018-12-23 | Read cache acceleration design of distributed storage cluster system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111352575A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080046538A1 (en) * | 2006-08-21 | 2008-02-21 | Network Appliance, Inc. | Automatic load spreading in a clustered network storage system |
CN107357532A (en) * | 2017-07-14 | 2017-11-17 | 长沙开雅电子科技有限公司 | A kind of new cache pre-reading implementation method of new cluster-based storage |
CN107643875A (en) * | 2016-07-20 | 2018-01-30 | 湖南百里目科技有限责任公司 | A kind of 2+1 distributed storages group system SSD read buffer accelerated methods |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113721845A (en) * | 2021-07-30 | 2021-11-30 | 苏州浪潮智能科技有限公司 | Volume processing method, system, equipment and computer readable storage medium |
CN113721845B (en) * | 2021-07-30 | 2023-08-25 | 苏州浪潮智能科技有限公司 | Volume processing method, system, equipment and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9891835B2 (en) | Live configurable storage | |
US10866905B2 (en) | Access parameter based multi-stream storage device access | |
US9430404B2 (en) | Thinly provisioned flash cache with shared storage pool | |
US20160132273A1 (en) | Tiered caching and migration in differing granularities | |
CN103246616B (en) | A kind of globally shared buffer replacing method of access frequency within long and short cycle | |
CN104317736B (en) | A kind of distributed file system multi-level buffer implementation method | |
CN106648464B (en) | Multi-node mixed block cache data reading and writing method and system based on cloud storage | |
US8090924B2 (en) | Method for the allocation of data on physical media by a file system which optimizes power consumption | |
WO2019085769A1 (en) | Tiered data storage and tiered query method and apparatus | |
CN114860163B (en) | Storage system, memory management method and management node | |
US8140811B2 (en) | Nonvolatile storage thresholding | |
KR20120120186A (en) | Efficient Use of Hybrid Media in Cache Architectures | |
CN106528451B (en) | The cloud storage frame and construction method prefetched for the L2 cache of small documents | |
US10380023B2 (en) | Optimizing the management of cache memory | |
US11093410B2 (en) | Cache management method, storage system and computer program product | |
CN110688062B (en) | Cache space management method and device | |
CN114817195A (en) | Method, system, storage medium and equipment for managing distributed storage cache | |
US9892054B2 (en) | Method and apparatus for monitoring system performance and dynamically updating memory sub-system settings using software to optimize performance and power consumption | |
US10802748B2 (en) | Cost-effective deployments of a PMEM-based DMO system | |
CN108664415B (en) | Shared replacement policy computer cache system and method | |
CN111352575A (en) | Read cache acceleration design of distributed storage cluster system | |
US10860498B2 (en) | Data processing system | |
CN107643875A (en) | A kind of 2+1 distributed storages group system SSD read buffer accelerated methods | |
US11372761B1 (en) | Dynamically adjusting partitioned SCM cache memory to maximize performance | |
WO2020024591A1 (en) | Cost-effective deployments of a pmem-based dmo system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200630 |