CN109739696A - Double-control storage array solid state disk cache acceleration method - Google Patents
Double-control storage array solid state disk cache acceleration method
- Publication number
- CN109739696A (application CN201811522575.4A)
- Authority
- CN
- China
- Prior art keywords
- caching
- index
- storage
- data
- write
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to a cache acceleration method for a dual-controller storage array with solid state disks, in the technical field of computer storage. The method provides cache acceleration across two storage controllers: after one storage controller fails, the other controller takes over data storage operations, i.e. the cache system continues to work in the surviving controller, so the data storage service is not interrupted. The cache index trees in the two storage controllers are synchronized in real time and kept as mirror images of each other. When one storage controller fails, the other can seamlessly take over the storage services of the failed controller from the point of failure. The invention ensures both the high performance and the high reliability of SSD cache storage.
Description
Technical field
The present invention relates to the technical field of computer storage, and in particular to a cache acceleration method for a dual-controller storage array with solid state disks.
Background technique
Traditional hard disk drives (HDDs) offer large capacity, but their performance is relatively low, especially for random I/O, which often becomes the system bottleneck. This is even more pronounced in virtualized environments, because virtualization tends to randomize I/O patterns. Compared with HDDs, solid state drives (SSDs) offer much higher performance, particularly for random I/O, but their hardware cost is relatively high.
To combine the large capacity of HDDs with the high performance of SSDs, the industry has explored several optimizations. The basic idea is to use the SSD as a cache for the HDD: hot data is cached on the SSD, and when the storage controller reads or writes data, a cache hit is served directly from the SSD cache. Cold data in the SSD cache is flushed down to the HDD according to some policy. This exploits the capacity advantage of the HDD while improving its effective performance.
In a storage system with two storage controllers, the cache metadata held in controller memory is not periodically flushed to the SSD. If the master controller fails and the storage service switches to the slave controller, the cache metadata has not been propagated to the slave controller, which therefore cannot reconstruct the cache, causing data loss. Hence, cache acceleration for a dual-controller system must guarantee that the cache metadata is flushed to the SSD in real time.
Summary of the invention
(1) Technical problems to be solved
The technical problem to be solved by the present invention is how to provide a cache acceleration method for two storage controllers, such that after one storage controller fails, the other controller takes over data storage operations, i.e. the cache system continues to work in the other controller, and the data storage service is not interrupted.
(2) Technical solution
To solve the above technical problems, the present invention provides a dual-controller storage array solid state disk cache acceleration method, comprising the following steps:
(1) Cache creation: initialize a cache object in the memory of each of the two storage controllers A and B, and build a cache index for each cache object. Each cache object corresponds to one cache index, and the cache indexes are organized into a cache index tree. The non-leaf nodes of the cache index tree store cache object index keys; these keys are used to locate the position of the corresponding cache object on the HDD;
The cache index records the mapping between cached SSD data and HDD data, including the HDD data block address, SSD data block address, data block size, cache number, and data access time;
Every four cache indexes are organized into a cache group; each cache group occupies one contiguous block of memory and corresponds to a unique cache group number;
Each back-end device corresponds to one cache index tree, and the cache index tree is assigned unique cache group numbers;
(2) Cache read/write: traverse the cache index tree. On a cache hit, update the data access time of the cache index, propagate the update to the index tree in the other controller's memory, and read or write the data at the SSD data block address recorded in the index. On a cache miss, read or write the data on the HDD, request an available index from the in-memory index tree, and record the data at the SSD data block address corresponding to that index;
(3) Cache synchronization: cache indexes in the memory of storage controller A are sent synchronously, via a queue, to the memory of storage controller B, and cache indexes of storage controller B are likewise sent synchronously, via a queue, to storage controller A;
(4) Cache write-back: performed by a cache write-back thread. The write-back thread tracks the cache space in use, defined as the sum of the data sizes of all cache indexes. When the cache space in use exceeds an upper threshold, the write-back thread starts writing back: it traverses the cache index tree, polls the cache groups on the tree, and writes the cache data of each cache index back to the corresponding position of the back-end logical volume. The cache indexes that have been written back are synchronously marked as available by the two storage controllers. When the cache space in use falls below a lower threshold, write-back stops.
Preferably, in step (4), the cache data corresponding to each cache index is written back to the corresponding position of the back-end logical volume according to an LRU algorithm.
Preferably, the cache index tree is organized as a B+ tree.
(3) Beneficial effects
The present invention provides a cache acceleration method for two storage controllers: after one storage controller fails, the other controller takes over data storage operations, i.e. the cache system continues to work in the other controller, and the data storage service is not interrupted. The cache index trees in the two storage controllers are synchronized in real time and kept in a mirrored state. When one storage controller fails, the other controller can seamlessly take over its storage services from the point of failure. The invention thus ensures both the efficiency and the high reliability of SSD cache storage. The method is applicable to business scenarios in dual-controller hybrid storage array products that use SSDs and HDDs together: it reduces total cost of ownership while guaranteeing reliability and improving overall product performance.
Brief description of the drawings
Fig. 1 is a schematic diagram of the cache index structure of the invention;
Fig. 2 is a schematic diagram of the cache index tree and cache group structure of the invention;
Fig. 3 is a data flow diagram of the dual-controller cache acceleration method of the invention.
Specific embodiment
To make the purpose, content, and advantages of the present invention clearer, specific embodiments of the invention are described in further detail below with reference to the accompanying drawings and examples.
As shown in Fig. 1 to Fig. 3, the dual-controller storage array solid state disk cache acceleration method proposed by the present invention comprises the following steps:
(1) Cache creation: initialize a cache object in the memory of each of the two storage controllers A and B, and build a cache index for each cache object. Each cache object corresponds to one cache index, and the cache indexes are organized into a cache index tree. The non-leaf nodes of the cache index tree store cache object index keys, which are used to locate the position of the corresponding cache object on the HDD. The invention creates the following data structures:
Cache index: records the mapping between cached SSD data and HDD data, including the HDD data block address, SSD data block address, data block size, cache number, and data access time;
Cache group: every four cache indexes are organized into a cache group; each cache group occupies one contiguous block of memory and corresponds to a unique cache group number;
Cache index tree: each back-end device corresponds to one cache index tree; the cache index tree is organized as a B+ tree and is assigned unique cache group numbers;
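The data structures of step (1) can be sketched as follows. This is a minimal illustrative model, not the patented implementation: all class and field names are assumptions, and a plain dict stands in for the B+ tree into which the patent organizes the cache indexes.

```python
from dataclasses import dataclass, field

GROUP_SIZE = 4  # the patent groups every four cache indexes into one cache group

@dataclass
class CacheIndex:
    """One cache index: the SSD<->HDD mapping of step (1). Field names are assumed."""
    hdd_block_addr: int
    ssd_block_addr: int
    block_size: int
    cache_number: int
    access_time: float = 0.0

@dataclass
class CacheGroup:
    """Four cache indexes backed by one contiguous memory region (modeled as a list)."""
    group_number: int
    indexes: list = field(default_factory=list)  # at most GROUP_SIZE CacheIndex objects

class CacheIndexTree:
    """Per-back-end-device index tree. A real implementation would be a B+ tree
    keyed by HDD block address; a dict stands in for it here."""
    def __init__(self):
        self.by_hdd_addr = {}   # hdd_block_addr -> CacheIndex
        self.groups = {}        # cache group number -> CacheGroup
        self._next_group = 0

    def insert(self, idx: CacheIndex):
        # allocate a new cache group (with a fresh unique number) when the current one is full
        if not self.groups or len(self.groups[self._next_group - 1].indexes) == GROUP_SIZE:
            self.groups[self._next_group] = CacheGroup(self._next_group)
            self._next_group += 1
        self.groups[self._next_group - 1].indexes.append(idx)
        self.by_hdd_addr[idx.hdd_block_addr] = idx

    def lookup(self, hdd_block_addr: int):
        return self.by_hdd_addr.get(hdd_block_addr)
```

A tree populated with five indexes ends up with two cache groups: the first full with four indexes, the second holding the remainder.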
(2) Cache read/write: traverse the cache index tree. On a cache hit, update the data access time of the cache index, propagate the update to the index tree in the other controller's memory, and read or write the data at the SSD data block address recorded in the index. On a cache miss, read or write the data on the HDD, request an available index from the in-memory index tree, and record the data at the SSD data block address corresponding to that index;
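The hit/miss logic of step (2) can be sketched roughly as below. This is a hedged illustration, not the patent's code: the index trees and block devices are modeled as plain dicts, and every name (`cache_read`, `free_indexes`, the `'ssd_addr'`/`'atime'` fields) is invented for the example.

```python
import time

def cache_read(index_tree, peer_tree, hdd_addr, ssd, hdd, free_indexes):
    """Read path of step (2). `index_tree`/`peer_tree` map an HDD block address
    to a cache-index dict; `ssd`/`hdd` are dicts standing in for the devices;
    `free_indexes` is a list of available SSD block addresses."""
    idx = index_tree.get(hdd_addr)
    if idx is not None:                          # cache hit
        idx['atime'] = time.time()               # refresh the data access time
        peer_tree[hdd_addr] = dict(idx)          # mirror the update to the peer's tree
        return ssd[idx['ssd_addr']]
    data = hdd[hdd_addr]                         # cache miss: read from the HDD
    ssd_addr = free_indexes.pop()                # request an available index
    ssd[ssd_addr] = data                         # record the data on the SSD block
    idx = {'ssd_addr': ssd_addr, 'atime': time.time()}
    index_tree[hdd_addr] = idx
    peer_tree[hdd_addr] = dict(idx)              # the new index is mirrored too
    return data
```

The first read of an address takes the miss path and installs a cache index; the second read of the same address is served from the SSD dict.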
(3) Cache synchronization: cache indexes in the memory of storage controller A are sent synchronously, via a queue, to the memory of storage controller B, and cache indexes of storage controller B are likewise sent synchronously, via a queue, to storage controller A. In this way, the cache index trees in the memories of the two storage controllers remain consistent in real time;
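Step (3)'s queue-based mirroring might look roughly like this in outline. The queue, worker thread, and names are illustrative assumptions: the patent mirrors indexes between two controllers' memories over an inter-controller link, whereas this sketch moves them between two dicts in one process.

```python
import queue
import threading

def start_mirror(sync_q, peer_tree, stop):
    """Drain a queue of (hdd_addr, cache-index) updates from one controller and
    apply them to the peer controller's in-memory tree, keeping the two trees
    consistent. Returns the worker thread."""
    def worker():
        # keep draining until asked to stop AND the queue is empty
        while not stop.is_set() or not sync_q.empty():
            try:
                hdd_addr, idx = sync_q.get(timeout=0.05)
            except queue.Empty:
                continue
            peer_tree[hdd_addr] = idx     # install/overwrite the mirrored index
            sync_q.task_done()
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

`Queue.join()` blocks until every queued update has been applied, which is what makes the mirroring synchronous from the sender's point of view.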
(4) Cache write-back: cache write-back is performed by a dedicated cache write-back thread. The write-back thread tracks the cache space in use, defined as the sum of the data sizes of all cache indexes. When the cache space in use exceeds an upper threshold, the write-back thread starts writing back: it traverses the cache index tree, polls the cache groups on the tree, and writes the cache data of each cache index back to the corresponding position of the back-end logical volume according to an LRU algorithm. The cache indexes that have been written back are synchronously marked as available in both storage controllers. When the cache space in use falls below a lower threshold, write-back stops.
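The threshold-driven LRU write-back of step (4) can be sketched as follows. The thresholds, field names, and dict-based models of the index tree, SSD, and back-end logical volume are all illustrative assumptions; "marking the index available" is modeled simply by removing it from the tree.

```python
def write_back(index_tree, ssd, backend, high, low):
    """When the cache space in use exceeds `high`, flush the least-recently-used
    cache blocks to the back-end logical volume until usage drops below `low`.
    `index_tree` maps hdd_addr -> {'ssd_addr', 'size', 'atime'}."""
    def used():
        # cache space in use = sum of data sizes over all cache indexes
        return sum(idx['size'] for idx in index_tree.values())

    if used() <= high:
        return
    # LRU order: oldest data access time first, as in the preferred embodiment
    victims = sorted(index_tree.items(), key=lambda kv: kv[1]['atime'])
    for hdd_addr, idx in victims:
        backend[hdd_addr] = ssd[idx['ssd_addr']]   # flush to the back-end volume
        del index_tree[hdd_addr]                   # mark the index available
        if used() < low:
            break
```

With three 10-unit blocks cached and thresholds `high=25`, `low=15`, the two least recently accessed blocks are flushed and the most recent one stays cached.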
The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and variations without departing from the technical principles of the invention, and such improvements and variations shall also be regarded as falling within the protection scope of the present invention.
Claims (5)
1. A dual-controller storage array solid state disk cache acceleration method, characterized in that it comprises the following steps:
(1) Cache creation: initializing a cache object in the memory of each of the two storage controllers A and B, and building a cache index for each cache object, wherein each cache object corresponds to one cache index, the cache indexes are organized into a cache index tree, the non-leaf nodes of the cache index tree store cache object index keys, and these keys are used to locate the position of the corresponding cache object on the HDD;
the cache index records the mapping between cached SSD data and HDD data, including the HDD data block address, SSD data block address, data block size, cache number, and data access time;
every four cache indexes are organized into a cache group, each cache group occupies one contiguous block of memory and corresponds to a unique cache group number;
each back-end device corresponds to one cache index tree, and the cache index tree is assigned unique cache group numbers;
(2) Cache read/write: traversing the cache index tree; for a cache hit, updating the data access time of the cache index, propagating the update to the index tree in the other controller's memory, and reading or writing the data at the SSD data block address of the index; for a cache miss, reading or writing the data on the HDD, requesting an available index from the in-memory index tree, and recording the data at the SSD data block address corresponding to the index;
(3) Cache synchronization: sending cache indexes in the memory of storage controller A synchronously, via a queue, to the memory of storage controller B, and sending cache indexes of storage controller B synchronously, via a queue, to storage controller A;
(4) Cache write-back: performed by a cache write-back thread, wherein the cache write-back thread tracks the cache space in use, the cache space in use being the sum of the data sizes of all cache indexes; when the cache space in use exceeds an upper threshold, the cache write-back thread starts write-back: it traverses the cache index tree, polls the cache groups on the tree, and writes the cache data of each cache index back to the corresponding position of the back-end logical volume; the cache indexes that have been written back are synchronously marked as available by the two storage controllers; when the cache space in use falls below a lower threshold, write-back stops.
2. The method according to claim 1, characterized in that, in step (4), the cache data corresponding to each cache index is written back to the corresponding position of the back-end logical volume according to an LRU algorithm.
3. The method according to claim 1, characterized in that the cache index tree is organized as a B+ tree.
4. The method according to claim 1, characterized in that the cache index trees in the two storage controllers are in a mirrored state.
5. The method according to claim 1, characterized in that the method is applied in dual-controller hybrid storage array products.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811522575.4A CN109739696B (en) | 2018-12-13 | 2018-12-13 | Double-control storage array solid state disk caching acceleration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109739696A true CN109739696A (en) | 2019-05-10 |
CN109739696B CN109739696B (en) | 2022-05-13 |
Family
ID=66358939
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811522575.4A Active CN109739696B (en) | 2018-12-13 | 2018-12-13 | Double-control storage array solid state disk caching acceleration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109739696B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6134634A (en) * | 1996-12-20 | 2000-10-17 | Texas Instruments Incorporated | Method and apparatus for preemptive cache write-back |
CN102081584A (en) * | 2009-11-30 | 2011-06-01 | 英业达股份有限公司 | Cache mirror system and method of dual-controller storage system |
CN102364474A (en) * | 2011-11-17 | 2012-02-29 | 中国科学院计算技术研究所 | Metadata storage system for cluster file system and metadata management method |
US20120278335A1 (en) * | 2011-04-27 | 2012-11-01 | Verisign, Inc. | Systems and Methods for a Cache-Sensitive Index Using Partial Keys |
CN103092775A (en) * | 2013-01-31 | 2013-05-08 | 武汉大学 | Spatial data double cache method and mechanism based on key value structure |
US20160203180A1 (en) * | 2014-03-10 | 2016-07-14 | Hitachi, Ltd. | Index tree search method and computer |
US20170192892A1 (en) * | 2016-01-06 | 2017-07-06 | Netapp, Inc. | High performance and memory efficient metadata caching |
CN107193767A (en) * | 2017-05-25 | 2017-09-22 | 北京计算机技术及应用研究所 | A kind of double controller storage system caches the data transmission system of mirror image |
CN107301021A (en) * | 2017-06-22 | 2017-10-27 | 郑州云海信息技术有限公司 | It is a kind of that the method and apparatus accelerated to LUN are cached using SSD |
JP2017208113A (en) * | 2017-06-30 | 2017-11-24 | ▲ホア▼▲ウェイ▼技術有限公司Huawei Technologies Co.,Ltd. | Data storage method, data storage apparatus, and storage device |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021012169A1 (en) * | 2019-07-22 | 2021-01-28 | 华为技术有限公司 | Method of improving reliability of storage system, and related apparatus |
CN111414321A (en) * | 2020-02-24 | 2020-07-14 | 中国农业大学 | Cache protection method and device based on dynamic mapping mechanism |
CN111414321B (en) * | 2020-02-24 | 2022-07-15 | 中国农业大学 | Cache protection method and device based on dynamic mapping mechanism |
CN112181705A (en) * | 2020-10-12 | 2021-01-05 | 上海前瞻创新研究院有限公司 | Management storage control method based on multiple controllers and storage equipment |
CN112181705B (en) * | 2020-10-12 | 2023-02-03 | 上海前瞻创新研究院有限公司 | Management storage control method based on multiple controllers and storage equipment |
CN112597079A (en) * | 2020-12-22 | 2021-04-02 | 上海安路信息科技有限公司 | Data write-back system of convolutional neural network accelerator |
CN112597079B (en) * | 2020-12-22 | 2023-10-17 | 上海安路信息科技股份有限公司 | Data write-back system of convolutional neural network accelerator |
Also Published As
Publication number | Publication date |
---|---|
CN109739696B (en) | 2022-05-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109739696A (en) | Double-control storage array solid state disk cache acceleration method | |
CN105574104B (en) | A log-structured storage system based on ObjectStore and its data writing method | |
US10241919B2 (en) | Data caching method and computer system | |
CN104350477B (en) | Optimized context drop for a solid state drive (SSD) | |
CN101604226B (en) | Method for building dynamic buffer pool to improve performance of storage system based on virtual RAID | |
CN110058822B (en) | Transverse expansion method for disk array | |
US20140006687A1 (en) | Data Cache Apparatus, Data Storage System and Method | |
CN106095342B (en) | Construction method and system for a shingled magnetic recording disk array with dynamically variable stripe length | |
JP2014516179A (en) | Program, system, and method for determining caching of data in a storage system having a cache | |
CN106503051A (en) | Greedy prefetching data recovery system and recovery method based on metadata classification | |
CN103345368B (en) | Data caching method in buffer storage | |
CN104765575A (en) | Information storage processing method | |
US20130219122A1 (en) | Multi-stage cache directory and variable cache-line size for tiered storage architectures | |
JP2018520420A (en) | Cache architecture and algorithm for hybrid object storage devices | |
JPWO2017149592A1 (en) | Storage device | |
US8484424B2 (en) | Storage system, control program and storage system control method | |
CN111488125B (en) | Cache Tier Cache optimization method based on Ceph cluster | |
CN104391653A (en) | Data block-based cache design method | |
CN102043593A (en) | Region-based management method for external cache of disk | |
CN106527987A (en) | Non-DRAM SSD master control reliability improving system and method | |
CN107133369A (en) | Distributed read shared cache aging method based on redis expired keys | |
CN105630413A (en) | Synchronized writeback method for disk data | |
US8732404B2 (en) | Method and apparatus for managing buffer cache to perform page replacement by using reference time information regarding time at which page is referred to | |
CN104182281A (en) | Method for implementing register caches of GPGPU (general purpose graphics processing units) | |
CN107506139A (en) | Write request optimization device for phase change memory | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||