CN109739696B - Double-control storage array solid state disk caching acceleration method - Google Patents
- Publication number: CN109739696B
- Application number: CN201811522575.4A
- Authority: CN (China)
- Prior art keywords: cache, index, storage, data, controller
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention relates to a cache acceleration method for solid state disks in a dual-controller storage array, in the technical field of computer storage. The invention provides a cache acceleration method for dual storage controllers: after one storage controller fails, data storage operations are taken over by the other controller, i.e., the cache system continues to work in the surviving controller, ensuring that the data storage service is not interrupted. The cache index trees in the two storage controllers are synchronized in real time and kept as mirror images of each other. When one storage controller fails, the other controller can seamlessly take over its storage services from the point of failure. The invention thus ensures both the efficiency and the reliability of SSD cache storage.
Description
Technical Field
The invention relates to the technical field of computer storage, and in particular to a cache acceleration method for solid state disks in a dual-controller storage array.
Background
The traditional mechanical hard disk (HDD) has the advantage of large capacity, but its performance is relatively low, especially for random I/O, which is often the performance bottleneck of the system. This is even more pronounced in virtualized environments, because virtualization scenarios aggravate I/O randomization. Solid state disks (SSDs) have the advantage of high performance compared with HDDs, especially for random I/O, but the hardware cost of SSDs is relatively high.
The industry has made some optimizations that combine the large capacity of the HDD with the high performance of the SSD; the basic idea is to use the SSD as a cache for the HDD. The SSD caches hot data, and when the storage controller reads or writes data that hits the cache, it reads or writes the data directly from the SSD. Cold data in the SSD cache can be flushed down to the HDD according to a certain policy. In this way, the capacity advantage of the HDD is retained while its effective performance is improved.
In a storage system with two storage controllers, the cache metadata held in a controller's memory cannot be flushed to the SSD often enough. Once the primary controller fails, the storage service switches to the secondary controller; because the cached metadata has not reached the secondary controller in time, the secondary controller cannot reconstruct the cache, and data is lost. Cache acceleration for dual controllers must therefore ensure that cached metadata is flushed down to the SSD in real time.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is how to provide a cache acceleration method for dual storage controllers such that, after one storage controller fails, data storage operations are taken over by the other controller (that is, the cache system continues to work in the surviving controller), guaranteeing that the data storage service is not interrupted.
(II) technical scheme
To solve this technical problem, the invention provides a cache acceleration method for solid state disks in a dual-controller storage array, comprising the following steps:
(1) creating the cache: initializing cache objects in the memories of the two storage controllers A and B, and establishing a cache index for each cache object, wherein each cache object corresponds to one cache index, the cache indexes are organized into a cache index tree, non-leaf nodes of the cache index tree store cache object index keys, and the position of the corresponding cache object on the HDD is found according to the keys;
the cache index records the mapping between SSD cache data and HDD data, including the HDD data block address, SSD data block address, data block size, cache number, and data access time;
every four cache indexes are organized into a cache group, and each cache group occupies a contiguous region of memory and corresponds to a unique cache group number;
one back-end device corresponds to one cache index tree, and the cache index tree is allocated unique cache group numbers;
(2) cache reading and writing: traversing the cache index; for a cache hit, updating the data access time in the cache index, propagating the update to the other controller's in-memory index tree, and reading or writing the data at the SSD data block address recorded in the index; for a cache miss, reading or writing the data on the HDD, requesting an available index from the in-memory index tree, and recording the data at the SSD data block address corresponding to that index;
(3) cache synchronization: synchronously sending the cache indexes in storage controller A's memory, via a queue, to storage controller B's memory, and synchronously sending the cache indexes of storage controller B, via a queue, to storage controller A;
(4) cache write-back: a cache write-back thread tracks the used cache space, which is the sum of the data sizes recorded in all cache indexes; when the used cache space rises above the high threshold, the write-back thread starts writing back the cache: it traverses the cache index tree, polling the cache groups, and writes the cache data corresponding to each cache index back to the corresponding location on the back-end logical volume; written-back cache indexes are set to the available state synchronously on both storage controllers; when the used cache space falls below the low threshold, write-back stops.
Preferably, in step (4), the cache data corresponding to each cache index is selected for write-back to the corresponding location of the back-end logical volume by an LRU algorithm.
Preferably, the cache index tree is organized as a B + tree.
(III) advantageous effects
The invention provides a cache acceleration method for dual storage controllers: after one storage controller fails, data storage operations are taken over by the other controller, i.e., the cache system continues to work in the surviving controller, ensuring that the data storage service is not interrupted. The cache index trees in the two storage controllers are synchronized in real time and kept as mirror images of each other. When one storage controller fails, the other controller can seamlessly take over its storage services from the point of failure. The invention thus ensures both the efficiency and the reliability of SSD cache storage. The method can be applied to dual-controller hybrid storage array products, in service scenarios where SSDs and HDDs are used together. It guarantees reliability and improves overall product performance while reducing the total cost of ownership.
Drawings
FIG. 1 is a diagram illustrating a cache index structure according to the present invention;
FIG. 2 is a schematic diagram of a cache index tree and a cache set structure according to the present invention;
FIG. 3 is a data flow diagram of the dual-control cache acceleration method of the present invention.
Detailed Description
To make the objects, content, and advantages of the present invention clearer, the embodiments of the present invention are described in detail below in conjunction with the accompanying drawings and examples.
As shown in FIGS. 1 to 3, the cache acceleration method for solid state disks in a dual-controller storage array provided by the present invention comprises the following steps:
(1) creating the cache: initializing cache objects in the memories of the two storage controllers A and B, and establishing a cache index for each cache object, wherein each cache object corresponds to one cache index, the cache indexes are organized into a cache index tree, non-leaf nodes of the cache index tree store cache object index keys, and the position of the corresponding cache object on the HDD is found according to the keys. The invention creates the following data structures:
Cache index: records the mapping between SSD cache data and HDD data, including the HDD data block address, SSD data block address, data block size, cache number, and data access time;
Cache group: every four cache indexes are organized into a cache group; each cache group occupies a contiguous region of memory and corresponds to a unique cache group number;
Cache index tree: each back-end device corresponds to one cache index tree; the tree is organized as a B+ tree and is allocated unique cache group numbers;
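The three data structures of step (1) can be sketched as follows. This is a minimal illustration only: all class, field, and method names are assumptions, a Python list stands in for the contiguous memory a cache group occupies, and a plain dict stands in for the B+ tree that the invention actually uses.

```python
from dataclasses import dataclass

@dataclass
class CacheIndex:
    """One SSD-to-HDD mapping entry; fields follow step (1)."""
    hdd_block_addr: int        # address of the data block on the HDD
    ssd_block_addr: int        # address of the cached copy on the SSD
    block_size: int            # size of the data block
    cache_number: int          # cache slot number
    access_time: float = 0.0   # last data access time
    in_use: bool = False       # whether this index currently maps cached data

class CacheGroup:
    """Four cache indexes under one unique group number; in the invention each
    group occupies one contiguous region of controller memory."""
    INDEXES_PER_GROUP = 4

    def __init__(self, group_number):
        self.group_number = group_number
        self.indexes = [
            CacheIndex(0, 0, 0,
                       cache_number=group_number * self.INDEXES_PER_GROUP + i)
            for i in range(self.INDEXES_PER_GROUP)
        ]

class CacheIndexTree:
    """One tree per back-end device. The patent organizes it as a B+ tree keyed
    by cache object index keys; a dict keyed by HDD block address stands in."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.groups = []            # cache groups owned by this tree
        self.by_hdd_addr = {}       # HDD block address -> CacheIndex
        self._next_group_number = 0

    def allocate_group(self):
        """Allocate a new cache group with the next unique group number."""
        group = CacheGroup(self._next_group_number)
        self._next_group_number += 1
        self.groups.append(group)
        return group
```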
(2) cache reading and writing: traversing the cache index; for a cache hit, updating the data access time in the cache index, propagating the update to the other controller's in-memory index tree, and reading or writing the data at the SSD data block address recorded in the index; for a cache miss, reading or writing the data on the HDD, requesting an available index from the in-memory index tree, and recording the data at the SSD data block address corresponding to that index;
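The hit and miss paths of step (2) can be sketched on the read side as follows, assuming dicts stand in for the index tree, the SSD, and the HDD; `free_slots` as a pool of available SSD block addresses is also an assumption. On a hit the access time is refreshed and mirrored to the peer controller's tree; on a miss the block is served from the HDD and staged into a free SSD slot.

```python
def read_block(index_tree, peer_tree, ssd, hdd, hdd_addr, free_slots, now):
    """Illustrative hit/miss path of step (2). index_tree and peer_tree map
    HDD block addresses to cache-index dicts; now is the current access time."""
    entry = index_tree.get(hdd_addr)
    if entry is not None:
        # Cache hit: refresh the access time, mirror the update to the other
        # controller's in-memory index tree, and read from the SSD.
        entry["access_time"] = now
        peer_tree[hdd_addr] = dict(entry)
        return ssd[entry["ssd_addr"]]
    # Cache miss: read from the HDD, then claim an available index and
    # record the data at the corresponding SSD block address.
    data = hdd[hdd_addr]
    if free_slots:
        slot = free_slots.pop()
        ssd[slot] = data
        entry = {"ssd_addr": slot, "size": len(data), "access_time": now}
        index_tree[hdd_addr] = entry
        peer_tree[hdd_addr] = dict(entry)   # keep the mirror consistent
    return data
```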
(3) cache synchronization: synchronously sending the cache indexes in storage controller A's memory, via a queue, to storage controller B's memory, and synchronously sending the cache indexes of storage controller B, via a queue, to storage controller A; in this way, the cache index trees in the memories of the two storage controllers remain consistent in real time;
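A minimal sketch of the queue-based synchronization in step (3), assuming a message of the form `(hdd_addr, entry)` where `entry` is `None` when the sender freed the index; a background thread on each controller drains its peer's queue and applies the updates to the local mirror tree. The names and message format are assumptions, not the patent's wire format.

```python
import threading
from queue import Queue

def start_sync(inbox: Queue, mirror_tree: dict, tree_lock: threading.Lock):
    """Apply cache-index updates from the peer controller to the local mirror
    of its cache index tree. A None message is a shutdown sentinel."""
    def worker():
        while True:
            msg = inbox.get()
            if msg is None:                       # shutdown sentinel
                break
            hdd_addr, entry = msg
            with tree_lock:
                if entry is None:
                    mirror_tree.pop(hdd_addr, None)   # index freed on the peer
                else:
                    mirror_tree[hdd_addr] = entry     # index created or updated
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

In a real dual-controller array the queue would ride on the inter-controller link; here an in-process `Queue` stands in for it.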
(4) cache write-back: write-back is performed by a dedicated cache write-back thread, which tracks the used cache space, i.e., the sum of the data sizes recorded in all cache indexes; when the used cache space rises above the high threshold, the write-back thread starts writing back the cache: it traverses the cache index tree, polling the cache groups, and writes the cache data corresponding to each cache index back to the corresponding location on the back-end logical volume according to an LRU algorithm; written-back cache indexes are set to the available state synchronously on both storage controllers; when the used cache space falls below the low threshold, write-back stops.
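The high/low-watermark write-back of step (4), with its LRU selection, can be sketched as follows. Dicts again stand in for the index tree, the SSD, and the back-end logical volume; the `on_freed` hook marks where a real implementation would synchronously set the index available on both controllers (step (3)). All names are assumptions.

```python
def write_back(index_tree, ssd, backend_volume, high_threshold, low_threshold,
               on_freed=lambda hdd_addr: None):
    """When used cache space exceeds the high watermark, flush entries in LRU
    order (oldest access_time first) to the back-end logical volume until
    usage drops below the low watermark. Returns the number flushed."""
    used = sum(e["size"] for e in index_tree.values())
    if used <= high_threshold:
        return 0
    flushed = 0
    # LRU order: least-recently-accessed entries are written back first.
    for hdd_addr in sorted(index_tree, key=lambda a: index_tree[a]["access_time"]):
        entry = index_tree.pop(hdd_addr)
        backend_volume[hdd_addr] = ssd.pop(entry["ssd_addr"])  # destage block
        on_freed(hdd_addr)        # mark the index available on both controllers
        used -= entry["size"]
        flushed += 1
        if used < low_threshold:
            break
    return flushed
```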
The above is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the invention, and such modifications and variations should also be regarded as falling within the protection scope of the invention.
Claims (5)
1. A cache acceleration method for solid state disks in a dual-controller storage array, characterized by comprising the following steps:
(1) creating the cache: initializing cache objects in the memories of the two storage controllers A and B, and establishing a cache index for each cache object, wherein each cache object corresponds to one cache index, the cache indexes are organized into a cache index tree, non-leaf nodes of the cache index tree store cache object index keys, and the position of the corresponding cache object on the HDD is found according to the keys;
the cache index records the mapping between SSD cache data and HDD data, including the HDD data block address, SSD data block address, data block size, cache number, and data access time;
every four cache indexes are organized into a cache group, and each cache group occupies a contiguous region of memory and corresponds to a unique cache group number;
one back-end device corresponds to one cache index tree, and the cache index tree is allocated unique cache group numbers;
(2) cache reading and writing: traversing the cache index; for a cache hit, updating the data access time in the cache index, propagating the update to the other controller's in-memory index tree, and reading or writing the data at the SSD data block address recorded in the index; for a cache miss, reading or writing the data on the HDD, requesting an available index from the in-memory index tree, and recording the data at the SSD data block address corresponding to that index;
(3) cache synchronization: synchronously sending the cache indexes in storage controller A's memory, via a queue, to storage controller B's memory, and synchronously sending the cache indexes of storage controller B, via a queue, to storage controller A;
(4) cache write-back: a cache write-back thread tracks the used cache space, which is the sum of the data sizes recorded in all cache indexes; when the used cache space rises above the high threshold, the write-back thread starts writing back the cache: it traverses the cache index tree, polling the cache groups, and writes the cache data corresponding to each cache index back to the corresponding location on the back-end logical volume; written-back cache indexes are set to the available state synchronously on both storage controllers; when the used cache space falls below the low threshold, write-back stops.
2. The method of claim 1, wherein in step (4) the cache data corresponding to each cache index is written back to the corresponding location of the back-end logical volume according to an LRU algorithm.
3. The method of claim 1, wherein the cache index tree is organized as a B + tree.
4. The method of claim 1, wherein the cache index trees in both storage controllers are in a mirrored state.
5. The method of claim 1, wherein the method is applied to a dual-storage-controller hybrid storage array product.
Priority Applications (1)
- CN201811522575.4A (CN109739696B), priority date 2018-12-13, filing date 2018-12-13: Double-control storage array solid state disk caching acceleration method
Publications (2)
- CN109739696A, published 2019-05-10
- CN109739696B, published 2022-05-13 (grant)
Family ID: 66358939
Family Applications (1)
- CN201811522575.4A (CN109739696B), filed 2018-12-13, status Active: Double-control storage array solid state disk caching acceleration method
Country Status (1)
- CN: CN109739696B
Families Citing this family (4)
- BR112022001182A2 (Huawei Technologies Co., Ltd.), priority 2019-07-22, published 2022-03-29: Method for improving the reliability of a storage system, and related apparatus
- CN111414321B (China Agricultural University), priority 2020-02-24, published 2022-07-15: Cache protection method and device based on dynamic mapping mechanism
- CN112181705B (Shanghai Qianzhan Innovation Research Institute Co., Ltd.), priority 2020-10-12, published 2023-02-03: Management storage control method based on multiple controllers and storage equipment
- CN112597079B (Shanghai Anlogic Information Technology Co., Ltd.), priority 2020-12-22, published 2023-10-17: Data write-back system of convolutional neural network accelerator
Citations (5)
- US6134634A (Texas Instruments Incorporated), priority 1996-12-20, published 2000-10-17: Method and apparatus for preemptive cache write-back
- CN102081584A (Inventec Corporation), priority 2009-11-30, published 2011-06-01: Cache mirror system and method of dual-controller storage system
- CN103092775A (Wuhan University), priority 2013-01-31, published 2013-05-08: Spatial data double cache method and mechanism based on key value structure
- CN107193767A (Beijing Institute of Computer Technology and Application), priority 2017-05-25, published 2017-09-22: Data transmission system for cache mirroring in a dual-controller storage system
- CN107301021A (Zhengzhou Yunhai Information Technology Co., Ltd.), priority 2017-06-22, published 2017-10-27: Method and apparatus for accelerating a LUN using an SSD cache
Family Cites Families (5)
- US8788505B2 (VeriSign, Inc.), priority 2011-04-27, published 2014-07-22: Systems and methods for a cache-sensitive index using partial keys
- CN102364474B (Institute of Computing Technology, Chinese Academy of Sciences), priority 2011-11-17, published 2014-08-20: Metadata storage system for cluster file system and metadata management method
- JP6188607B2 (Hitachi, Ltd.), priority 2014-03-10, published 2017-08-30: Index tree search method and computer
- US10108547B2 (NetApp, Inc.), priority 2016-01-06, published 2018-10-23: High performance and memory efficient metadata caching
- JP6376626B2 (Huawei Technologies Co., Ltd.), priority 2017-06-30, published 2018-08-22: Data storage method, data storage device, and storage device
Application timeline
- 2018-12-13: application CN201811522575.4A filed in China; granted as CN109739696B (status: Active)
Similar Documents
- CN109739696B: Double-control storage array solid state disk caching acceleration method
- JP6007329B2: Storage controller, storage device, storage system
- KR102556431B1: Solid state drive with heterogeneous nonvolatile memory types
- JP5823469B2: Apparatus and method for low power, low latency, large capacity storage class memory
- CN102521147B: Management method by using rapid non-volatile medium as cache
- US9292206B2: Method and apparatus for optimizing the performance of a storage system
- US7882304B2: System and method for efficient updates of sequential block storage
- US8095738B2: Differential caching mechanism based on media I/O speed
- WO2018019119A1: Method and device for dynamic partial-parallel data layout for continuous data storage
- WO2013175529A1: Storage system and storage control method for using storage area based on secondary storage as cache area
- CN105339910B: Virtual NAND capacity extensions in hybrid drive
- US20130219122A1: Multi-stage cache directory and variable cache-line size for tiered storage architectures
- US11042324B2: Managing a RAID group that uses storage devices of different types that provide different data storage characteristics
- CN1664794A: Expandable high speed storage network buffer system
- US10853252B2: Performance of read operations by coordinating read cache management and auto-tiering
- CN104750433A: Cache design method based on SCST
- US11157418B2: Prefetching data elements within a heterogeneous cache
- US20200341873A1: Data access method, apparatus and computer program product
- US11526447B1: Destaging multiple cache slots in a single back-end track in a RAID subsystem
- US6934803B2: Methods and structure for multi-drive mirroring in a resource constrained RAID controller
- US11182307B2: Demoting data elements from cache using ghost cache statistics
- US11550732B2: Calculating and adjusting ghost cache size based on data access frequency
- US20160371192A1: Apparatus and method for performing cache management in a storage system
- US10915262B2: Hybrid storage device partitions with storage tiers
- US10579541B2: Control device, storage system and method
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant