CN108459972B - Efficient cache management design method for multi-channel solid state disk - Google Patents


Info

Publication number
CN108459972B
CN108459972B (application CN201611140866.8A)
Authority
CN
China
Prior art keywords
data
cache
data cache
cold
hot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611140866.8A
Other languages
Chinese (zh)
Other versions
CN108459972A (en)
Inventor
杨明伟
李亚晖
谢建春
郭鹏
白林亭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Aeronautics Computing Technique Research Institute of AVIC
Original Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Aeronautics Computing Technique Research Institute of AVIC filed Critical Xian Aeronautics Computing Technique Research Institute of AVIC
Priority to CN201611140866.8A priority Critical patent/CN108459972B/en
Publication of CN108459972A publication Critical patent/CN108459972A/en
Application granted
Publication of CN108459972B publication Critical patent/CN108459972B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/123Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • G06F2212/1021Hit rate improvement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/20Employing a main memory using a specific memory technology
    • G06F2212/202Non-volatile memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/21Employing a record carrier using a specific recording technology
    • G06F2212/214Solid state disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention belongs to the field of airborne embedded computers and relates to an efficient cache management design method for a multi-channel solid state disk. The method comprises the following steps: step 1, servicing the read-request flow; step 2, identifying the cold/hot attribute of each write request, where the threshold separating cold from hot requests is determined by adding a basic threshold to the average size of recent requests; step 3, managing the cold data cache and the hot data cache internally, dynamically adjusting the write-back threshold on demand, and writing data back during chip idle periods; step 4, dynamically adjusting the cache sizes according to the performance and lifetime requirements of the cold and hot data caches. By exploiting the different update characteristics of different data, the method writes data in the data cache back ahead of time during idle periods of the multi-channel solid state disk's chips, and combines this with dynamic adjustment between the cold and hot data caches, thereby preserving the caching capability of the multi-channel solid state disk without incurring a large lifetime penalty.

Description

Efficient cache management design method for multi-channel solid state disk
Technical Field
The invention belongs to the field of airborne embedded computers and relates to an efficient cache management design method for a multi-channel solid state disk.
Background
With the rapid development of modern avionic systems, higher requirements are placed on the new generation of airborne storage equipment. Traditional mechanical hard disks have inherent limitations that make them difficult to deploy under airborne conditions, whereas flash memory offers a new option for airborne storage systems. As a new storage medium, flash memory has attracted wide attention; its low energy consumption, small size, light weight, and vibration resistance have led to its use in many kinds of embedded devices. By eliminating the mechanical overhead of the disk, a NAND-Flash-based solid-state drive (SSD) achieves shorter start-up time, faster random access, and lower energy consumption, making its storage performance superior to that of magnetic disks; it has become a strong replacement for the disk and is better suited to the airborne environment. A Flash Translation Layer (FTL) can mask some inherent structural defects of NAND Flash so that it can be used as conveniently as a conventional magnetic disk, and on top of the FTL, a cache can further improve the performance and endurance of the solid state disk.
The cache plays a crucial role in a solid state disk. First, the cache is generally split into two parts, a mapping cache and a data cache: the mapping cache stores the required mapping entries, while the data cache stores recently written request data. Second, the read/write speed of the data cache is far higher than that of Flash, so reading and writing through the cache can significantly improve the performance of the solid state disk. Finally, the data cache can absorb frequently updated write data and reduce the number of Flash writes, thereby extending the service life of the SSD. Most existing cache scheduling focuses on how to select write-back data and improve the cache hit rate when the cache is full, but the effect of this passive write-back mode on solid-state-disk performance is very unstable: when the request hit rate is low, the performance gain is very small and it is difficult to realize the performance the cache should deliver.
Disclosure of Invention
The purpose of the invention is as follows:
the invention provides a cache management design method facing a multi-channel solid state disk aiming at the requirement of an airborne embedded computer on read-write performance, and the method is characterized in that the performance is remarkably improved at the cost of a small amount of service life by utilizing the characteristics of different chip idle periods and different cache data updating frequencies of the multi-channel solid state disk.
The technical scheme of the invention is as follows:
a high-efficiency cache management design method of a multi-channel solid state disk comprises the following steps:
Step 1, when a read request reaches the multi-channel solid state disk, first judge whether it hits the data cache; if it does, the data cache serves it directly. If the data cache is missed, next judge whether the mapping entry for the data is in the mapping cache: if the entry is not in the mapping cache, the mapping entry must be read first and then the target data is read; if the entry is in the mapping cache, the data is read directly from the corresponding Flash location to complete the read operation.
Step 2, determine the threshold for a large request by adding a basic threshold to the average size of recent requests, which ensures that a large request both exceeds a base value and exceeds the average size of recent requests. Data whose size exceeds this threshold is classified as cold data; otherwise it is regarded as hot data.
Step 3, divide the data cache into a cold data cache and a hot data cache, which store the identified cold and hot data respectively. When a write request reaches the multi-channel solid state disk, first judge whether it hits the data cache; if it hits, the hit data cache serves it; if not, the data is added to the corresponding data cache according to its judged cold/hot attribute. After the service finishes, if a chip is idle, a certain amount of data at the tail nodes of the LRU linked lists of the cold and hot data caches is written back to Flash in advance, according to the write-back threshold.
Step 4, when data is written into a data cache and the free space of that cache is insufficient, the cache preempts free space from the other data cache so that the current data can still be served by the cache.
The invention has the advantages and effects that:
the method utilizes different characteristics of different data updating to write back the data in the data cache in advance in the idle period of the chip of the multi-channel solid state disk, and combines the data with the dynamic adjustment between the cold data cache and the hot data cache, thereby not only ensuring the cache service capability of the multi-channel solid state disk, but also not causing great service life loss.
Drawings
FIG. 1 is a flow diagram of cache management.
FIG. 2 is a flow chart of the internal service of the cold and hot data cache.
Fig. 3 is a schematic diagram illustrating dynamic adjustment of a hot and cold data buffer in embodiment 1.
Detailed Description
The present invention is described in further detail below.
A cache design method for a multi-channel solid state disk, as shown in fig. 1, includes:
Step 1, when a read request reaches the multi-channel solid state disk, first judge whether it hits the data cache; if it does, the data cache serves it directly. If the data cache is missed, next judge whether the mapping entry for the data is in the mapping cache: if the entry is not in the mapping cache, the mapping entry must be read first and then the target data is read; if the entry is in the mapping cache, the data is read directly from the corresponding Flash location to complete the read operation.
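The step-1 read path can be sketched as follows. This is an illustrative sketch only: the names (`Flash`, `serve_read`) and the dictionary-based caches are assumptions for demonstration, not part of the patent.

```python
class Flash:
    """Toy Flash model: a logical-to-physical mapping table plus a page array."""
    def __init__(self, mapping, pages):
        self.mapping = mapping  # lpn -> ppn, as stored on Flash
        self.pages = pages      # ppn -> page contents

    def read_mapping(self, lpn):
        return self.mapping[lpn]

    def read_page(self, ppn):
        return self.pages[ppn]


def serve_read(lpn, data_cache, mapping_cache, flash):
    """Serve a read for logical page `lpn` following step 1."""
    # Data-cache hit: the data cache serves the request directly.
    if lpn in data_cache:
        return data_cache[lpn]
    # Data-cache miss: check whether the mapping entry is cached.
    if lpn not in mapping_cache:
        # Mapping miss: read the mapping entry from Flash first ...
        mapping_cache[lpn] = flash.read_mapping(lpn)
    # ... then read the target data from the mapped Flash location.
    return flash.read_page(mapping_cache[lpn])
```

Note that a missed mapping entry costs one extra Flash read before the data itself can be fetched, which is why keeping hot mapping entries cached matters for read latency.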
Step 2, observation of loads collected on general-purpose computing systems shows that the locality of write-request accesses is closely related to request size: the update frequency drops significantly once a write request exceeds a certain size, such as 32 KB. In general, small requests are updated more frequently than large ones; however, request sizes are not fixed, and whether a request counts as large depends both on the sizes of recent requests and on the size of the request itself. The threshold for a large request is therefore determined by adding a fixed basic threshold to the average size of recent requests, which ensures both that a large request exceeds a base value and that its size clearly exceeds the recent average.
When the size of an incoming write request is judged, the relationship between the threshold s and the recent requests is:
s = (1/N) Σ_{i=1}^{N} n_i + Base
where N is the number of requests including the current one (64 in embodiment 1), n_i denotes the size of the ith request, and Base denotes a basic threshold that can be determined from statistics of the target system's historical data (32 in embodiment 1). Substituting the sizes of the recent requests into the formula yields the threshold s, which is compared with the size of the current request: if the current request is larger than s, it is a large request, i.e., cold data; otherwise it is a small request, i.e., hot data.
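The step-2 classification can be sketched as below. The window length N = 64 and Base = 32 follow embodiment 1; the function name and the sliding-window representation are assumptions for illustration.

```python
from collections import deque

N, BASE = 64, 32          # window length and basic threshold from embodiment 1
recent = deque(maxlen=N)  # sizes of the most recent requests, oldest dropped

def classify(request_size):
    """Classify a write request as 'cold' (large) or 'hot' (small).

    The threshold s is the average size of the recent requests
    (including the current one) plus the basic threshold Base.
    """
    recent.append(request_size)
    s = sum(recent) / len(recent) + BASE
    return "cold" if request_size > s else "hot"
```

Because s tracks the recent average, a request must be clearly larger than its neighbors, not just larger than a fixed constant, to be demoted to the cold data cache.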
Step 3, divide the data cache into a cold data cache and a hot data cache, which store the identified cold and hot data respectively. When a write request reaches the multi-channel solid state disk, first judge whether it hits the data cache; if it hits, the hit data cache serves it; if not, the data is added to the corresponding data cache according to its judged cold/hot attribute.
The cold data cache and the hot data cache use the same service strategy for data. Data nodes are of two kinds: a clean node indicates that the node's data has been written back to Flash, and a dirty node indicates that it has not. Taking the hot data cache as an example, the service policy of the cold and hot data caches is as follows. If a write request hits the hot data cache, the attribute of the hit data node is examined further. If the hit node is clean, too much data has been written back, so a corresponding value is subtracted from the write-back threshold, the node is placed at the head of the LRU linked list, and its state is changed to dirty; if a dirty node is hit, the node is placed directly at the head of the LRU linked list. If the hot data cache is missed and the write request must be written into it, the cache is checked for free space, which comprises nodes not yet holding data and clean nodes: if free space exists, the data is written directly into the hot data cache and the node is promoted to the head of the LRU queue; if not, free space must be taken from the other data cache, i.e., the cold data cache, as shown in fig. 3, and the write-back threshold is increased correspondingly. After the hot-data-cache service completes, whenever a chip of the multi-channel solid state disk is idle, a certain amount of data at the tail nodes of the LRU linked lists of the cold and hot data caches is written back to Flash in advance, according to the write-back threshold.
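A minimal sketch of this service policy is given below, assuming an LRU list of clean/dirty nodes and a write-back budget `wb`. The class name, the `OrderedDict` representation, and the return strings are illustrative assumptions; preemption from the cold cache is only signaled, not implemented here.

```python
from collections import OrderedDict

class HotCache:
    """Hot-data-cache sketch: LRU list with clean/dirty nodes and a
    write-back threshold `wb` used during chip idle periods."""

    def __init__(self, capacity, wb=2):
        self.lru = OrderedDict()  # lpn -> dirty flag; last item = MRU head
        self.capacity, self.wb = capacity, wb

    def write(self, lpn):
        if lpn in self.lru:                    # cache hit
            if not self.lru[lpn]:              # hit a clean node:
                self.wb = max(0, self.wb - 1)  # early write-back was excessive
            self.lru[lpn] = True               # node becomes dirty
            self.lru.move_to_end(lpn)          # promote to LRU head
            return "hit"
        if len(self.lru) < self.capacity:      # free space available
            self.lru[lpn] = True
            return "miss-inserted"
        return "miss-full"  # caller must preempt space from the cold cache

    def idle_writeback(self):
        """On chip idle, flush up to `wb` dirty nodes starting at the LRU tail."""
        written = 0
        for lpn in list(self.lru):  # iteration starts from the LRU tail
            if written == self.wb:
                break
            if self.lru[lpn]:
                self.lru[lpn] = False  # now clean: data flushed to Flash
                written += 1
        return written
```

Lowering `wb` on a clean-node hit is the feedback loop of step 3: a re-dirtied clean node means the idle-time flush ran ahead of the workload, so the next idle period writes back less.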
Step 4, the sizes of the cold and hot data caches within the write cache are not fixed; they change dynamically with each cache's demand for memory, striking a balance between speed and service life. On the one hand, the hot data cache represents Flash lifetime: the larger it is, the more update operations it can absorb and the longer Flash lasts. On the other hand, the larger the cold data cache, the more likely an incoming large request is to be served by the cache, and hence the faster the service.
In the hot data cache, when hot data misses the hot data cache and the hot data cache has no free space, the hot data cache takes free space from the cold data cache. As shown in fig. 3(a), in embodiment 1 data 15 is to be added to the hot data cache, but no free node exists there; the cold data cache deletes one of its own free nodes, and the space obtained is added to the hot data cache.
For the cold data cache, as with the hot data cache, when cold-data-cache space is insufficient, space is obtained from the hot data cache. However, there are two cases:
(1) if the hot data cache has free space, that space is directly added to the cold data cache to serve the incoming data, as shown in fig. 3(a);
(2) if the hot data cache has no free space and the write-back threshold WB of the cold data cache exceeds the cold data cache's size, then, so that future large requests can still be served by the cache, a dirty node is deleted from the hot data cache and the data stored in it is written back to Flash; the space obtained is added to the cold data cache to serve the incoming data, as shown in fig. 3(b).
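The two preemption cases of step 4 can be sketched as one decision function. The dictionary fields (`free`, `dirty`, `size`, `wb`) and return strings are illustrative assumptions standing in for real cache bookkeeping.

```python
def grab_space_for_cold(hot, cold):
    """Make one node available to the cold data cache (fig. 3 cases)."""
    if hot["free"] > 0:
        # Case (1): the hot cache has a free node; hand it to the cold cache.
        hot["free"] -= 1
        cold["size"] += 1
        return "moved-free-node"
    if cold["wb"] > cold["size"] and hot["dirty"] > 0:
        # Case (2): no free node, and the cold cache's write-back threshold
        # exceeds its size: flush one dirty hot node to Flash and donate
        # the freed node so future large requests can be cache-served.
        hot["dirty"] -= 1
        cold["size"] += 1
        return "flushed-dirty-node"
    return "no-space"
```

Case (2) is the only path that costs a Flash write, which is why it is gated on the write-back threshold already exceeding the cold cache's size: the lifetime cost is paid only when large requests are demonstrably being starved.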

Claims (2)

1. An efficient cache management design method for a multi-channel solid state disk, characterized by comprising the following steps:
step 1, when a read request reaches the multi-channel solid state disk, first judging whether it hits the data cache, and if so, serving it directly from the data cache; if the data cache is missed, next judging whether the mapping entry for the data is in the mapping cache; if the entry is not in the mapping cache, reading the mapping entry first and then reading the target data; if the entry is in the mapping cache, reading the data directly from the corresponding Flash location to complete the read operation;
step 2, determining the threshold for a large request by adding a basic threshold to the average size of recent requests, ensuring that a large request both exceeds a base value and exceeds the average size of recent requests; data whose size exceeds this threshold being classified as cold data, and otherwise as hot data;
step 3, dividing the data cache into a cold data cache and a hot data cache, which store the identified cold and hot data respectively; when a write request reaches the multi-channel solid state disk, first judging whether it hits the data cache; if it hits, the hit data cache serving it, and if not, adding the data to the corresponding data cache according to its judged cold/hot attribute; after the service finishes, if a chip is idle, writing a certain amount of data at the tail nodes of the LRU linked lists of the cold and hot data caches back to Flash in advance, according to the write-back threshold;
the cold data cache and the hot data cache using the same service strategy, with two kinds of data nodes: a clean node indicating that the node's data has been written back to Flash, and a dirty node indicating that it has not;
step 4, the sizes of the cold and hot data caches within the write cache not being fixed but changing dynamically with each cache's demand for memory, striking a balance between speed and service life;
when data is written into a data cache and the free space of that cache is insufficient, the cache preempting free space from the other data cache so that the current data can still be served by the cache.
2. The efficient cache management design method for a multi-channel solid state disk according to claim 1, wherein when the size of an incoming write request is judged, the relationship between the threshold s and the recent requests is:
s = (1/N) Σ_{i=1}^{N} n_i + Base
where N is the number of requests including the current one, n_i denotes the size of the ith request, and Base denotes a basic threshold, which can be determined from statistics of the target system's historical data.
CN201611140866.8A 2016-12-12 2016-12-12 Efficient cache management design method for multi-channel solid state disk Active CN108459972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611140866.8A CN108459972B (en) 2016-12-12 2016-12-12 Efficient cache management design method for multi-channel solid state disk

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611140866.8A CN108459972B (en) 2016-12-12 2016-12-12 Efficient cache management design method for multi-channel solid state disk

Publications (2)

Publication Number Publication Date
CN108459972A CN108459972A (en) 2018-08-28
CN108459972B true CN108459972B (en) 2022-03-15

Family

ID=63221809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611140866.8A Active CN108459972B (en) 2016-12-12 2016-12-12 Efficient cache management design method for multi-channel solid state disk

Country Status (1)

Country Link
CN (1) CN108459972B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109324761A (en) * 2018-10-09 2019-02-12 郑州云海信息技术有限公司 A kind of data cache method, device, equipment and storage medium
WO2021189203A1 (en) * 2020-03-23 2021-09-30 华为技术有限公司 Bandwidth equalization method and apparatus
CN112527194B (en) * 2020-12-04 2024-02-13 北京浪潮数据技术有限公司 Method, system and device for setting write amplification of solid state disk and readable storage medium
CN113342265B (en) * 2021-05-11 2023-11-24 中天恒星(上海)科技有限公司 Cache management method and device, processor and computer device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441597A (en) * 2007-11-22 2009-05-27 威刚科技股份有限公司 Adjustable mixed density memory storage device and control method thereof
CN103514106A (en) * 2012-06-20 2014-01-15 北京神州泰岳软件股份有限公司 Method for caching data
CN104166634A (en) * 2014-08-12 2014-11-26 华中科技大学 Management method of mapping table caches in solid-state disk system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6813705B2 (en) * 2000-02-09 2004-11-02 Hewlett-Packard Development Company, L.P. Memory disambiguation scheme for partially redundant load removal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101441597A (en) * 2007-11-22 2009-05-27 威刚科技股份有限公司 Adjustable mixed density memory storage device and control method thereof
CN103514106A (en) * 2012-06-20 2014-01-15 北京神州泰岳软件股份有限公司 Method for caching data
CN104166634A (en) * 2014-08-12 2014-11-26 华中科技大学 Management method of mapping table caches in solid-state disk system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hot Data-Aware FTL Based on Page-Level Address Mapping; Zhiguang Chen et al.; 2010 IEEE 12th International Conference on High Performance Computing and Communications (HPCC); IEEE; 2010-09-27; pp. 713-718 *
An Optimized Flash Address Mapping Method; Zhang Qi et al.; Journal of Software (软件学报); February 2014; Vol. 25, No. 2; pp. 315-324 *

Also Published As

Publication number Publication date
CN108459972A (en) 2018-08-28

Similar Documents

Publication Publication Date Title
US8949544B2 (en) Bypassing a cache when handling memory requests
CN107193646B (en) High-efficiency dynamic page scheduling method based on mixed main memory architecture
US9235508B2 (en) Buffer management strategies for flash-based storage systems
CN108459972B (en) Efficient cache management design method for multi-channel solid state disk
US9501420B2 (en) Cache optimization technique for large working data sets
CN102760101B (en) SSD-based (Solid State Disk) cache management method and system
JP6613375B2 (en) Profiling cache replacement
CN105389135B (en) A kind of solid-state disk inner buffer management method
CN109478165B (en) Method for selecting cache transfer strategy for prefetched data based on cache test area and processor
US20140115241A1 (en) Buffer management apparatus and method
CN104166634A (en) Management method of mapping table caches in solid-state disk system
CN104794064A (en) Cache management method based on region heat degree
KR101297442B1 (en) Nand flash memory including demand-based flash translation layer considering spatial locality
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
JP6711121B2 (en) Information processing apparatus, cache memory control method, and cache memory control program
JP6630449B2 (en) Replace cache entries based on entry availability in other caches
KR20160029086A (en) Data store and method of allocating data to the data store
WO2018004801A1 (en) Multi-level system memory with near memory scrubbing based on predicted far memory idle time
CN109388341A (en) A kind of system storage optimization method based on Device Mapper
CN102395957A (en) Cache and disk management method, and a controller using the method
CN109478164B (en) System and method for storing cache location information for cache entry transfer
WO2015072925A1 (en) Method for hot i/o selective placement and metadata replacement for non-volatile memory cache on hybrid drive or system
CN111506517B (en) Flash memory page level address mapping method and system based on access locality
CN109478163B (en) System and method for identifying a pending memory access request at a cache entry
KR101284465B1 (en) Page mapping scheme that supports secure file deletion for nand-based block device, and thereof recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant