CN116431076A - Data cache management device and method for a log-append file system flash memory - Google Patents


Info

Publication number
CN116431076A
CN116431076A
Authority
CN
China
Prior art keywords
flash memory
data
file
address
cache
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202310454571.1A
Other languages
Chinese (zh)
Inventor
贾刚勇
赵育淼
饶欢乐
任庆
王国坤
俞铭辉
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202310454571.1A
Publication of CN116431076A
Legal status: Pending

Classifications

    • G06F3/0656 — Data buffering arrangements
    • G06F3/0604 — Improving or facilitating administration, e.g. storage management
    • G06F3/0643 — Management of files
    • G06F3/0679 — Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a data cache management device and method for a log-append file system flash memory, belonging to the fields of flash memory devices, log file systems and cache data management. The device comprises a host device side and a flash memory device side. The host device side comprises a log-append file system (LFS) and an address association tracker; the flash memory device side comprises an address-association-aware cache manager, a flash translation layer and NAND flash memory chips. In the method, the address association tracker first constructs a fixed-length file tracking sequence, and the address-association-aware cache manager at the flash memory device side constructs an address mapping table. Next, the tracking targets of the address association tracker are selected. Finally, the LFS updates file blocks and write-back is performed until all data caching is complete. The invention improves the hit rate of the cache inside the flash memory device, improves system I/O efficiency, and prolongs the service life of SSD flash memory devices.

Description

Data cache management device and method for a log-append file system flash memory
Technical Field
The invention belongs to the fields of flash memory devices, log file systems and cache data management, and in particular relates to a data-association-aware cache management device and policy method for log-append file system flash memory devices.
Background
Data cache management for flash memory devices is a technique that improves system I/O efficiency by building a data cache between the flash storage device (SSD) and host memory. The technique is widely used in large server clusters, personal PCs, edge devices and similar settings. Cache management policies serve two main purposes: improving system I/O efficiency and prolonging the service life of the flash memory device. For I/O efficiency, the latency of accessing SSD storage is far higher than the latency of accessing the SSD cache, so data with a higher probability of access is kept in the cache for later use. For device lifetime, because some data is modified frequently and the number of writes a flash memory device can sustain is limited, the cache absorbs writes first and writes the data back to the SSD only once it is no longer being modified, reducing the number of SSD writes. In the big-data era, data volume grows exponentially, which poses a severe challenge to both system I/O efficiency and SSD lifetime.
Currently, most SSD cache management policies exploit the locality of the I/O load to improve the cache hit rate, i.e. they use per-block information such as access frequency and access time interval. However, none of the existing caching policies take into account the operational characteristics of the upper-level file system, especially the flash-friendly Log-structured File System (LFS). As a result, system I/O efficiency on LFS-based flash memory devices is low, and the life of the flash memory device is also greatly shortened. Specifically:
(1) Because the flash-friendly LFS uses out-of-place updates to generate sequential write I/O, and the host and flash device are isolated from each other, the SSD cache cannot promptly sense the out-of-place address changes of cached data; the locality of the I/O load is destroyed, so the SSD cache cannot detect that cached data has become invalid before garbage collection (GC). Excessive invalid data accumulates in the cache, formerly hot data cannot promptly replace the stale invalid data, and excessive I/O is written back to SSD storage, shortening its service life.
(2) Because excessive invalid data occupies the cache, the available cache space shrinks further, and most data cannot be loaded into the cache in advance in preparation for subsequent reads. In the end, most data I/O accesses read directly from SSD storage without going through the cache; the cache thus loses its purpose of improving I/O efficiency, resulting in low system I/O efficiency.
The quality of the SSD on-chip cache management policy is strongly correlated with the SSD's data access efficiency. Because existing SSD cache management policies do not consider the operational characteristics of the upper-level file system, especially a log-append file system, the SSD cache policy not only brings no improvement but can even be a negative optimization. Therefore, a cache management policy method oriented to log-append file system flash memory devices needs to be studied.
Disclosure of Invention
The invention aims to solve the problem that the SSD cache cannot promptly sense out-of-place data updates, and provides an association-aware cache (CAC) management device and policy method for a log-append file system flash memory.
A data cache management device for a log-append file system flash memory comprises a host device side and a flash memory device side. The host device side comprises a log-append file system and an address association tracker; the flash memory device side comprises an address-association-aware cache manager, a flash translation layer and NAND flash memory chips.
(1) Log-append file system (LFS)
The LFS runs in the memory of the host device and is the module responsible for maintaining and updating file data in memory. In the file system, a file is composed of a number of file blocks, and each file block is composed of several 4 KB data blocks. The defining characteristic of the LFS is that when it updates a file block, it does not update it at its original location (logical address: in a computer, each data block is assigned a unique logical address for read/write operations against external storage devices). Instead, the LFS assigns a new starting logical address to the file block, writes the file block data to that location, and sets the data at the original location as invalid. From this module, one can learn when and where a file is updated.
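The out-of-place update behavior described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class name, the simple bump allocator, and the block identifiers are assumptions.

```python
class LfsSketch:
    """Toy model of an LFS performing out-of-place file block updates."""

    def __init__(self):
        self.next_lba = 100          # next free starting logical address (assumed allocator)
        self.block_lba = {}          # file block id -> current starting LBA
        self.invalid = set()         # LBAs holding stale data, awaiting garbage collection

    def update_block(self, block_id, data):
        """Instead of overwriting in place, allocate a fresh starting LBA,
        record the new mapping, and mark the old location invalid."""
        old = self.block_lba.get(block_id)
        new = self.next_lba
        self.next_lba += 1
        self.block_lba[block_id] = new
        if old is not None:
            self.invalid.add(old)    # old copy becomes garbage for later GC
        return old, new              # the <LBA_pre, LBA_cur> pair


lfs = LfsSketch()
lfs.update_block("fileM/block3", b"v1")            # first write: no previous address
old, new = lfs.update_block("fileM/block3", b"v2")  # update moves the block
```

The returned `(old, new)` pair is exactly the before/after address information the address association tracker below needs to record.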
(2) Address association tracker
The address association tracker also runs in the memory of the host device; it is the module that records the replacement of each file block's starting address in every file, and it sits below the LFS. When the LFS performs a file block update, the address association tracker records the starting logical addresses of the file block before and after the update as address association data. Then, when the host side is about to write the file data back to the flash memory device, the address association tracker adds the file's address association tracking data to the data communication structure bio (the structure through which the host side and the flash memory device side communicate read/write updates) and transmits it along with the data.
(3) Address association aware cache manager
The address-association-aware cache manager runs in the cache of the flash memory device side; it is the module that parses the bio structure and manages the cache. When the flash memory device receives the bio structure transmitted from the host device, the address-association-aware cache manager immediately parses out the address association data in the bio and updates the related file block data in the cache. Later, when cached data is to be written back to the NAND flash memory chips, the updated data is sent to the flash translation layer according to its logical address.
(4) Flash translation layer
The flash translation layer runs at the flash memory device side and is the module that translates logical addresses into addresses inside the flash chips. On receiving a read/write command from the address-association-aware cache manager, the translation layer translates the logical address of the data passed down by the cache manager into the internal NAND flash chip address to be written back, and forwards the command to the corresponding flash chip controller.
(5) NAND flash memory chip
The NAND flash memory chips reside at the flash memory device side and are the modules that actually perform data reads and writes. On receiving a read/write request command, the flash chip controller reads or writes the corresponding data at the corresponding on-chip address.
The data cache management method for a log-append file system flash memory comprises the following steps:
Step 1: the address association tracker constructs a fixed-length file tracking sequence for recording the address association data of files in update order.
Step 2: the address-association-aware cache manager at the flash memory device side constructs an address mapping table recording the correspondence between file block data and addresses in the cache, where the file block data includes the starting logical address of the file block.
Step 3: obtain the number of accesses over the last m days for each file in the LFS, sort the files in descending order of access count, and take the top n files as the tracking targets of the address association tracker, where n, m > 0.
Step 4: the LFS updates file blocks, and the address association tracker stores the starting logical addresses of each file block before and after the update into the address association data of the corresponding file in the file tracking sequence.
Step 5: the LFS initiates a file block flash write-back request, and the address association tracker places the address association data of the file block into a private member of the data communication structure bio. The bio is then sent to the flash memory device.
Step 6: the flash memory device receives the bio, and the address-association-aware cache manager checks whether the address mapping table in the cache has a corresponding file block. If so, the corresponding file blocks in the cache are updated one by one in units of 4 KB data blocks; if not, the flash data block is written back directly: the data in the bio is passed straight to the flash translation layer, translated into an internal NAND flash chip address, and the whole write-back completes in the NAND flash memory chips.
Step 7: if the cache space is sufficient, the data is updated; when the cache space is insufficient, the address-association-aware cache management module writes part of the file block data back to the SSD (the specific flow is the same as the direct write-back in step 6) and removes the corresponding entries from the address mapping table, until the free space in the cache is sufficient for further processing of the data blocks.
Step 8: repeat steps 3 to 7 until all data caching is complete.
The invention has the beneficial effects that:
aiming at the local damage of the existing flash memory device of the file system facing to the log addition, the invention provides a data association sensing cache management device and a strategy method of the flash memory device of the file system facing to the log addition, which are caused by the reduction of the system I/O performance and the shortening of the service life of a flash memory device due to the fact that the cache strategy for caching the file block data according to the access frequency and the access time interval of the logical addresses of the file blocks cannot be adapted to scene characteristics. According to the method, the updated file block logical address of the asynchronous file page and the old file block logical address are downloaded into the cache of the flash memory device through the bio structure body, so that the cache senses the correlation of the logical addresses before and after the file block data, the old file block in the cache is replaced with a new file block in time, the cache hit rate of the flash memory device is improved, the I/O efficiency of the system is improved, and the service life of an SSD flash memory device is prolonged.
Drawings
FIG. 1 is a block diagram of a flash memory device;
FIG. 2 is a diagram of the overall framework of the patent;
FIG. 3 is a schematic diagram of an address association tracker;
FIG. 4 is a diagram of address association aware cache management;
FIG. 5 is a schematic flow chart of the method;
FIG. 6 is a comparison of cache hit rates for different cache policy approaches;
FIG. 7 is a comparison of I/O latency for different cache policy approaches;
FIG. 8 is a chart showing the comparison of the number of SSD dirty pages written back by different cache policy methods.
Detailed Description
The invention is further described below with reference to the accompanying drawings, and specific implementation steps are as follows.
The invention provides a data-association-aware cache management device and policy method for log-append file system flash memory devices. The target scenario is a flash memory device used under a log-append file system. Its basic structure is shown in FIG. 1: each flash controller interacts with flash chips over several channels; several flash dies are packaged in each chip, and each die consists of several storage matrices (planes). Each storage matrix contains a large number of physical blocks, and each physical block encapsulates a large number of physical pages. The unit of flash read/write operations is one physical page, and the unit of garbage collection is one physical block. As shown in FIG. 2, the device comprises a host device side and a flash memory device side, containing a log-append file system, an address association tracker, an address-association-aware cache manager, a flash translation layer and NAND flash memory chips.
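The channel/die/plane/block/page hierarchy described above can be captured in a small model. The concrete counts below are illustrative assumptions, not the experimental platform's actual geometry.

```python
# Sketch of the flash device hierarchy: controller -> channels -> chips
# -> dies -> planes -> blocks -> pages. All counts are example values.
GEOMETRY = {
    "channels": 8,
    "chips_per_channel": 1,
    "dies_per_chip": 2,
    "planes_per_die": 2,
    "blocks_per_plane": 1024,   # GC unit is one physical block
    "pages_per_block": 256,     # read/write unit is one physical page
    "page_size_bytes": 4096,
}


def total_capacity_bytes(g):
    """Multiply out the hierarchy to get raw capacity."""
    pages = 1
    for key in ("channels", "chips_per_channel", "dies_per_chip",
                "planes_per_die", "blocks_per_plane", "pages_per_block"):
        pages *= g[key]
    return pages * g["page_size_bytes"]


capacity = total_capacity_bytes(GEOMETRY)   # 32 GiB with the example counts
```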
(1) Log-append file system (LFS)
The LFS is the source that initiates file block updates and write-backs; it continuously reads and writes file data at the host device side, and when a write occurs it performs an out-of-place update. Here, the starting logical addresses before and after the out-of-place update are named LBA_pre and LBA_cur, respectively.
(2) Address association tracker
The address association tracker also runs in the memory of the host device; it records the replacement of each file block's starting address in every file, and it sits below the LFS. When the LFS performs a file block update, the address association tracker records the starting logical addresses of the file block before and after the update as address association data. As a concrete example, shown in FIG. 3, file block 3 of file M is updated: the file system reassigns logical address 103 to the file block, and the previously mapped logical address 102 is invalidated. The address association tracker module then saves the old and new logical addresses as <LBA_pre, LBA_cur>, i.e. the address association data <102, 103>. When the host side is about to write the file block, e.g. file block 3 of file M, back to the flash memory device, the address association tracker also adds the address association tracking data <102, 103> of file block 3 to a private member of the data communication structure bio and transfers it to the flash memory device side.
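As a rough sketch of the tracker's behavior (class and field names are assumptions; the dict stands in for the kernel's `struct bio` and its private member):

```python
class AddressAssociationTracker:
    """Records <LBA_pre, LBA_cur> pairs per tracked file and attaches
    them to write-back requests. Illustrative sketch only."""

    def __init__(self, max_files=64):
        self.max_files = max_files   # fixed-length file tracking sequence
        self.trace = {}              # file -> list of (lba_pre, lba_cur) pairs

    def on_block_update(self, file, lba_pre, lba_cur):
        # Evict the oldest tracked file to keep the sequence fixed-length.
        if file not in self.trace and len(self.trace) >= self.max_files:
            self.trace.pop(next(iter(self.trace)))
        self.trace.setdefault(file, []).append((lba_pre, lba_cur))

    def attach_to_bio(self, file, bio):
        # Copy this file's association data into the bio's private member
        # ("bi_private" is a stand-in name for illustration).
        bio["bi_private"] = list(self.trace.get(file, ()))


tracker = AddressAssociationTracker()
tracker.on_block_update("fileM", lba_pre=102, lba_cur=103)  # block 3 moved 102 -> 103
bio = {"data": b"..."}            # stand-in for the real bio structure
tracker.attach_to_bio("fileM", bio)
```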
(3) Address association aware cache manager
The address-association-aware cache manager runs in the cache of the flash memory device side; it is the module that parses the bio structure and manages the cache. When the flash memory device receives the bio structure transmitted from the host device, the address-association-aware cache manager immediately parses out the address association data <LBA_pre, LBA_cur> in the bio. It then traverses the address mapping table in the cache, searching for an entry bound to the logical address LBA_pre. If no such entry exists, i.e. a miss, the data is written back directly to the SSD according to the write-back request; if an entry is hit, its bound logical address is updated to LBA_cur and the corresponding file block data in the cache is updated in place. During the update, if the space is sufficient, no further action is needed; if the space is insufficient, the corresponding cache entries are written back according to the cache's original replacement policy. The specific process is shown in FIG. 4: when the write-back request arrives at the flash memory device, the associated address pair <LBA_pre, LBA_cur> is extracted immediately, the entries of the address mapping table are traversed to find whether an entry for LBA_pre exists, and the hit and miss paths above are then followed.
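The hit/miss logic above can be sketched as follows; the table layout and function names are assumptions for illustration, not the patent's code.

```python
def handle_writeback(mapping, cache, assoc, data, write_to_ssd):
    """Sketch of the address-association-aware lookup.
    mapping: logical_address -> cache_address; cache: cache_address -> data."""
    lba_pre, lba_cur = assoc
    if lba_pre in mapping:                 # hit: the old block is still cached
        cache_addr = mapping.pop(lba_pre)
        mapping[lba_cur] = cache_addr      # rebind the entry to the new address
        cache[cache_addr] = data           # update the cached block in place
        return "cache-updated"
    write_to_ssd(lba_cur, data)            # miss: bypass the cache, go to the FTL
    return "written-back"


mapping = {102: 7}                         # LBA 102 cached at cache slot 7
cache = {7: b"old"}
ssd = {}
r1 = handle_writeback(mapping, cache, (102, 103), b"new", ssd.__setitem__)
r2 = handle_writeback(mapping, cache, (555, 556), b"x", ssd.__setitem__)
```

The first call rebinds the cached entry from 102 to 103 and refreshes its data; the second call misses and writes straight through to the SSD stand-in.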
(4) Flash translation layer
On receiving a read/write command from the address-association-aware cache manager, the translation layer translates the logical address of the data passed down by the cache manager into the internal NAND flash chip address to be written back, and forwards the command to the corresponding flash chip controller.
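A minimal sketch of that translation step, assuming a flat logical-to-physical page map and a simple modulo channel assignment (both illustrative, not the patent's mapping scheme):

```python
def ftl_write(l2p, lba, next_free_page, n_channels=8):
    """Map the incoming logical address to a physical page and pick the
    owning chip controller. l2p: logical -> physical page map."""
    phys = next_free_page            # assumed allocator: next free physical page
    l2p[lba] = phys                  # record the logical -> physical mapping
    channel = phys % n_channels      # which channel/controller gets the command
    return phys, channel


l2p = {}
phys, channel = ftl_write(l2p, lba=103, next_free_page=42)
```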
(5) NAND flash memory chip
The NAND flash memory chips reside at the flash memory device side and are the modules that actually perform data reads and writes. On receiving a read/write request command, the flash chip controller reads or writes the corresponding data at the corresponding on-chip address.
A data-association-aware cache management method for log-append file system flash memory devices comprises the following steps:
Step 1: the address association tracker constructs a fixed-length file tracking sequence for recording the address association data of files in update order.
Step 2: the address-association-aware cache manager at the flash memory device side constructs an address mapping table recording the correspondence between file block data and addresses in the cache, i.e. <logical_address, cache_address> (logical address, starting address in the cache).
Step 3: obtain the number of accesses over the last m days for each file in the LFS, sort the files in descending order of access count, and take the top n files as the tracking targets of the address association tracker, where n and m are greater than 0.
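Step 3's selection of tracking targets can be sketched as a simple ranking; the access-count bookkeeping is assumed to happen elsewhere and is passed in as a dict here.

```python
def pick_tracking_targets(access_counts, n):
    """Rank files by access count over the last m days and take the top n.
    access_counts: file name -> number of accesses in the window."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return ranked[:n]


# Example: with n = 2, the two most-accessed files become tracking targets.
targets = pick_tracking_targets({"a": 5, "b": 12, "c": 9}, n=2)
```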
Step 4: the LFS updates the file block, and the address association tracker stores the start logical address of the file block before and after the update into the address association data < lba_pre, lba_cur > of the corresponding file in the file tracking sequence.
Step 5: the LFS initiates a file block flash write back request, and the address association tracker puts address association data < lba_pre, lba_cur > of the file block into a private member in a data communication structure bio between the ends, which is then sent to the flash memory device.
Step 6: the flash memory device receives the bio, and the cache manager of the address association awareness checks whether the address mapping table in the cache has a table entry of logical_address=lba_pre, and if so, updates the logical_address in the table entry to lba_cur. Updating the corresponding file blocks in the cache one by taking the size of the 4KB data block as a unit, and jumping to the step 7; if not, directly performing write-back of the flash memory data block, directly sending the data in the bio to a flash memory translation layer, translating the data into an internal address of the NAND flash memory chip, and completing the translation in the NAND flash memory chip.
Step 7: if the file block's original cache space can hold the updated data, the data block is updated in place directly; if the file block's original cache space is insufficient, the flash memory device determines whether the current cache has enough free space to accommodate the incremental portion. If the free space is sufficient, the in-place update plus space extension of the file block is performed directly; otherwise, step 8 is performed.
Step 8: the address-association-aware cache management module writes part of the file block data back to the SSD (the specific flow is the same as the direct write-back in step 6) and removes the corresponding entries from the address mapping table. Once the cache's free space can accommodate the incremental portion, the in-place update plus space extension of the file block is performed.
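The make-room behavior of steps 7 and 8 can be sketched as follows; the FIFO victim choice and byte accounting are simplifying assumptions (the patent defers to the cache's original replacement policy).

```python
def ensure_space(mapping, cache, free, need, write_to_ssd):
    """Evict cached file blocks until `free` >= `need` bytes.
    mapping: logical_address -> cache slot; cache: cache slot -> block data."""
    while free < need and mapping:
        lba, addr = next(iter(mapping.items()))  # victim choice is policy-dependent
        write_to_ssd(lba, cache.pop(addr))       # flush the victim to SSD storage
        del mapping[lba]                         # remove its address mapping entry
        free += 4096                             # one 4 KB block slot reclaimed
    return free


mapping = {1: 0, 2: 1}          # two cached blocks
cache = {0: b"a", 1: b"b"}      # (abbreviated block contents)
ssd = {}
free = ensure_space(mapping, cache, free=0, need=4096, write_to_ssd=ssd.__setitem__)
```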
Step 9: repeat steps 3 to 8 until all data caching is complete.
FIG. 5 is a schematic flow chart of the method.
Comparison of experimental results:
to verify the effect of the present invention, three aspects are compared with the existing method, namely SSD cache hit rate, I/O delay and dirty page write back times. The experimental platform selects 128GB SSD, 128MB cache, 4KB page size and 8 channels. The comparison experiment shows that different method combinations are selected in the design aspect, wherein the method comprises the following steps of EXT4 file system+LRU strategy, F2FS file system+LRU strategy+CAC (associated perceived cache management), EXT4 file system+ARC (adaptive cache replacement) strategy, F2FS file system+ARC strategy+CAC. The I/O load used in calculating the cache hit rate is an open source I/O load set from MSR Cambridge (MSRC) multi-issue.
SSD cache hit rate effect: as shown in fig. 6, the SSD cache hit rate of six combinations is compared. Wherein, the combination of F2FS and CAC strategies is improved by 8.1% to 23.1% compared with the combination hit rate which is not adopted.
System I/O latency effect: as shown in FIG. 7, of the six combined system I/O delays, F2FS employed the CAC strategy with 25.0% decrease in average delay compared to the strategy without CAC. Equivalent to improving the overall I/O efficiency of the system.
The SSD dirty page write-back frequency effect: as shown in FIG. 8, in the case of six combined SSD dirty pages write back, F2FS adopts the CAC strategy, and compared with other combination methods, the number of times of writing back SSD storage is obviously reduced, and the reduction rate can reach 97.8%. Therefore, the service life of the SSD device can be greatly prolonged.

Claims (5)

1. A data cache management device for a log-append file system flash memory, characterized by comprising a host device side and a flash memory device side;
the host device side comprises a log-append file system LFS and an address association tracker; the flash memory device side comprises an address-association-aware cache manager, a flash translation layer and NAND flash memory chips;
the log-append file system LFS runs in the memory of the host device side, is responsible for maintaining and updating file data in memory, and learns when and where files are updated;
the address association tracker runs in the memory of the host device side and records the replacement of each file block's starting address in every file; when the LFS performs a file block update, the address association tracker records the starting logical addresses before and after the out-of-place update of the file block as address association data; and when the host side is about to write the file data back to the flash memory device, the address association tracker adds the file's address association tracking data to the data communication structure bio sent back to the flash memory device and transmits it along with the data;
the address-association-aware cache manager runs in the cache of the flash memory device side, parses out the address association data in the bio, and updates the related file block data in the cache; when cached data is later written back to the NAND flash memory chips, the updated data is sent to the flash translation layer according to its logical address;
the flash translation layer runs at the flash memory device side, translates the logical addresses of data passed down by the address-association-aware cache manager into internal NAND flash chip addresses to be written back, and forwards the commands to the corresponding flash chip controllers;
the NAND flash memory chips reside at the flash memory device side and actually perform data reads and writes; on receiving a read/write request command, the flash chip controller reads or writes the corresponding data at the corresponding on-chip address.
2. The device according to claim 1, characterized in that in the log-append file system LFS, a file is composed of a plurality of file blocks, and each file block is composed of several data blocks of 4 KB in size.
3. The device according to claim 2, characterized in that in the log-append file system LFS, when a file block is updated, it is not updated at its original location; instead, a new starting logical address is assigned to the file block, the file block data is written to that location, and the file block data at the original location is set as invalid data.
4. A data cache management method for log-structured file system flash memory, using the data cache management apparatus according to any one of claims 1 to 3, comprising the steps of:
step 1: the address association tracker constructs a fixed-length file tracking sequence and records the address association data of the files in the sequence;
step 2: the address association aware cache manager on the flash memory device side constructs an address mapping table recording the correspondence between file block data and addresses in the cache, wherein the file block data includes the start logical address of the file block;
step 3: obtain the number of accesses to each file in the LFS over the past m days, sort the files in descending order of access count, and take the top n files as the tracking targets of the address association tracker, where n, m > 0;
step 4: when the LFS updates a file block, the address association tracker stores the start logical addresses of the file block before and after the update into the address association data of the corresponding file in the file tracking sequence;
step 5: the LFS initiates a flash write-back request for the file block; the address association tracker places the address association data of the file block into a private member of the data communication structure bio, and then sends the bio to the flash memory device;
step 6: the flash memory device receives the bio, and the address association aware cache manager checks whether the address mapping table in the cache contains the corresponding file blocks; if so, the corresponding file blocks in the cache are updated one by one in units of the data block size;
if not, the flash data block write-back is performed directly: the data in the bio is passed straight to the flash translation layer and translated into the internal address of the NAND flash memory chip;
step 7: if the cache space is sufficient, update the data blocks; when the cache space is insufficient, the address association aware cache manager writes part of the file block data back to the SSD on the flash memory storage device side and removes the corresponding entries from the address mapping table until the free space in the cache is sufficient, then updates the data blocks;
step 8: repeat steps 3 to 7 until all data caching is completed.
5. The method according to claim 4, wherein the write-back in step 6 and step 7 is performed inside the NAND flash memory chip.
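The last hop of the write-back path, the flash translation layer of claim 1, can be sketched as a logical-to-physical mapping. The round-robin page allocator and the `(chip, page)` addressing are placeholder assumptions; real FTLs add wear leveling and garbage collection, which are not modelled here.

```python
class FlashTranslationLayer:
    """Minimal FTL sketch: maps a logical address to a (chip, page)
    physical location and forwards the write to that chip's store.
    Page allocation is a plain sequential counter, a placeholder for
    a real allocation policy."""
    def __init__(self, n_chips, pages_per_chip):
        self.n_chips = n_chips
        self.pages_per_chip = pages_per_chip
        self.l2p = {}   # logical-to-physical mapping table
        self.next = 0   # next free physical page (sequential placeholder)

    def write(self, lba, data, chips):
        if self.next >= self.n_chips * self.pages_per_chip:
            raise RuntimeError("flash full (garbage collection not modelled)")
        phys = divmod(self.next, self.pages_per_chip)  # -> (chip, page)
        self.next += 1
        self.l2p[lba] = phys
        chip, page = phys
        chips[chip][page] = data  # hand off to that chip's controller
        return phys

chips = [dict() for _ in range(2)]  # per-chip page stores
ftl = FlashTranslationLayer(n_chips=2, pages_per_chip=4)
phys = ftl.write(240, b"blk", chips)
```

The cache manager only ever hands the FTL a logical address; the `l2p` table is what isolates the file system from the physical NAND layout.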
CN202310454571.1A 2023-04-25 2023-04-25 Data cache management device and method for journal additional type file system flash memory Pending CN116431076A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310454571.1A CN116431076A (en) 2023-04-25 2023-04-25 Data cache management device and method for journal additional type file system flash memory

Publications (1)

Publication Number Publication Date
CN116431076A true CN116431076A (en) 2023-07-14

Family

ID=87083172

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination