CN116560587A - Data management system and terminal equipment - Google Patents

Data management system and terminal equipment

Info

Publication number
CN116560587A
Authority
CN
China
Prior art keywords
data
disk
tlc
qlc
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310841724.8A
Other languages
Chinese (zh)
Other versions
CN116560587B (en)
Inventor
高山
何逍阳
林志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202310841724.8A priority Critical patent/CN116560587B/en
Publication of CN116560587A publication Critical patent/CN116560587A/en
Application granted granted Critical
Publication of CN116560587B publication Critical patent/CN116560587B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608 Saving storage space on storage systems
    • G06F3/061 Improving I/O performance
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • G06F3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0683 Plurality of storage devices
    • G06F3/0685 Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to a data management system and a terminal device. The system comprises: a tiered storage subsystem on a UFS device, configured to integrate a plurality of QLC media chips in the UFS device into a QLC disk and to integrate a plurality of TLC media chips in the UFS device into a TLC disk; and a hybrid file subsystem on a host, configured to write data into the TLC disk and to move data from the TLC disk to the QLC disk when the free space in the TLC disk is smaller than a preset threshold.

Description

Data management system and terminal equipment
Technical Field
The present disclosure relates to the technical field of data storage, and in particular to a data management system and a terminal device.
Background
In recent years, the functions of terminal devices have become increasingly rich and their performance has steadily improved, but this has also caused the amount of data on terminal devices to grow continuously, placing higher demands on their storage space. In the related art, a terminal device mainly uses a UFS (Universal Flash Storage) device to provide storage space and to manage the data in that space. At present, however, a terminal device cannot satisfy the performance and capacity requirements of the storage space at the same time, which limits the development of terminal devices to a certain extent.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide a data management system.
According to a first aspect of embodiments of the present disclosure, there is provided a data management system comprising:
a tiered storage subsystem on a UFS device, configured to integrate a plurality of QLC media chips in the UFS device into a QLC disk and to integrate a plurality of TLC media chips in the UFS device into a TLC disk;
and a hybrid file subsystem on a host, configured to write data into the TLC disk and to move data from the TLC disk to the QLC disk when the free space in the TLC disk is smaller than a preset threshold.
In one possible embodiment of the present disclosure, the TLC disk has a QLC cache area for caching data read from the QLC disk;
the hybrid file subsystem is further configured to move the requested data from the QLC disk into the QLC cache area and to read the data from the QLC cache area.
In one possible embodiment of the disclosure, the hybrid file subsystem is further configured to compress the data in the TLC disk when the free space in the TLC disk is smaller than a compression threshold;
the hybrid file subsystem is configured to move data from the TLC disk to the QLC disk when, after the data in the TLC disk has been compressed, the free space is still smaller than the preset threshold;
the hybrid file subsystem is further configured to perform garbage collection on the data in the QLC disk when the free space in the QLC disk is smaller than the compression threshold.
In one possible embodiment of the present disclosure, the tiered storage subsystem includes:
a storage resource pool, configured to integrate a plurality of QLC media chips in the UFS device into a QLC disk and a plurality of TLC media chips in the UFS device into a TLC disk;
a space management layer, configured to divide the QLC disk into a plurality of QLC partitions, divide the TLC disk into a plurality of TLC partitions, and allocate a logic unit to each QLC partition and each TLC partition;
and an address index layer, configured to store the mapping relationship between the logical address of a data page in a logic unit and the physical address of the data page in the QLC partition or the TLC partition.
In one possible embodiment of the disclosure, the space management layer is further configured to divide the TLC disk into a metadata space and a TLC data space, and use the QLC disk as a QLC data space, where the metadata space includes at least one TLC partition, the TLC data space includes at least one TLC partition, and the QLC data space includes a plurality of QLC partitions.
In one possible embodiment of the disclosure, the hybrid file subsystem includes a performance data area corresponding to the TLC disk and a capacity data area corresponding to the QLC disk;
the hybrid file subsystem is further configured to generate a write request in the performance data area and send the write request to the space management layer, so that the space management layer writes data into the TLC disk;
the hybrid file subsystem is further configured to generate a read request in the performance data area and send the read request to the space management layer, so that the space management layer reads data from the TLC disk;
the hybrid file subsystem is further configured to generate a data movement request in the performance data area and send the movement request to the space management layer, so that the space management layer moves data from the TLC disk to the QLC disk.
In one possible embodiment of the disclosure, the space management layer is configured to send the write request or the read request to the address index layer, so that the address index layer sends the write request or the read request to the storage resource pool;
the storage resource pool is configured to, after receiving the write request, write data into the cache page in the DRAM targeted by the write request, or, after receiving the read request, read data from the cache page in the DRAM targeted by the read request;
and when the number of cache pages in the DRAM reaches a number threshold, at least one cache page is moved to the TLC disk.
In one possible embodiment of the disclosure, the storage resource pool is further configured to obtain a data page from the TLC disk after receiving the write request, and store the data page as a cache page for the write request in the DRAM.
In one possible embodiment of the disclosure, the storage resource pool is further configured to, after receiving the read request, search the DRAM for a cache page targeted by the read request, and replace any cache page in the DRAM with a corresponding data page in the TLC disk when the cache page targeted by the read request does not exist in the DRAM.
In one possible embodiment of the disclosure, the write request carries the logical address of the targeted data page, and the storage resource pool is further configured to send an update message to the address index layer, where the update message is used to update the physical address corresponding to the data page targeted by the write request;
the read request carries the logical address of the targeted data page, and, when searching the DRAM for the cache page targeted by the read request, the storage resource pool is specifically configured to:
search the DRAM for the cache page targeted by the read request according to the logical address in the read request and the mapping relationship, stored in the address index layer, between the logical addresses and physical addresses of data pages.
In one possible embodiment of the present disclosure, the hybrid file subsystem includes:
a scanning layer, configured to acquire the logic units in the space management layer;
and a data management layer, configured to generate a write request, a read request, or a data movement request in the performance data area.
According to a second aspect of an embodiment of the present disclosure, there is provided a terminal device, including the data management system of the first aspect, where a UFS device is configured in the terminal device.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
The data management system provided by the embodiments of the present disclosure comprises a tiered storage subsystem on the UFS device and a hybrid file subsystem on the host. The UFS device contains a plurality of QLC media chips and a plurality of TLC media chips; the tiered storage subsystem integrates the QLC media chips into a QLC disk and the TLC media chips into a TLC disk, and the hybrid file subsystem writes data into the TLC disk and moves data from the TLC disk to the QLC disk when the free space in the TLC disk is smaller than a preset threshold. In other words, the UFS device internally has a QLC disk and a TLC disk, where the TLC chips forming the TLC disk offer better performance than the QLC chips forming the QLC disk, but smaller capacity. The hybrid file subsystem exploits the advantages of both: write operations target the TLC disk, which guarantees write performance, and data in the TLC disk is later migrated to the QLC disk for storage, which exploits the large capacity of the QLC disk. As a whole, the storage space is expanded while the storage performance is preserved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a block diagram of a data management system according to an exemplary embodiment of the present disclosure.
FIG. 2 is a block diagram of a hierarchical storage subsystem according to an exemplary embodiment of the present disclosure.
FIG. 3 is a schematic diagram illustrating a relationship between data spaces within a tiered storage subsystem in accordance with an exemplary embodiment of the present disclosure.
FIG. 4 is a block diagram of a hybrid file subsystem shown in an exemplary embodiment of the present disclosure.
Fig. 5 is a schematic diagram showing a relationship between a performance data area and a capacity data area according to an exemplary embodiment of the present disclosure.
Fig. 6 is a schematic diagram illustrating a process of transferring data in a performance data area to a capacity data area after compression according to an exemplary embodiment of the present disclosure.
FIG. 7 is a schematic diagram illustrating the operation of a data management system according to an exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram of a terminal device according to an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted, depending on the context, as "when", "upon", or "in response to determining".
In recent years, the functions of terminal devices have become increasingly rich and their performance has steadily improved, but this has also caused the amount of data on terminal devices to grow continuously, placing higher demands on their storage space. In the related art, a terminal device mainly uses a UFS (Universal Flash Storage) device to provide storage space and to manage the data in that space. At present, however, a terminal device cannot satisfy the performance and capacity requirements of the storage space at the same time, which limits the development of terminal devices to a certain extent.
Based on this, at least one embodiment of the present disclosure provides a data management system. The data management system may be applied in a terminal device such as a mobile phone, a tablet computer, or a wearable device, and is used to manage data in the terminal device, for example data writing and data reading. The data management system can increase the usable storage space of the terminal device without reducing the performance (such as speed and accuracy) of data processing in the terminal device.
Referring to fig. 1, an exemplary architecture of the data management system is shown, including: a tiered storage subsystem (Tier Flash Storage, TFS) on the UFS device, configured to integrate a plurality of QLC media chips within the UFS device into a QLC disk and a plurality of TLC media chips within the UFS device into a TLC disk; and a hybrid file subsystem (Hybrid Flash File System, HFFS) on the host, configured to write data into the TLC disk and to move data from the TLC disk to the QLC disk when the free space in the TLC disk is smaller than a preset threshold.
The host may be the terminal device targeted by the data management system, and the UFS device may be a storage device installed in the host; the data management system is intended to manage the data in the UFS device. UFS is a standard specification for flash storage in consumer electronics, aimed at bringing higher data bandwidth and higher reliability to flash storage while providing a unified package specification. The UFS device internally contains a plurality of TLC media chips (3D TLC NAND) and a plurality of QLC media chips (3D QLC NAND). The TLC and QLC media chips are the basic units for reading and writing data, and each internally comprises floating gate transistors for storing data, a control circuit, and a data cache. The ratio of TLC media chips to QLC media chips in the UFS device may be preset, for example 1:2 or 1:3, to balance performance and capacity. The TLC media chip is the storage medium used by conventional UFS devices; the high-density characteristic of the QLC media chip gives it a larger capacity than the TLC media chip, but its performance is inferior. By combining TLC and QLC media chips, the UFS device provides a high-capacity, high-performance storage scheme for users without reducing the storage performance of the terminal device.
Illustratively, the TLC disk has a QLC cache area for caching data read from the QLC disk; the hybrid file subsystem is further configured to move the requested data from the QLC disk into the QLC cache area and to read it from the QLC cache area. That is, the TLC disk provides a cache space for reads from the QLC disk; owing to the better performance of the TLC disk, this improves the efficiency of reading data stored on the QLC disk.
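To make this read path concrete, here is a minimal sketch in Python that models the two disks as in-memory page stores and the QLC cache area as a dictionary held on the TLC side; the class and method names (QlcReadCache, read_qlc and so on) are illustrative assumptions, not interfaces defined by the disclosure.

class QlcReadCache:
    """Illustrative QLC read cache hosted in a reserved area of the TLC disk."""

    def __init__(self, tlc_pages, qlc_pages):
        self.tlc_pages = tlc_pages      # fast tier: page_no -> bytes
        self.qlc_pages = qlc_pages      # capacity tier: page_no -> bytes
        self.cache_area = {}            # QLC cache area kept on the TLC disk

    def read_qlc(self, page_no):
        # First hit the cache area on the TLC disk; on a miss, copy the page
        # up from the QLC disk so that later reads are served at TLC speed.
        if page_no not in self.cache_area:
            self.cache_area[page_no] = self.qlc_pages[page_no]
        return self.cache_area[page_no]


if __name__ == "__main__":
    cache = QlcReadCache(tlc_pages={}, qlc_pages={7: b"cold data"})
    print(cache.read_qlc(7))   # first read: copied from QLC into the TLC cache area
    print(cache.read_qlc(7))   # second read: served from the TLC cache area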
Illustratively, the hybrid file subsystem is further configured to compress the data in the TLC disk (i.e., remove invalid data and concentrate valid data, thereby reducing space occupation) when the free space in the TLC disk is smaller than a compression threshold, and to stop the compression when the free space in the TLC disk is larger than a compression-stop threshold; the hybrid file subsystem is configured to move data from the TLC disk to the QLC disk when, after the data in the TLC disk has been compressed, the free space is still smaller than the preset threshold, for example by using an LRU algorithm to move the least recently updated data in the TLC disk to the QLC disk; the hybrid file subsystem is further configured to perform garbage collection on the data in the QLC disk (i.e., remove invalid data and concentrate valid data, thereby freeing disk space) when the free space in the QLC disk is smaller than the compression threshold, and to stop the garbage collection when the free space in the QLC disk is larger than the compression-stop threshold.
That is, in addition to writing data into the TLC disk of the UFS device, the hybrid file subsystem also monitors the free space in the TLC disk and compresses the data when the free space becomes small, freeing more storage space while retaining the valid data. When the storage space in the TLC disk can no longer be freed by compression, the hybrid file subsystem migrates data from the TLC disk to the QLC disk, so that the TLC disk always keeps a certain amount of free space and can continue to provide a high-performance write service, while the QLC disk stores the historical data written to the TLC disk using its larger capacity. Later, when the free space in the QLC disk becomes small, its data is garbage-collected, freeing more storage space while retaining the valid data. Under the management of the hybrid file subsystem, new data is written to the TLC disk, which guarantees write performance for new data; once the TLC disk reaches a certain occupancy, the data is compressed and moved to the QLC disk, and the space of the TLC disk is released, so that high performance and large capacity are guaranteed as a whole.
In order not to exhaust the storage space in the QLC disk, the compression speed of the TLC disk and the garbage-collection speed of the QLC disk need to be matched to a certain extent, so that the space-reclamation speed of the QLC disk remains greater than the space-consumption speed of the QLC disk.
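The space-management policy described above can be condensed into a single routine, sketched below under the assumption that thresholds are expressed as fractions of the total capacity; the threshold values and the helper callables (compress_tlc, migrate_lru_to_qlc, gc_qlc) are placeholders for illustration only.

def manage_free_space(tlc_free, qlc_free, total,
                      compress_threshold=0.10, stop_threshold=0.25, move_threshold=0.05,
                      compress_tlc=None, migrate_lru_to_qlc=None, gc_qlc=None):
    """Illustrative space-management policy of the hybrid file subsystem.

    Fractions of `total` are used as thresholds; the real system may use byte counts.
    """
    actions = []
    # 1. Compress TLC data when its free space drops below the compression threshold.
    if tlc_free < compress_threshold * total:
        actions.append("compress_tlc")
        if compress_tlc:
            tlc_free = compress_tlc()
    # 2. If, after compression, free space is still below the preset threshold,
    #    migrate the least recently updated data from the TLC disk to the QLC disk.
    if tlc_free < move_threshold * total:
        actions.append("migrate_lru_to_qlc")
        if migrate_lru_to_qlc:
            tlc_free = migrate_lru_to_qlc()
    # 3. Garbage-collect the QLC disk when its own free space runs low,
    #    stopping once it climbs back above the stop threshold.
    if qlc_free < compress_threshold * total:
        actions.append("gc_qlc")
        if gc_qlc:
            qlc_free = gc_qlc(stop_at=stop_threshold * total)
    return actions


print(manage_free_space(tlc_free=2, qlc_free=30, total=100))   # ['compress_tlc', 'migrate_lru_to_qlc']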
The data management system provided by the embodiments of the present disclosure comprises a tiered storage subsystem on the UFS device and a hybrid file subsystem on the host. The UFS device contains a plurality of QLC media chips and a plurality of TLC media chips; the tiered storage subsystem integrates the QLC media chips into a QLC disk and the TLC media chips into a TLC disk, and the hybrid file subsystem writes data into the TLC disk and moves data from the TLC disk to the QLC disk when the free space in the TLC disk is smaller than a preset threshold. In other words, the UFS device internally has a QLC disk and a TLC disk, where the TLC chips forming the TLC disk offer better performance than the QLC chips forming the QLC disk, but smaller capacity. The hybrid file subsystem exploits the advantages of both: write operations target the TLC disk, which guarantees write performance, and data in the TLC disk is later migrated to the QLC disk for storage, which exploits the large capacity of the QLC disk. As a whole, the storage space is expanded while the storage performance is preserved.
Referring to fig. 2, in some embodiments of the present disclosure, the tiered storage subsystem includes: a storage resource pool (Storage Pool), configured to integrate a plurality of QLC media chips in the UFS device into a QLC disk and a plurality of TLC media chips in the UFS device into a TLC disk; a space management layer (Space), configured to divide the QLC disk into a plurality of QLC partitions, divide the TLC disk into a plurality of TLC partitions, and allocate a logic unit to each QLC partition and each TLC partition; and an address index layer (FTL), configured to store the mapping relationship between the logical address of a data page in a logic unit and the physical address of the data page in the QLC partition or the TLC partition.
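As a rough illustration of the address index layer, the following sketch keeps a per-logic-unit mapping from logical page addresses to physical addresses; representing a physical address as a (disk, partition, page) tuple is an assumption made for the example, not the layout used by the disclosure.

from typing import Dict, Tuple

# Physical address assumed, for illustration only, to be (disk, partition_no, page_no),
# where disk is "tlc" or "qlc".
PhysAddr = Tuple[str, int, int]

class AddressIndexLayer:
    """Illustrative FTL-style map from logical page addresses to physical addresses."""

    def __init__(self):
        self._map: Dict[Tuple[int, int], PhysAddr] = {}   # (lun_id, logical_page) -> physical

    def update(self, lun_id: int, logical_page: int, phys: PhysAddr) -> None:
        # Called, for example, when the storage resource pool reports where a written page landed.
        self._map[(lun_id, logical_page)] = phys

    def lookup(self, lun_id: int, logical_page: int) -> PhysAddr:
        return self._map[(lun_id, logical_page)]


ftl = AddressIndexLayer()
ftl.update(lun_id=0, logical_page=42, phys=("tlc", 1, 7))
print(ftl.lookup(0, 42))   # ('tlc', 1, 7)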
The tiered storage subsystem interacts with the TLC media chips (TLC Targets) and QLC media chips (QLC Targets) through ONFI, the open NAND interface, for operations such as data reading, writing and management.
The address index layer can record the size of the free space in the TLC disk and in the QLC disk, and start and control data compression in the TLC disk and garbage collection in the QLC disk according to these sizes. The address index layer can also monitor the host traffic during garbage collection; if the garbage-collection traffic is smaller than the host traffic, back-pressure can be applied to the host, which ensures that the space-reclamation speed stays greater than the system-consumption speed and prevents the free space from being exhausted.
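A toy version of this back-pressure rule is shown below; expressing traffic in MB/s and clamping the throttle factor at 0.1 are assumptions made for the illustration.

def host_throttle_factor(reclaim_rate_mb_s: float, host_write_rate_mb_s: float) -> float:
    """Illustrative back-pressure rule: slow the host down whenever garbage
    collection reclaims space more slowly than the host consumes it."""
    if reclaim_rate_mb_s >= host_write_rate_mb_s:
        return 1.0                                  # no throttling needed
    # Scale host writes so that consumption does not outrun reclamation.
    return max(reclaim_rate_mb_s / host_write_rate_mb_s, 0.1)


print(host_throttle_factor(120.0, 100.0))   # 1.0  -> host runs at full speed
print(host_throttle_factor(50.0, 100.0))    # 0.5  -> host limited to half its write rate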
The storage resource pool comprises the following modules:
and the discovery module is used for discovering and initializing the media chip.
The pdisk module is used for integrating a plurality of TLC medium chips into TLC magnetic disks and integrating a plurality of QLC medium chips into QLC magnetic disks.
And the tier module is used for storing and layering, and carrying out layered management on the TLC disk and the QLC disk.
And the spark module is used for replacing the bad blocks in the disk.
bad block Management module is used for managing bad blocks in the disk.
The block module is used for performing operations such as erasing on the disk, and the minimum erasing unit is the block.
And the superblock module is used for storing the configuration information of the disk.
The address index layer comprises the following modules:
The map module is used for storing the mapping relationship between logical addresses and physical addresses.
The recovery module is used for providing recovery from power-failure exceptions.
The map cache module is used for implementing a map cache, improving the performance of map lookup and update.
The spar scaling module is used for balancing pblocks within a pdisk.
The user data stat module is used for managing statistics such as IO traffic, pblock erase counts and pblock valid-data amounts.
The checkpoint module is used for periodically flushing state information and for exception recovery.
The GC module is used for garbage collection.
The space management layer comprises the following modules:
The vdisk module is used for dividing a physical-space pdisk (such as the TLC disk or the QLC disk) evenly into a plurality of vdisks (i.e., partitions); vdisks implement resource isolation, and scheduling is balanced across vdisks according to CPU distribution.
The vblock is the basic unit of vdisk space allocation.
The vpage is the basic unit for writing vdisk data.
The super vblock module is used for storing space configuration information.
The data space may be composed of a plurality of vdisks.
The meta space, i.e. the metadata space, may likewise be composed of a plurality of vdisks.
The vpage buffer is used for caching vpages.
Illustratively, the space management layer is further configured to divide the TLC disk into a metadata space and a TLC data space, and to use the QLC disk as a QLC data space, where the metadata space includes at least one TLC partition, the TLC data space includes at least one TLC partition, and the QLC data space includes a plurality of QLC partitions. By dividing the space of the TLC disk and the QLC disk in this way, part of the TLC disk serves as the metadata space; since the metadata space is accessed frequently, the high performance of the TLC disk can sustain that access frequency and processing efficiency is guaranteed.
Referring to fig. 3, the allocation of the logic units (LUN), vdisk, pdisk, meta space, data space, pblock, vblock, vpage, target and so on is shown. Media targets of the same type form a pdisk, and the pblocks of a pdisk span the target channels. The pblock is the minimum unit of capacity for pdisk garbage collection. The vblocks in a vdisk are allocated from the pblocks of the pdisk, and the vpage is the minimum unit of vdisk reads and writes. The metadata of a logical-space LUN is stored in the meta area, and its data is stored in the data area.
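The nesting of these space entities can be sketched with plain data classes, as below; the field names and the illustrative pblock and page counts are assumptions rather than the actual on-device layout.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Pblock:                 # minimum erase / garbage-collection unit of a pdisk
    pblock_no: int
    pages: int = 256          # illustrative number of data pages per pblock

@dataclass
class Pdisk:                  # all media chips (targets) of one type form one pdisk
    media_type: str           # "TLC" or "QLC"
    pblocks: List[Pblock] = field(default_factory=list)

@dataclass
class Vdisk:                  # partition carved out of a pdisk; unit of resource isolation
    vdisk_no: int
    vblocks: List[Pblock] = field(default_factory=list)   # vblocks are allocated from pdisk pblocks

@dataclass
class Lun:                    # externally visible logic unit backed by one vdisk
    lun_id: int
    vdisk: Vdisk
    role: str                 # "meta" (TLC metadata space) or "data" (TLC/QLC data space)


tlc_pdisk = Pdisk("TLC", [Pblock(i) for i in range(4)])
meta_lun = Lun(0, Vdisk(0, tlc_pdisk.pblocks[:1]), role="meta")
data_lun = Lun(1, Vdisk(1, tlc_pdisk.pblocks[1:]), role="data")
print(meta_lun.role, len(meta_lun.vdisk.vblocks), data_lun.role, len(data_lun.vdisk.vblocks))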
For further example, referring to fig. 4, the hybrid file subsystem includes a scanning layer and a data management layer. The scanning layer is configured to acquire the logic units in the space management layer, and the hybrid file subsystem creates a performance data area corresponding to the TLC disk and a capacity data area corresponding to the QLC disk based on the result of that scan. The hybrid file subsystem performs data management (such as data compression, garbage collection, data writing, data reading and data movement) in the performance data area, generates the corresponding data management requests (such as data compression requests, garbage collection requests, write requests, read requests and data movement requests) and sends them to the tiered storage subsystem, so that the tiered storage subsystem performs the same data management on the TLC disk synchronously; likewise, the hybrid file subsystem performs data management in the capacity data area, generates the corresponding data management requests and sends them to the tiered storage subsystem, so that the tiered storage subsystem performs the same data management on the QLC disk synchronously.
The data management layer has the following functions:
mkfs: formats block devices into the file system; formatting multiple block devices is supported.
pagecache: file system page cache management.
NAT: node address table.
checkpoint: used for file system exception recovery.
SSA: file system address mapping relationship.
compact: file system garbage collection function.
SIT: file system segment information.
readcache: file system read cache, improving read performance.
Based on this, the hybrid file subsystem is further configured to generate a write request in the performance data area and send the write request to the space management layer, so that the space management layer writes data into the TLC disk; specifically, the data management layer of the hybrid file subsystem generates the write request in the performance data area and sends it to the space management layer. After receiving a write request sent by the hybrid file subsystem, the space management layer sends the write request to the address index layer, which in turn sends it to the storage resource pool; the address index layer and the storage resource pool may communicate through a ring queue. After receiving the write request, the storage resource pool obtains a data page from the TLC disk, stores it in the DRAM as the cache page targeted by the write request, and then writes the data into that cache page. It can be understood that after the storage resource pool finishes writing the data, a response message for the write request is reported back along the address index layer, the space management layer and the data management layer, and finally reaches the hybrid file subsystem.
Because the data management layer of the hybrid file subsystem generates the write request in the performance data area, new data is written into the TLC disk corresponding to the performance data area, which improves the response speed of write requests, guarantees system performance and provides the user with a good storage experience.
Because the space management layer has two types of logic units, namely logic units of TLC partitions and logic units of QLC partitions, the write request generated by the hybrid file subsystem in the performance data area is sent to a logic unit of a TLC partition, so that the new data is written into the TLC disk.
The storage resource pool is further configured to send an update message to the address index layer, where the update message is used to update the physical address corresponding to the data page targeted by the write request, i.e., to establish a mapping relationship between the logical address in the write request and the physical address of the data page obtained from the TLC disk.
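Collapsed into plain function calls (instead of ring queues between the layers), the write path reads roughly as follows; the dictionaries standing in for the DRAM cache, the FTL map and the TLC disk, as well as the flush threshold of four pages, are assumptions for illustration only.

def write_path(dram_cache, ftl, tlc_disk, lun_id, logical_page, data, flush_threshold=4):
    """Illustrative write path: performance area -> space layer -> FTL -> storage pool.

    `dram_cache` is a dict acting as the DRAM cache pages,
    `ftl` maps (lun_id, logical_page) -> physical page number on the TLC disk,
    `tlc_disk` is a dict acting as the TLC disk pages.
    """
    # Storage pool: reuse the mapped TLC data page, or pick a new one for this logical page.
    phys_page = ftl.get((lun_id, logical_page))
    if phys_page is None:
        phys_page = max(list(tlc_disk) + list(dram_cache), default=-1) + 1
    # Storage pool: stage the write in a DRAM cache page and mark it dirty.
    dram_cache[phys_page] = {"data": data, "dirty": True}
    # Storage pool -> address index layer: update the logical-to-physical mapping.
    ftl[(lun_id, logical_page)] = phys_page
    # When enough cache pages have accumulated, flush at least one page to the TLC disk.
    if len(dram_cache) >= flush_threshold:
        victim, entry = next(iter(dram_cache.items()))
        tlc_disk[victim] = entry["data"]
        del dram_cache[victim]
    return phys_page                                        # reported back up as the response


cache, ftl, tlc = {}, {}, {}
for page, payload in enumerate([b"a", b"b", b"c", b"d"]):
    write_path(cache, ftl, tlc, lun_id=0, logical_page=page, data=payload)
print(ftl, list(tlc))   # the fourth write triggers a flush of one cache page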
The QLC disk and the TLC disk are each divided into a plurality of pblocks according to the minimum erase unit, and each pblock comprises a plurality of data pages. After the QLC disk and the TLC disk are initialized, three pblock linked lists are maintained: a free list, a used list and a using list. The pblocks in a list are ordered by the size of their valid data, which serves as the policy for selecting pblocks during compression or reclamation: pblocks with less valid data are selected first, so that less valid data needs to be moved and reclamation or compression is more efficient. When the storage resource pool obtains the data page targeted by a write request from the TLC disk, it can select the pblock at the tail of the used list (if the used list has no pblock, one pblock can be taken from the free list and placed in the used list) and then select a free page from the selected pblock as the data page targeted by the write request. After the storage resource pool stores the data page targeted by the write request in the DRAM, the data page may be marked dirty in the TLC disk.
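A compact model of the three pblock lists and of the "least valid data first" selection rule is sketched below; only the list names come from the description, and treating the least-filled used pblock as the list tail is an assumption of this example.

import heapq

class PblockLists:
    """Illustrative free / used / using pblock bookkeeping.

    Used pblocks are kept ordered by the amount of valid data they hold, so that
    compression or reclamation can pick the pblock with the least valid data and
    therefore move as little data as possible.
    """

    def __init__(self, pblock_count):
        self.free = list(range(pblock_count))     # empty pblocks
        self.used = []                            # heap of (valid_pages, pblock_no)
        self.using = set()                        # pblocks currently being written

    def pblock_for_write(self):
        # Prefer the least-filled used pblock; fall back to the free list.
        if self.used:
            valid, no = heapq.heappop(self.used)
        elif self.free:
            valid, no = 0, self.free.pop()
        else:
            raise RuntimeError("no pblock available")
        self.using.add(no)
        return no

    def finish_write(self, pblock_no, valid_pages):
        self.using.discard(pblock_no)
        heapq.heappush(self.used, (valid_pages, pblock_no))

    def victim_for_reclaim(self):
        # The pblock with the least valid data is cheapest to compress or reclaim.
        return self.used[0][1] if self.used else None


lists = PblockLists(pblock_count=3)
b = lists.pblock_for_write()
lists.finish_write(b, valid_pages=10)
print(b, lists.victim_for_reclaim())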
The QLC disk and the TLC disk both have cache pages in the DRAM. To improve the efficiency of looking up cache pages, the physical address in the disk corresponding to each cache page is recorded in a binary-tree data structure keyed by pblock_no and page_no, where pblock_no is the pblock number and page_no is the page number inside the pblock; the page cache is maintained with an LRU eviction algorithm. It can be understood that when the number of cache pages in the DRAM reaches a number threshold, at least one cache page is moved to the TLC disk or the QLC disk.
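In the same spirit, the following sketch implements a DRAM page cache keyed by (pblock_no, page_no) with LRU eviction and dirty-page write-back; an OrderedDict stands in for the binary-tree index, and evicting strictly in LRU order is a simplification of the replacement rule described here.

from collections import OrderedDict

class PageCache:
    """Illustrative DRAM page cache keyed by (pblock_no, page_no) with LRU eviction."""

    def __init__(self, capacity, backing_disk):
        self.capacity = capacity
        self.backing_disk = backing_disk          # dict: (pblock_no, page_no) -> bytes
        self.pages = OrderedDict()                # key -> {"data": bytes, "dirty": bool}

    def _touch(self, key):
        self.pages.move_to_end(key)               # most recently used entries sit at the end

    def put(self, key, data):
        self.pages[key] = {"data": data, "dirty": True}
        self._touch(key)
        self._evict_if_needed()

    def get(self, key):
        if key not in self.pages:                 # miss: load the page from the disk
            self.pages[key] = {"data": self.backing_disk[key], "dirty": False}
        self._touch(key)
        self._evict_if_needed()
        return self.pages[key]["data"]

    def _evict_if_needed(self):
        while len(self.pages) > self.capacity:
            key, entry = self.pages.popitem(last=False)     # least recently used
            if entry["dirty"]:
                self.backing_disk[key] = entry["data"]      # write back before dropping


disk = {(0, 1): b"on disk"}
cache = PageCache(capacity=2, backing_disk=disk)
cache.put((0, 0), b"new page")
print(cache.get((0, 1)))           # loaded from disk into the cache
cache.put((0, 2), b"third page")   # evicts the LRU entry, writing it back because it is dirty
print(sorted(disk))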
Based on this, the hybrid file subsystem is further configured to generate a read request in the performance data area and send the read request to the space management layer, so that the space management layer reads data from the TLC disk; specifically, the data management layer of the hybrid file subsystem generates the read request in the performance data area and sends it to the space management layer. After receiving a read request sent by the hybrid file subsystem, the space management layer sends the read request to the address index layer, which in turn sends it to the storage resource pool; the address index layer and the storage resource pool may communicate through a ring queue. After receiving the read request, the storage resource pool searches the DRAM for the cache page targeted by the read request; when that cache page does not exist in the DRAM, it replaces a cache page in the DRAM with the corresponding data page from the TLC disk, and then reads the data from the cache page targeted by the read request. It can be understood that after the storage resource pool finishes reading the data, the read data is reported back as the response message of the read request along the address index layer, the space management layer and the data management layer, and finally reaches the hybrid file subsystem.
Because the data management layer of the hybrid file subsystem generates the read request in the performance data area, data can be read from the TLC disk corresponding to the performance data area, which improves the response speed of read requests, guarantees system performance and provides the user with a good storage experience.
Because the space management layer has two types of logic units, namely logic units of TLC partitions and logic units of QLC partitions, the read request generated by the hybrid file subsystem in the performance data area is sent to a logic unit of a TLC partition, so that the data is read from the TLC disk.
The storage resource pool is configured to search the DRAM for the cache page targeted by the read request according to the logical address in the read request and the mapping relationship, stored in the address index layer, between the logical addresses and physical addresses of data pages. When a cache page in the DRAM is replaced with the corresponding data page from the TLC disk, the storage resource pool may replace the dirty cache page in the DRAM whose last access lies furthest in the past; the replaced page is marked clean in the TLC disk, and the data page targeted by the read request is also marked clean in the TLC disk.
The QLC disk and the TLC disk both have cache pages in the DRAM. To improve the efficiency of looking up cache pages, the physical address in the disk corresponding to each cache page is recorded in a binary-tree data structure keyed by pblock_no and page_no, where pblock_no is the pblock number and page_no is the page number inside the pblock; the page cache is maintained with an LRU eviction algorithm. It can be understood that when the number of cache pages in the DRAM reaches a number threshold, at least one cache page is moved to the TLC disk.
Based on this, the hybrid file subsystem is further configured to generate a data movement request in the performance data area and send the movement request to the space management layer, so that the space management layer moves data from the TLC disk to the QLC disk; specifically, the data management layer of the hybrid file subsystem generates the data movement request in the performance data area and sends it to the space management layer.
Because the data management layer of the hybrid file subsystem generates the data movement request in the performance data area, data in the TLC disk corresponding to the performance data area can be moved to the QLC disk corresponding to the capacity data area, which frees space in the TLC disk and in the performance data area corresponding to it and preserves the system's high-performance response capability for the user.
Referring to fig. 5, the relationship between the performance data area and the capacity data area is shown by way of example. The hybrid file subsystem may divide the performance data area and the capacity data area into a plurality of data segments, so that the multi-core characteristics of the system can be fully used to process multiple data segments in parallel and improve system performance. The data in the performance data area can be compressed and then moved to the capacity data area, so that the data in the corresponding TLC disk is compressed and moved to the QLC disk. A cache area of the capacity data area, the QLC Read Cache, is placed in the performance data area. The capacity data area may perform garbage collection (GC) when its data segments reach a certain threshold, in order to reclaim data space.
Referring to fig. 6, the details of compressing data in the performance data area and moving it to the capacity data area are shown by way of example. A data segment may be further divided into data pages, and the data pages in a data segment may correspond to the data pages in a disk. When the data in the performance data area is compressed and moved to the capacity data area, the invalid data in the data segment of the performance data area is discarded, and the valid data is written into a data segment of the capacity data area.
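The per-segment compression move can be pictured as the short routine below; representing a segment as a list of (page_no, valid, data) tuples is purely an assumption for the illustration.

def compress_segment_to_capacity(performance_segment, capacity_segment):
    """Illustrative compression move: discard invalid pages of a performance-area
    segment and append only the valid pages to a capacity-area segment."""
    moved = 0
    for page_no, valid, data in performance_segment:
        if valid:                                  # invalid pages are simply dropped
            capacity_segment.append((page_no, True, data))
            moved += 1
    performance_segment.clear()                    # the whole TLC-side segment is freed
    return moved


perf = [(0, True, b"keep"), (1, False, b"stale"), (2, True, b"keep too")]
cap = []
print(compress_segment_to_capacity(perf, cap), len(cap), len(perf))   # 2 2 0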
Referring to fig. 7, an exemplary operation process of the data management system provided in the present disclosure is shown, including the following steps:
S1: the storage pool issues a discovery message, finds the NAND targets and performs the corresponding initialization.
S2: create pdisk: a pdisk is created for each type of NAND target, e.g. the TLC targets form a TLC-type pdisk and the QLC targets form a QLC-type pdisk. After a pdisk is created it is registered with the GC service, and GC may run on the pdisk in the foreground or in the background.
S3: report event: the storage pool reports events concerning the pdisk, such as creation, reset, power-down and bad blocks.
S4: create lun: after receiving the creation event, the space management layer creates the corresponding lun object; the lun is the external logic unit.
S5: create obj: the lun creates the corresponding FTL objects, such as map, user data stat and checkpoint.
S6: scan lun: the BD module (scanning layer) in the HFFS initiates a lun scan to acquire the lun information in the TFS.
S7: mkfs: the file system creates a mixed-media, multi-lun file system.
S8: r/w: the file system performs reads and writes by sending read/write messages to the luns of the space management layer of the TFS. A read/write command first looks up the vpage buffer; if the page is in the buffer it is directly overwritten or read, and once the buffer reaches a certain threshold the vpage buffer is flushed to the corresponding pdisk.
S9: flush: the vpage buffer is flushed to disk.
S10: update: the lun updates the FTL objects.
According to a second aspect of an embodiment of the present disclosure, there is provided a terminal device, including the data management system of the first aspect, where a UFS device is configured in the terminal device. Referring to fig. 8, a block diagram of a terminal device is schematically shown. For example, the terminal device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 8, a terminal device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the terminal device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing element 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on terminal device 800, contact data, phonebook data, messages, pictures, video, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the terminal device 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for terminal device 800.
The multimedia component 808 includes a screen that provides an output interface between the terminal device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the terminal device 800 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the terminal device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the terminal device 800. For example, the sensor assembly 814 may detect the on/off state of the terminal device 800 and the relative positioning of components such as the display and keypad of the terminal device 800; the sensor assembly 814 may also detect a change in position of the terminal device 800 or of a component of the terminal device 800, the presence or absence of a user's contact with the terminal device 800, the orientation or acceleration/deceleration of the terminal device 800, and a change in the temperature of the terminal device 800. The sensor assembly 814 may also include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the terminal device 800 and other devices. The terminal device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G or 5G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal device 800 can be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A data management system, comprising:
a tiered storage subsystem on a UFS device, configured to integrate a plurality of QLC media chips in the UFS device into a QLC disk and to integrate a plurality of TLC media chips in the UFS device into a TLC disk;
and a hybrid file subsystem on a host, configured to write data into the TLC disk and to move data from the TLC disk to the QLC disk when the free space in the TLC disk is smaller than a preset threshold.
2. The data management system of claim 1, wherein the TLC disk has a QLC cache area for caching data read from the QLC disk;
the hybrid file subsystem is further configured to move the requested data from the QLC disk into the QLC cache area and to read the data from the QLC cache area.
3. The data management system of claim 1, wherein the hybrid file subsystem is further configured to compress the data in the TLC disk when the free space in the TLC disk is smaller than a compression threshold;
the hybrid file subsystem is configured to move data from the TLC disk to the QLC disk when, after the data in the TLC disk has been compressed, the free space is still smaller than the preset threshold;
the hybrid file subsystem is further configured to perform garbage collection on the data in the QLC disk when the free space in the QLC disk is smaller than the compression threshold.
4. The data management system of claim 1, wherein the tiered storage subsystem comprises:
a storage resource pool, configured to integrate a plurality of QLC media chips in the UFS device into a QLC disk and a plurality of TLC media chips in the UFS device into a TLC disk;
a space management layer, configured to divide the QLC disk into a plurality of QLC partitions, divide the TLC disk into a plurality of TLC partitions, and allocate a logic unit to each QLC partition and each TLC partition;
and an address index layer, configured to store the mapping relationship between the logical address of a data page in a logic unit and the physical address of the data page in the QLC partition or the TLC partition.
5. The data management system of claim 4, wherein the space management layer is further configured to divide the TLC disk into a metadata space and a TLC data space and to treat the QLC disk as a QLC data space, wherein the metadata space comprises at least one TLC partition, the TLC data space comprises at least one TLC partition, and the QLC data space comprises a plurality of QLC partitions.
6. The data management system of claim 4, wherein the hybrid file subsystem includes a performance data area corresponding to the TLC disk and a capacity data area corresponding to the QLC disk;
the hybrid file subsystem is further configured to generate a write request in the performance data area and send the write request to the space management layer, so that the space management layer writes data into the TLC disk;
the hybrid file subsystem is further configured to generate a read request in the performance data area and send the read request to the space management layer, so that the space management layer reads data from the TLC disk;
the hybrid file subsystem is further configured to generate a data movement request in the performance data area and send the movement request to the space management layer, so that the space management layer moves data from the TLC disk to the QLC disk.
7. The data management system of claim 6, wherein the space management layer is configured to send the write request or the read request to the address index layer, so that the address index layer sends the write request or the read request to the storage resource pool;
the storage resource pool is configured to, after receiving the write request, write data into the cache page in the DRAM targeted by the write request, or, after receiving the read request, read data from the cache page in the DRAM targeted by the read request;
and when the number of cache pages in the DRAM reaches a number threshold, at least one cache page is moved to the TLC disk.
8. The data management system of claim 7, wherein the storage resource pool is further configured to obtain a data page from the TLC disk after receiving the write request, and store the data page as a cache page for the write request in the DRAM.
9. The data management system of claim 7, wherein the pool of storage resources is further configured to, upon receipt of the read request, look up a cache page for the read request in the DRAM, and replace any cache page in the DRAM with a corresponding data page in the TLC disk when the cache page for the read request does not exist in the DRAM.
10. The data management system according to claim 8 or 9, wherein the write request carries the logical address of the targeted data page, and the storage resource pool is further configured to send an update message to the address index layer, where the update message is used to update the physical address corresponding to the data page targeted by the write request;
the read request carries the logical address of the targeted data page, and, when searching the DRAM for the cache page targeted by the read request, the storage resource pool is specifically configured to:
search the DRAM for the cache page targeted by the read request according to the logical address in the read request and the mapping relationship, stored in the address index layer, between the logical addresses and physical addresses of data pages.
11. The data management system of claim 6, wherein the hybrid file subsystem comprises:
a scanning layer, configured to acquire the logic units in the space management layer;
and a data management layer, configured to generate a write request, a read request or a data movement request in the performance data area.
12. A terminal device comprising the data management system of any one of claims 1 to 11, said terminal device having a UFS device configured therein.
CN202310841724.8A 2023-07-10 2023-07-10 Data management system and terminal equipment Active CN116560587B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310841724.8A CN116560587B (en) 2023-07-10 2023-07-10 Data management system and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310841724.8A CN116560587B (en) 2023-07-10 2023-07-10 Data management system and terminal equipment

Publications (2)

Publication Number Publication Date
CN116560587A true CN116560587A (en) 2023-08-08
CN116560587B CN116560587B (en) 2023-10-13

Family

ID=87493279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310841724.8A Active CN116560587B (en) 2023-07-10 2023-07-10 Data management system and terminal equipment

Country Status (1)

Country Link
CN (1) CN116560587B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1545030A (en) * 2003-11-14 2004-11-10 清华大学 Data distribution dynamic mapping method based on magnetic disc characteristic
CN106406747A (en) * 2015-08-03 2017-02-15 中兴通讯股份有限公司 Hard disk storage management method and apparatus for mobile terminal
CN111506262A (en) * 2020-03-25 2020-08-07 华为技术有限公司 Storage system, file storage and reading method and terminal equipment
US20210398578A1 (en) * 2020-06-22 2021-12-23 Micron Technology, Inc. Magnetic cache for a memory device
CN113885787A (en) * 2021-06-08 2022-01-04 荣耀终端有限公司 Memory management method and electronic equipment

Also Published As

Publication number Publication date
CN116560587B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
US20240054079A1 (en) Memory Management Method and Apparatus, Electronic Device, and Computer-Readable Storage Medium
CN110807125B (en) Recommendation system, data access method and device, server and storage medium
CN108268219B (en) Method and device for processing IO (input/output) request
US9792227B2 (en) Heterogeneous unified memory
KR101598727B1 (en) Techniques for moving data between memory types
CN116244067B (en) Virtual memory management method and electronic equipment
US9891825B2 (en) Memory system of increasing and decreasing first user capacity that is smaller than a second physical capacity
US8214581B2 (en) System and method for cache synchronization
CN103218224A (en) Method and terminal for improving utilization ratio of memory space
WO2009107393A1 (en) Access device, information recording device, controller, and information recording system
US20190258582A1 (en) Dram-based storage caching method and dram-based smart terminal
CN110554837A (en) Intelligent switching of fatigue-prone storage media
CN115145735B (en) Memory allocation method and device and readable storage medium
CN111274160A (en) Data storage method, electronic device, and medium
CN113419670A (en) Data writing processing method and device and electronic equipment
CN115421651A (en) Data processing method of solid state disk, electronic device and medium
CN116560587B (en) Data management system and terminal equipment
CN115934002A (en) Solid state disk access method, solid state disk, storage system and cloud server
CN116048414A (en) Data reading method of equipment and electronic equipment
CN117349246A (en) Disk sorting method, device and storage medium
CN115687270A (en) Data storage sorting method and device, electronic equipment and storage medium
CN116360671A (en) Storage method, storage device, terminal and storage medium
CN112783420A (en) Data deleting and garbage recycling method, device, system and storage medium
CN117707639B (en) Application start acceleration method, electronic device and storage medium
CN111324287A (en) Memory device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant