CN110413537B - Flash translation layer for a hybrid solid-state disk and translation method


Info

Publication number
CN110413537B
CN110413537B (application CN201910675390.5A)
Authority
CN
China
Prior art keywords
cmt
mapping
cold
hot
medium
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910675390.5A
Other languages
Chinese (zh)
Other versions
CN110413537A (en)
Inventor
姚英彪
范金龙
周杰
孔小冲
徐欣
姜显扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN201910675390.5A
Publication of CN110413537A
Application granted
Publication of CN110413537B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246: Memory management in non-volatile memory, in block erasable memory, e.g. flash memory
    • G06F 12/0292: User address space allocation using tables or multilevel address translation means
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10: Providing a specific technical effect
    • G06F 2212/1016: Performance improvement
    • G06F 2212/1032: Reliability improvement, data loss prevention, degraded operation etc.
    • G06F 2212/1036: Life time enhancement
    • G06F 2212/21: Employing a record carrier using a specific recording technology
    • G06F 2212/214: Solid state disk

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a flash translation layer for a hybrid solid-state disk. It comprises an address mapping buffer consisting of a Global Translation Directory (GTD), a mapping page cache slot (TPCS), a Hot mapping cache table (Hot-CMT), and a Cold mapping cache table (Cold-CMT). The GTD records the actual physical page number of each translation page; the TPCS caches, on a cache miss, the whole translation page to which the currently loaded mapping entry belongs; the Hot-CMT caches frequently accessed write mapping entries; and the Cold-CMT caches read mapping entries and infrequently accessed write mapping entries. Both the Hot-CMT and the Cold-CMT record the logical page number (LPN) and physical page number (PPN) of each mapping entry. The invention solves the problem of designing a flash translation layer for an SSD built from multiple media: it manages mapping entries uniformly, improves the performance of the hybrid SSD, reduces its cost per unit capacity, better balances wear between medium A and medium B, and prolongs the service life of the hybrid SSD.

Description

Flash translation layer for a hybrid solid-state disk and translation method
Technical Field
The invention relates to the field of computer storage systems, and in particular to a flash translation layer for a hybrid solid-state disk and its translation method.
Background
With the continuous progress of solid-state disk (SSD) design technology, SSDs have gained many advantages over conventional mechanical hard disks, including fast read/write speed, low power consumption, light weight, and resistance to shock and drops, which has made them popular in the computer storage field.
Currently, the storage medium of mainstream SSDs is NAND flash. NAND flash has the following physical characteristics: (1) it supports only three operations, read, write, and erase, whose performance is asymmetric: reading is fastest, writing slower, and erasing slowest. (2) Flash is organized into pages, blocks, and planes; a page is the minimum unit of reading/writing, and a block is the minimum unit of erasing. (3) Flash does not support in-place updates; a cell can be written only once after being erased, i.e., it must be erased before writing (erase-before-write), which incurs a large overhead. (4) The number of program/erase (P/E) cycles per flash cell is limited, and data stored in cells beyond that limit is no longer reliable.
Because NAND flash has the above characteristics, SSD designs generally require a Flash Translation Layer (FTL) to hide them and present the SSD as a conventional hard disk with only read and write operations, so that it fits current file systems.
The key technologies of the FTL comprise three modules: address mapping, garbage collection, and wear leveling. Address mapping translates logical addresses from the file system into physical addresses in the flash, generally in one of three modes: page-level, block-level, or hybrid mapping. Garbage collection reclaims invalid pages by erasing blocks so that they can be reused. Wear leveling keeps the wear rates of blocks as uniform as possible, preventing some blocks from failing early due to excessive wear.
Flash chips come in three classes: SLC (Single-Level Cell, 1 bit/cell), MLC (Multi-Level Cell, 2 bits/cell), and TLC (Triple-Level Cell, 3 bits/cell). In performance, SLC is best, MLC second, and TLC worst; in cost per unit of storage capacity, SLC is highest, MLC second, and TLC lowest. Previously, SSDs mainly adopted a homogeneous structure, i.e., a single flash medium was used to build the SSD. Now, to trade off performance against cost, heterogeneous SSDs have appeared, in which different storage media build the underlying physical storage, e.g., hybrid SSDs combining SLC with MLC or SLC with TLC.
For a hybrid SSD built from two storage media, a better-performing medium A and a worse-performing medium B (e.g., A is SLC and B is MLC or TLC, or A is MLC and B is TLC), the invention discloses a flash translation layer for the hybrid solid-state disk and its translation method, to better realize the trade-off between performance and cost of the hybrid SSD.
Disclosure of Invention
The object of the invention is to provide, against the defects of the prior art, a flash translation layer for a hybrid solid-state disk and its translation method, guaranteeing a high-performance, low-cost trade-off for the hybrid SSD, achieving wear balance between medium A and medium B, and prolonging the service life of the hybrid SSD.
To achieve this purpose, the invention adopts the following technical scheme:
A flash translation layer for a hybrid solid-state disk built from two flash media: medium A has better performance but a higher cost per unit capacity, while medium B has worse performance than medium A but a lower cost per unit capacity. Medium A is divided into a data block area and a translation block area; medium B is entirely a data block area. The translation block area stores translation pages, and the data block area stores data pages. A translation page stores the mapping between logical and physical addresses of data; a data page stores the actual data. Translation pages are aligned to the page size of medium A, and data pages to the page size of medium B. The flash translation layer includes an address mapping buffer composed of a Global Translation Directory (GTD), a mapping page cache slot (TPCS), a Hot mapping cache table (Hot-CMT), and a Cold mapping cache table (Cold-CMT). The GTD records the actual physical page number of each translation page; the TPCS caches, on a cache miss, the whole translation page to which the currently loaded mapping entry belongs; the Hot-CMT caches frequently accessed write mapping entries; and the Cold-CMT caches read mapping entries and infrequently accessed write mapping entries. Both tables record each mapping entry's logical page number (LPN) and physical page number (PPN).
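The composition above can be sketched as a small data model. The four component names follow the patent; the slot counts and the α split between the Hot-CMT and Cold-CMT are illustrative assumptions.

```python
from collections import OrderedDict

class MappingBuffer:
    """Sketch of the address-mapping buffer: GTD + TPCS + Hot-CMT + Cold-CMT."""

    def __init__(self, total_slots=16, alpha=0.5):
        self.gtd = {}                  # TVPN -> physical page number of each translation page
        self.tpcs = {}                 # whole translation page loaded on a cache miss
        self.hot_cmt = OrderedDict()   # frequently written entries, LPN -> PPN (MRU at the end)
        self.cold_cmt = OrderedDict()  # read entries + rarely written entries
        self.resize(total_slots, alpha)

    def resize(self, total_slots, alpha):
        # alpha is the Hot-CMT share of the CMT slots; it is later tuned by wear (step S9)
        self.hot_capacity = int(total_slots * alpha)
        self.cold_capacity = total_slots - self.hot_capacity
```

A buffer created with `MappingBuffer(16, 0.25)` would reserve 4 slots for the Hot-CMT and 12 for the Cold-CMT.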
Further, the flash translation layer adopts a page level address mapping mode.
Further, the invention provides a translation method for the above flash translation layer, comprising the following steps:
S1. When an access request arrives, determine the request type; if it is a read request, go to S2; if it is a write request, go to S6.
S2. For the read request, check whether the requested mapping entry is in the Hot-CMT, Cold-CMT, or TPCS. If it is in the Hot-CMT, go to S5; if it is in the Cold-CMT or TPCS, go to S3; otherwise the entry is not in any cached mapping table, so go to S4.
S3. Load the mapping entry to the MRU (Most Recently Used) position of the Cold-CMT; if the Cold-CMT now exceeds its target size threshold, trigger the Cold-CMT eviction operation; then go to S5.
S4. The mapping entry is not in any cached mapping table: look up the GTD with the requested logical page number to obtain the physical page number of the translation page holding the entry, load that translation page into the TPCS, and at the same time load the entry to the MRU position of the Cold-CMT; if the Cold-CMT exceeds its target size threshold, trigger the Cold-CMT eviction operation. Finally go to S5.
S5. Return the physical page number of the mapping entry; address translation ends.
S6. For the write request, check whether the requested mapping entry is in the Hot-CMT, Cold-CMT, or TPCS. If it hits in any of them, go to S7; otherwise go to S8.
S7. On a Hot-CMT hit, migrate the entry to the MRU position of the Hot-CMT. On a Cold-CMT hit, also migrate the entry to the MRU position of the Hot-CMT, triggering the Hot-CMT eviction operation if the Hot-CMT exceeds its target size threshold. On a TPCS hit, migrate the entry to the MRU position of the Cold-CMT, triggering the Cold-CMT eviction operation if the Cold-CMT exceeds its target size threshold. Finally go to S9.
S8. The mapping entry is not in any cached mapping table: look up the GTD with the requested logical page number to obtain the physical page number of the translation page holding the entry, load that translation page into the TPCS, and at the same time load the entry to the MRU position of the Cold-CMT; if the Cold-CMT exceeds its target size threshold, trigger the Cold-CMT eviction operation. Finally go to S9.
S9. Adjust the sizes of the Hot-CMT and Cold-CMT cache tables according to the wear speeds of media A and B; then allocate a new free physical data page number for the write request and write it into the corresponding cached mapping table; finally return the mapping entry's previous physical page number together with the newly allocated free physical page number.
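The read path of the steps above (S1-S5) can be sketched as follows. The cache capacity, the stand-in translation-page loader, and the plain-LRU eviction are illustrative assumptions; the write path and wear-based resizing are omitted.

```python
from collections import OrderedDict

ENTRIES_PER_PAGE = 4          # NUM: mapping entries per translation page
COLD_CAPACITY = 8             # illustrative Cold-CMT size

hot_cmt = OrderedDict()       # LPN -> PPN, MRU at the right end
cold_cmt = OrderedDict()
tpcs = {}                     # the last translation page loaded on a miss

def load_translation_page(tvpn):
    # Stand-in for a flash read via the GTD: return every entry of translation page tvpn.
    start = tvpn * ENTRIES_PER_PAGE
    return {lpn: 1000 + lpn for lpn in range(start, start + ENTRIES_PER_PAGE)}

def translate_read(lpn):
    """Steps S2-S5 for a read request."""
    global tpcs
    if lpn in hot_cmt:                     # S2: Hot-CMT hit -> S5
        hot_cmt.move_to_end(lpn)
        return hot_cmt[lpn]
    if lpn in cold_cmt:                    # S3: migrate to the Cold-CMT MRU position
        cold_cmt.move_to_end(lpn)
        return cold_cmt[lpn]
    if lpn not in tpcs:                    # S4: full miss -> load the whole translation page
        tpcs = load_translation_page(lpn // ENTRIES_PER_PAGE)
    ppn = tpcs[lpn]
    cold_cmt[lpn] = ppn                    # entry goes to the Cold-CMT MRU position
    if len(cold_cmt) > COLD_CAPACITY:
        cold_cmt.popitem(last=False)       # simplified plain-LRU eviction
    return ppn                             # S5
```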
Further, the Cold-CMT in steps S3, S4, S7, and S8 uses a clean-entry-first least-recently-used (LRU) eviction mechanism: a fixed-length clean-entry priority eviction window is set at the tail (LRU end) of the Cold-CMT queue. When the Cold-CMT must evict, it searches the window, starting from the LRU position, for a clean (not updated) mapping entry; if one is found, that clean entry is evicted directly. Otherwise, the updated entry at the LRU position is selected, its information is written back to its translation page, and it is then evicted.
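The clean-entry-first eviction can be sketched as follows. The dirty-flag representation, the window length, and the `write_back` callback are illustrative assumptions.

```python
from collections import OrderedDict

WINDOW = 2  # fixed length of the clean-entry priority eviction window

def evict_cold_cmt(cold_cmt, write_back):
    """cold_cmt: OrderedDict mapping LPN -> (PPN, dirty); LRU entry at the left end.
    Returns the evicted LPN."""
    window = list(cold_cmt.keys())[:WINDOW]
    for lpn in window:                        # prefer a clean (not updated) entry
        ppn, dirty = cold_cmt[lpn]
        if not dirty:
            del cold_cmt[lpn]                 # clean: drop without any write-back
            return lpn
    # no clean entry in the window: take the dirty LRU entry, write it back, then evict
    lpn, (ppn, dirty) = next(iter(cold_cmt.items()))
    write_back(lpn, ppn)
    del cold_cmt[lpn]
    return lpn
```

Evicting a clean entry avoids a translation-page write, which is why the window is searched first.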
Further, the Hot-CMT in step S7 uses a plain LRU eviction mechanism: the entry at the LRU position of the Hot-CMT queue is selected, its information is written back to its translation page, and it is then evicted.
Further, adjusting the sizes of the Hot-CMT and Cold-CMT cache tables specifically comprises:
keeping the wear speeds of medium A and medium B as consistent as possible by controlling the ratio α between the Hot-CMT and the Cold-CMT; α is governed by the relative wear rate φ of media A and B and is adjusted as follows:
S21. Compute the current relative wear rate φ of media A and B.
S22. Compare φ with φ_t, where φ_t is a preset relative wear-leveling threshold. If φ < φ_t, media A and B are considered wear-balanced and α needs no adjustment; otherwise, go to S23.
S23. If RW_A > RW_B, medium A is wearing faster than medium B, so decrease the Hot-CMT ratio, α = α - Δα, to reduce the amount of data written to medium A; otherwise, medium B is wearing faster than medium A, so increase the Hot-CMT ratio, α = α + Δα, to reduce the amount of data written to medium B.
Further, φ is defined as

φ = max(RW_A, RW_B) / min(RW_A, RW_B)

where max(·) takes the maximum and min(·) the minimum of its arguments, RW_A denotes the average wear rate of medium A normalized to medium B, and RW_B denotes the average wear rate of medium B.
Further, allocating a new free physical data page number for the write request in step S9 specifically comprises:
dividing the data block area into a hot data block area and a cold data block area, i.e., data in the hot data block area is hot data stored in medium A, and data outside it is cold data stored in medium B. When data is written, a page whose mapping entry is in the Hot-CMT is considered hot data and is allocated a free page number in medium A; otherwise it is considered cold data and is allocated a free page number in medium B.
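The allocation rule can be sketched as follows. The free lists and their page numbering are illustrative assumptions.

```python
# Free-page pools for the two media (numbering is illustrative).
free_a = list(range(0, 8))     # free page numbers in medium A (hot area)
free_b = list(range(8, 48))    # free page numbers in medium B (cold area)

def allocate_page(lpn, hot_cmt):
    """Allocate a free physical page for a write to lpn (part of S9)."""
    if lpn in hot_cmt:          # mapping entry in Hot-CMT -> hot data -> medium A
        return free_a.pop(0)
    return free_b.pop(0)        # otherwise cold data -> medium B
```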
Further, when medium A runs short of free blocks, its garbage collection mechanism is triggered and operates as follows:
S31. Select the block with the most invalid pages as the victim block, then check whether it contains valid data pages; if so, go to S32, otherwise go to S35.
S32. For each valid data page in turn, check whether its mapping entry hits in the Hot-CMT; if yes, go to S33, otherwise go to S34.
S33. The data is still hot: migrate the data page to a free block of medium A and update the corresponding mapping entry.
S34. The data has turned cold: migrate the data page to a free block of medium B and update the corresponding mapping entry.
S35. Erase the victim block and end the garbage collection operation.
further, when the free block in the medium B is insufficient, triggering a garbage collection mechanism of the medium B, selecting a block with the most invalid pages as a collection block, then migrating the valid data pages in the collection block to the free block of the medium B, and updating the mapping relation of the mapping item; and finally erasing the recovery block and finishing the garbage recovery operation.
The invention provides a flash-translation-layer design method for hybrid SSDs, solving the problem of designing an FTL for an SSD built from multiple media. On one hand, translation pages in media A and B are aligned to the page size of medium A, enabling uniform management of mapping entries. On the other hand, a cold/hot mapping cache distinguishes hot-write requests from other requests, so that hot writes are directed to the better-performing medium A and other writes to medium B, improving the performance of the hybrid SSD and reducing its cost per unit capacity; meanwhile, the ratio between the hot and cold mapping cache tables is adjusted according to the wear speeds of media A and B, better balancing wear between them and prolonging the service life of the hybrid SSD. The disclosed method is practical and has good application prospects in hybrid SSD design.
Drawings
In order to more clearly illustrate the embodiments or technical solutions of the present invention, the drawings used in the embodiments or technical solutions of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained based on these drawings without inventive effort.
FIG. 1: the overall architecture of the present invention.
FIG. 2: the invention discloses a schematic diagram of an address mapping structure.
FIG. 3: the address translation flow diagram of the present invention.
FIG. 4: an address translation embodiment of the present invention is shown.
FIG. 5: schematic diagram of the wear balance between medium A and medium B and the adaptive adjustment of the cold/hot mapping table sizes according to the invention.
FIG. 6: the invention relates to a write request data distribution and garbage collection flow chart.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention, and the components related to the present invention are only shown in the drawings rather than drawn according to the number, shape and size of the components in actual implementation, and the type, quantity and proportion of the components in actual implementation may be changed freely, and the layout of the components may be more complicated.
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
Example one
This embodiment provides a flash translation layer for a hybrid solid-state disk composed of two flash media: medium A has good performance but a high cost per unit capacity, while medium B has worse performance than medium A but a lower cost per unit capacity. Medium A is divided into a data block area and a translation block area, and medium B is entirely a data block area. The translation block area stores translation pages, and the data block area stores data pages. Translation pages store the mapping between logical and physical addresses of data, and data pages store the actual data. Translation pages are aligned to the page size of medium A, and data pages to the page size of medium B (different storage media generally have different page sizes; the page size of medium A is smaller than that of medium B and divides it evenly).
For example, as shown in fig. 1, the SSD is built from two different storage media, SLC and MLC: SLC is medium A and MLC is medium B. SLC is divided into a data block area and a translation block area, while MLC is entirely a data block area; hot data pages and translation pages are stored in the SLC, and cold data pages in the MLC. Translation pages are aligned to 2 KB (the SLC page size) and data pages to 4 KB (the MLC page size). The buffer is divided into four parts, GTD, TPCS, Hot-CMT, and Cold-CMT, where the size ratio between the Hot-CMT and Cold-CMT is adaptively adjusted according to the relative wear rates of the SLC and MLC.
The flash translation layer uses page-level address mapping and contains an address mapping buffer. The buffer is divided into the GTD (global translation directory), TPCS (mapping page cache slot), Hot-CMT (hot mapping cache table), and Cold-CMT (cold mapping cache table). The GTD records the actual physical page number of each translation page; the TPCS caches the whole mapping page to which the currently loaded mapping entry belongs (triggered only when the current entry misses in the cache); the Hot-CMT caches frequently accessed write mapping entries; and the Cold-CMT caches read mapping entries and infrequently accessed write mapping entries. Both tables record each entry's logical page number (LPN) and physical page number (PPN).
As shown in fig. 2, the flash translation layer includes four tables: GTD, Hot-CMT, Cold-CMT, and TPCS. The GTD is a global mapping table recording the actual physical page number of each translation page. The TVPN column in the GTD is the virtual translation page number and is not actually stored. The virtual translation page number of any logical page is obtained by the formula

TVPN = ⌊LPN / NUM⌋

where LPN is the logical page number, NUM is the number of mapping entries each translation page can store, and ⌊·⌋ denotes the floor (lower-integer) function. The position of any logical page within its virtual translation page is obtained by the formula No = Mod(LPN, NUM), where LPN and NUM are as above and Mod denotes the remainder function. The Hot-CMT is a hot mapping cache table caching the mapping entries of frequently accessed write requests. When a write request hits in the Cold-CMT, it is recognized as a hot write and its entry is migrated to the MRU position of the Hot-CMT; when the Hot-CMT is full, the LRU eviction policy writes the least recently accessed entries back to translation pages in the underlying flash. The Cold-CMT is a cold mapping cache table caching the mapping entries of infrequently accessed write requests and of read requests; when it is full, the clean-entry-first LRU eviction policy is used. The TPCS is a mapping page cache slot caching the whole mapping page to which the currently loaded mapping entry belongs (triggered only when the current entry misses in the cache). In the underlying flash storage layer, because translation pages are read and updated frequently, they are placed in the SLC, which has better read/write performance; hot data pages (those whose entries are in the Hot-CMT) are also stored in the SLC, and cold data pages in the MLC. In addition, the sizes of the Hot-CMT and Cold-CMT can be scaled dynamically according to the relative wear rates of the SLC and MLC.
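The two address formulas can be checked as executable arithmetic, e.g. with NUM = 4 as in the later example:

```python
def tvpn(lpn, num):
    # virtual translation page number: floor(LPN / NUM)
    return lpn // num

def offset_in_page(lpn, num):
    # position within the virtual translation page: Mod(LPN, NUM)
    return lpn % num
```

For LPN = 7 and NUM = 4, the mapping entry lives in translation page 1 at offset 3.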
Example two
As shown in fig. 3, based on the flash translation layer in the first embodiment, this embodiment provides a method for converting a flash translation layer for a hybrid solid state disk, which specifically includes:
s1, when the access request comes, judges the request type. If the request is a read request, S2 is performed, and if the request is a write request, S6 is performed.
S2, checking whether the request mapping item is in the Hot-CMT, Cold-CMT and TPCS in sequence for the read request. If the mapping item is positioned in the Hot-CMT, executing S5; if the mapping item is located in Cold-CMT or TPCS, executing S3; otherwise, the mapping item is not in the cache mapping table, then S4 is executed.
S3, loading the mapping item to the MRU (most Central used) position of the Cold-CMT, if the size of the Cold-CMT is larger than the set target threshold value, starting the elimination operation of the Cold-CMT, and then executing S5.
S4, when the mapping item is not in the cache mapping table, accessing GTD according to the logic page number of the request to obtain the physical page number of the translation page corresponding to the mapping item, loading the translation page containing the target mapping item into TPCS according to the physical page number, loading the mapping item to the MRU position of Cold-CMT at the same time, and if the size of Cold-CMT is larger than the set target threshold value, starting the elimination operation of Cold-CMT. Finally, S5 is executed.
S5, returning the physical page number corresponding to the mapping item, and ending the address mapping conversion.
S6, checking whether the request mapping item is in the Hot-CMT, Cold-CMT and TPCS in sequence for the write request. If the mapping item is hit in the Hot-CMT, Cold-CMT or TPCS, S7 is executed; otherwise, the mapping item is not in the cache mapping table, and S8 is executed.
S7, if hit in the Hot-CMT, the mapping item is migrated to the MRU position of the Hot-CMT; if hit in Cold-CMT, the mapping item is also migrated to the MRU position in the Hot-CMT, and if the Hot-CMT size is larger than the set target threshold value, the rejection operation of the Hot-CMT is started; and if the mapping item is hit in the TPCS, migrating the mapping item to the MRU position of the Cold-CMT, and if the size of the Cold-CMT is larger than a set target threshold value at the moment, starting the elimination operation of the Cold-CMT. Finally, S9 is executed.
S8, when the mapping item is not in the cache mapping table, accessing GTD according to the logic page number of the request to obtain the physical page number of the translation page corresponding to the mapping item, loading the translation page containing the target mapping item into TPCS according to the physical page number, and loading the mapping item to the MRU position of Cold-CMT at the same time; and if the size of the Cold-CMT is larger than the set target threshold value, starting the removing operation of the Cold-CMT. Finally, S9 is executed.
S9, adjusting the sizes of the Hot-CMT and Cold-CMT cache tables according to the abrasion speeds of the media A and B; then allocating a new idle physical data page number for the write request, and writing the new physical data page number into a corresponding cache mapping table; and finally returning the physical page number before the mapping item and the newly allocated free physical page number.
The Cold-CMT in steps S3, S4, S7, and S8 uses a clean-entry-first Least Recently Used (LRU) eviction mechanism, implemented as follows: a fixed-length clean-entry priority eviction window is set at the tail of the Cold-CMT queue. When the Cold-CMT must evict, a clean (not updated) mapping entry is sought in the window starting from the LRU position. If one is found, it is evicted directly; otherwise, the dirty (updated) entry at the LRU position is selected, its information is written back to its translation page, and it is then evicted.
In step S7, the Hot-CMT uses a plain LRU eviction mechanism: the entry at the LRU position of the Hot-CMT queue is selected, its information is written back to its translation page, and it is then evicted.
In fig. 4, it is assumed that the logical address space of a hybrid SSD composed of SLC and MLC is 32 data pages and that one SLC physical page can store only 4 mapping entries. It is further assumed that no Hot-CMT or Cold-CMT resizing occurs during address translation. Note that the VPNs and VTPNs in fig. 4 do not actually occupy storage space; they appear only for convenience of description.
Under page-level address mapping, 32 data pages require 32 mapping entries, and since each SLC physical page stores only 4 mapping entries, the full mapping table requires 8 SLC physical pages. Further, because flash does not support in-place updates, 12 pages are actually allocated in the SLC for storing the full mapping table, numbered T0-T11. Suppose the SLC and MLC together have 48 data pages left for user data storage, numbered D0-D47, with the first 8 in the SLC and the last 40 in the MLC. The User Data space of each data page stores the real user data, and the OOB space stores the metadata of each page, such as the status of the current page (valid/invalid/free), the logical page number of the corresponding data, and error-checking data. Based on these assumptions, fig. 4(a) shows the initial state of each mapping table of the flash translation layer and of the underlying flash translation pages. In fig. 4, the clean-entry priority eviction window of the Cold-CMT has length 2, i.e. the two mapping items at the tail form the window; shaded entries in the Hot-CMT and Cold-CMT are mapping entries that have been updated.
Now assume that the following requests are processed in order: LPN=7 (read), LPN=1 (read), LPN=3 (write), LPN=5 (write), and LPN=6 (write). According to the disclosed method, starting from the initial state of fig. 4(a), address translation proceeds as follows:
C1, LPN=7 read request: the mapping item hits in the Cold-CMT; the mapping item (7, D22) is migrated to the MRU position of the Cold-CMT, and the requested PPN=D22 is returned.
C2, LPN=1 read request: the mapping item hits in the TPCS; the mapping item (1, D18) is loaded to the MRU position of the Cold-CMT. The Cold-CMT is now full, so a clean mapping item is sought in the eviction window starting from the LRU position; (8, D23) is clean and is evicted directly, and the requested PPN=D18 is returned.
C3, LPN=3 write request: the mapping item hits in the Hot-CMT; the mapping item (3, D7) is migrated to the MRU position of the Hot-CMT. Because the write request hits in the Hot-CMT, it is identified as a hot write and is allocated a free SLC physical page number PPN=D3; the previous data page D7 is invalidated, the mapping (3, D7) is updated to (3, D3), and finally the previous physical page number D7 and the newly allocated free physical page number D3 are returned.
C4, LPN=5 write request: the request misses in all cache mapping tables, so the GTD is queried to obtain the physical page number TPPN=T1 of the translation page containing the mapping item; that translation page is loaded into the TPCS, and the requested mapping item (5, D35) is loaded to the MRU position of the Cold-CMT. The Cold-CMT is now full, so a clean mapping item is sought in the eviction window starting from the LRU position; (9, D24) is clean and is evicted directly. Because the write request did not hit in the Hot-CMT, it is allocated a free MLC physical page number PPN=D36; the previous data page D35 is invalidated, the mapping (5, D35) is updated to (5, D36), and finally the previous physical page number D35 and the newly allocated free physical page number D36 are returned.
C5, LPN=6 write request: the mapping item hits in the Cold-CMT; the mapping item (6, D21) is migrated to the MRU position of the Hot-CMT. The Hot-CMT is now full, so the mapping item (0, D0) at the LRU position is selected, written back to the new translation page T8, and evicted. The request is then allocated a free SLC physical data page number PPN=D4; the previous data page D21 is invalidated, the mapping (6, D21) is updated to (6, D4), and finally the previous physical page number D21 and the newly allocated free physical page number D4 are returned.
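The hit-order logic of steps S6-S9 exercised by the write requests C3-C5 above can be sketched as follows, with plain dictionaries standing in for the cache tables. Eviction, TPCS loading, and GTD access are elided, and the function name and return values are illustrative assumptions:

```python
def choose_medium(lpn, hot_cmt, cold_cmt, tpcs):
    """Return which medium receives a write ('A' = hot, 'B' = cold),
    updating the (simplified) cache tables along the way. Sketch only."""
    if lpn in hot_cmt:                  # S7: Hot-CMT hit, hot write
        return "A"
    if lpn in cold_cmt:                 # S7: Cold-CMT hit, promote to Hot-CMT
        hot_cmt[lpn] = cold_cmt.pop(lpn)
        return "A"
    if lpn in tpcs:                     # S7: TPCS hit, load into Cold-CMT
        cold_cmt[lpn] = tpcs[lpn]
        return "B"
    # S8: full miss; the real FTL fetches the translation page via the GTD here
    cold_cmt[lpn] = None
    return "B"
```

In the fig. 4 example, medium "A" corresponds to the SLC (hot writes such as C3 and C5) and medium "B" to the MLC (the miss in C4).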
The mechanism in step S9 for adjusting the sizes of the Hot-CMT and Cold-CMT cache tables according to the wear rates of media A and B is as follows:
To achieve wear leveling between media A and B, the invention keeps their wear rates as consistent as possible by controlling the size ratio α between the Hot-CMT and the Cold-CMT. Specifically, α is controlled by the relative wear rate φ of medium A and medium B, defined as:
φ = max(RW_A, RW_B) / min(RW_A, RW_B)
In the above formula, max(·) takes the maximum value and min(·) the minimum value; RW_A denotes the average wear rate of medium A normalized to medium B, and RW_B denotes the average wear rate of medium B.
The adjustment method of α is as follows:
S21, calculating the current relative wear rate φ of media A and B.
S22, comparing φ with φ_t, a preset relative wear-leveling threshold. If φ < φ_t, media A and B are considered to be wearing evenly and α need not be adjusted; otherwise, the following S23 is executed.
S23, if RW_A > RW_B, medium A is wearing faster than medium B; the Hot-CMT ratio is reduced to α = α - Δα, reducing the amount of data written to medium A. Conversely, medium B is wearing faster than medium A, and the Hot-CMT ratio is increased to α = α + Δα, reducing the amount of data written to medium B.
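Steps S21-S23 amount to the following small adjustment function. This is a hedged sketch assuming both wear rates are positive; the default threshold and step mirror the embodiment below (φ_t = 1.3, Δα = 0.05), but the function itself is not lifted from the patent:

```python
def adjust_alpha(rw_a, rw_b, alpha, phi_t=1.3, delta=0.05):
    """Adjust the Hot-CMT share alpha from the average wear rates of
    media A and B (steps S21-S23). Assumes rw_a, rw_b > 0."""
    phi = max(rw_a, rw_b) / min(rw_a, rw_b)   # S21: relative wear rate
    if phi < phi_t:                           # S22: wear is balanced, keep alpha
        return alpha
    if rw_a > rw_b:                           # S23: medium A wears faster
        return alpha - delta                  # shrink Hot-CMT, write less to A
    return alpha + delta                      # medium B wears faster: grow Hot-CMT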
As shown in FIG. 5, the method of the present invention controls wear leveling between SLC and MLC by dynamically adjusting the sizes of the cold and hot mapping cache tables. The Hot-CMT ratio α is controlled by the relative wear rate φ of the SLC and MLC. In the present embodiment, the wear-leveling threshold is φ_t = 1.3, the initial value of the Hot-CMT scaling coefficient is α = 0.4, and Δα = 0.05.
First the current relative wear rate φ_0 is calculated. If φ_0 < 1.3, the current SLC and MLC wear is considered balanced and α is not adjusted.
If φ_0 ≥ 1.3, the current SLC and MLC wear is judged unbalanced and the dynamic adjustment of the cold-hot mapping cache tables is started. When the SLC wears faster, i.e. RW_SLC > RW_MLC, the Hot-CMT ratio is reduced to α = α - Δα to reduce the amount of data written to the SLC and increase the amount migrated to the MLC. When the MLC wears faster, i.e. RW_SLC < RW_MLC, the ratio is increased to α = α + Δα to reduce the amount of data written to the MLC while strengthening the SLC's ability to retain hot data. Adjusting the Hot-CMT ratio α in this way keeps the relative wear rate in check and balances wear between the SLC and the MLC.
The mechanism for assigning a new free physical data page number for the write request in step S9 is as follows:
The method divides the data block area into a hot data block area and a cold data block area: data in the hot data block area is hot data and is stored in medium A; data outside it is cold data and is stored in medium B. When data is written, a page whose mapping item is in the Hot-CMT is treated as hot data and assigned a free page number in medium A; otherwise it is treated as cold data and assigned a free page number in medium B.
For example, as shown in fig. 6(a), the write-request allocation process identifies hot and cold data from the Hot-CMT mapping information. When data is written, a data page whose mapping item is in the Hot-CMT is assigned a free SLC data page number, i.e. it is identified as a hot data page and stored in the high-performance SLC. Otherwise, the data page is assigned a free MLC data page number, i.e. pages whose mapping items are not in the Hot-CMT are identified as cold data pages and stored in the lower-performance MLC.
The garbage collection mechanism of the present invention is shown in fig. 6(b). Garbage collection queries the mapping information of each valid data page in the recycle block: if the mapping item of the data page is in the Hot-CMT, the page is still hot data and is copied to a free SLC data block; otherwise, the data is migrated to the MLC data block area.
When free data blocks in the SLC run short, the SLC garbage collection mechanism is triggered and proceeds as follows:
C1, selecting the block with the most invalid pages as the recycle block, then judging whether the recycle block contains valid data pages; if so, executing C2, otherwise executing C5.
C2, judging whether the mapping item of each effective data page of the recycle block hits in the Hot-CMT in turn, if so, executing C3, otherwise, executing C4.
C3, the data is still Hot data, the data page is migrated to the free block of SLC, and the corresponding mapping item information in the Hot-CMT is modified.
C4, the data is cold data, the data page is migrated to the free block of MLC, and the corresponding mapping information is modified.
C5, erasing the recovery block and ending the garbage recovery operation.
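Steps C1-C5 above can be sketched as follows. This is a simplified illustration under stated assumptions: a block is a list of (LPN, valid) pairs, migrations are returned as lists rather than performed, and "erasing" simply removes the recycle block:

```python
def collect_slc_block(blocks, hot_cmt):
    """Garbage-collect one SLC block per steps C1-C5. `blocks` maps a
    block id to a list of (lpn, valid) pages; returns the LPNs to keep
    in the SLC and the LPNs to migrate to the MLC. Illustrative sketch."""
    # C1: pick the block with the most invalid pages as the recycle block
    victim = max(blocks, key=lambda b: sum(not valid for _, valid in blocks[b]))
    to_slc, to_mlc = [], []
    for lpn, valid in blocks[victim]:
        if not valid:
            continue                      # invalid pages need no migration
        # C2-C4: a Hot-CMT hit means the page is still hot data
        (to_slc if lpn in hot_cmt else to_mlc).append(lpn)
    del blocks[victim]                    # C5: erase the recycle block
    return to_slc, to_mlc
```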
When free blocks in the MLC are insufficient, the MLC garbage collection mechanism is triggered: the block with the most invalid pages is selected as the recycle block, the valid data pages in it are migrated to a free MLC block and the mapping relations of their mapping items are updated, and finally the recycle block is erased, completing the garbage collection operation.
The invention provides a flash translation layer design method for hybrid SSDs, addressing the problem of designing a flash translation layer for an SSD built from multiple media. On one hand, the translation pages in media A and B are aligned to the page size of medium A, enabling uniform management of the mapping items. On the other hand, a cold-hot mapping cache distinguishes hot-write requests from other requests, so that hot-write requests are directed to the better-performing medium A and other write requests to medium B, improving the performance of the hybrid SSD and reducing its cost per unit capacity; meanwhile, the ratio of the cold-hot mapping cache tables is adjusted according to the wear rates of media A and B, better balancing wear between the two media and prolonging the service life of the hybrid SSD. The disclosed method is practical and has good application prospects in hybrid SSD design.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A conversion method of a flash translation layer oriented to a hybrid solid state disk, the hybrid solid state disk being composed of two flash media, wherein medium A has better performance but a higher cost per unit capacity, and medium B has worse performance than medium A but a lower cost per unit capacity; medium A is divided into a data block area and a translation block area, and medium B is a data block area; the translation block area is used for storing translation pages, and the data block area is used for storing data pages; a translation page stores the mapping relation between the logical address and the physical address of data, and a data page stores the actual data; translation pages are aligned according to the page size of medium A, and data pages are aligned according to the page size of medium B; characterized in that,
the flash translation layer comprises an address mapping buffer area consisting of a Global Translation Directory (GTD), a mapping page cache slot (TPCS), a Hot mapping cache table (Hot-CMT) and a Cold mapping cache table (Cold-CMT); the GTD records the actual physical page number of each translation page; the TPCS caches the whole mapping page to which the currently loaded mapping item belongs when that mapping item misses in the cache; the Hot-CMT caches frequently accessed write mapping items, and the Cold-CMT caches read mapping items and infrequently accessed write mapping items; the Hot-CMT and Cold-CMT record the logical page number LPN and the physical page number PPN of each mapping item;
The conversion comprises the following steps:
s1, when the access request comes, judging the request type; if the request is read, executing S2, if the request is write, executing S6;
s2, checking whether the request mapping item is in the Hot-CMT, Cold-CMT and TPCS according to the reading request; if the mapping item is positioned in the Hot-CMT, executing S5; if the mapping item is located in Cold-CMT or TPCS, executing S3; otherwise, if the mapping item is not in the cache mapping table, executing S4;
S3, loading the mapping item to the MRU (Most Recently Used) position of the Cold-CMT; if the size of the Cold-CMT is larger than the set target threshold value, starting the eviction operation of the Cold-CMT; then executing S5;
s4, when the mapping item is not in the cache mapping table, accessing GTD according to the requested logical page number to obtain the physical page number of the translation page corresponding to the mapping item, loading the translation page containing the target mapping item into TPCS according to the physical page number, loading the mapping item to the MRU position of Cold-CMT at the same time, if the size of Cold-CMT is larger than the set target threshold value, starting the elimination operation of Cold-CMT, and finally executing S5;
s5, returning the physical page number corresponding to the mapping item, ending the address mapping conversion;
s6, checking whether the request mapping item is in the Hot-CMT, Cold-CMT and TPCS according to the order of the write request; if the mapping item is hit in the Hot-CMT, Cold-CMT or TPCS, S7 is executed; otherwise, if the mapping item is not in the cache mapping table, executing S8;
s7, if hit in the Hot-CMT, the mapping item is migrated to the MRU position of the Hot-CMT; if hit in Cold-CMT, the mapping item is also migrated to the MRU position in the Hot-CMT, and if the Hot-CMT size is larger than the set target threshold value, the rejection operation of the Hot-CMT is started; if the mapping item is hit in the TPCS, the mapping item is migrated to the MRU position of the Cold-CMT, and if the size of the Cold-CMT is larger than a set target threshold value, the removing operation of the Cold-CMT is started; finally, executing S9;
s8, when the mapping item is not in the cache mapping table, accessing GTD according to the logic page number of the request to obtain the physical page number of the translation page corresponding to the mapping item, loading the translation page containing the target mapping item into TPCS according to the physical page number, and loading the mapping item to the MRU position of Cold-CMT at the same time; if the Cold-CMT size is larger than the set target threshold value, starting the removing operation of the Cold-CMT; finally, executing S9;
s9, adjusting the sizes of the Hot-CMT and Cold-CMT cache tables according to the abrasion speeds of the media A and B; then allocating a new idle physical data page number for the write request, and writing the new physical data page number into a corresponding cache mapping table; and finally returning the physical page number before the mapping item and the newly allocated free physical page number.
2. The conversion method of claim 1, wherein the flash translation layer employs page-level address mapping.
3. The conversion method according to claim 1, wherein the Cold-CMT in steps S3, S4, S7 and S8 employs a clean-entry-first least-recently-used (LRU) eviction mechanism: a fixed-length clean-entry priority eviction window is set at the tail of the Cold-CMT queue; when the Cold-CMT needs to evict, a clean (not updated) mapping item is sought in the priority eviction window starting from the LRU position, and if one is found, that clean mapping item is evicted directly; otherwise, the updated mapping item at the LRU position is selected, its information is written back to the translation page, and it is then evicted.
4. The conversion method according to claim 1, wherein the Hot-CMT in step S7 employs an LRU culling mechanism, which directly selects the mapping item at the LRU position of the Hot-CMT queue, and culls the information after writing back to the translation page.
5. The conversion method according to claim 1, wherein the adjusting the cache table sizes of the Hot-CMT and the Cold-CMT is specifically:
the wear rates of medium A and medium B are kept as consistent as possible by controlling the ratio α of the Hot-CMT to the Cold-CMT; α is controlled by the relative wear rate φ of medium A and medium B, and α is adjusted as follows:
s21, calculating the current relative wear rate phi of the media A and the media B;
S22, comparing φ with φ_t, where φ_t is a preset relative wear-leveling threshold; if φ < φ_t, media A and B are considered to be wearing evenly and α need not be adjusted; otherwise, the following S23 is executed;
S23, if RW_A > RW_B, medium A is wearing faster than medium B, and the Hot-CMT ratio is reduced to α = α - Δα, reducing the amount of data written to medium A; conversely, medium B is wearing faster than medium A, and the Hot-CMT ratio is increased to α = α + Δα, reducing the amount of data written to medium B.
6. The conversion method of claim 5, wherein φ is defined as:
φ = max(RW_A, RW_B) / min(RW_A, RW_B)
wherein max(·) represents the maximum value and min(·) the minimum value; RW_A represents the average wear rate of medium A normalized to medium B, and RW_B represents the average wear rate of medium B.
7. The conversion method according to claim 1, wherein the step S9 of assigning a new free physical data page number to the write request is specifically:
dividing the data block area into a hot data block area and a cold data block area, namely, storing the data in the hot data block area as hot data in the medium A; data not in the hot data block area is cold data and is stored in the medium B; when data is written, the page of the mapping item in the Hot-CMT is considered as Hot data, and a free page number belonging to the medium A is allocated; otherwise, it is considered as cold data, and a free page number belonging to medium B is assigned.
8. The conversion method according to claim 1, wherein when there are insufficient free blocks in the medium a, the garbage collection mechanism of the medium a is triggered, and the operation process is as follows:
s31, selecting the block with the most invalid pages as a recovery block, and then judging whether the recovery block has valid data pages; if there is a valid data page, then execute S32, otherwise execute S35;
s32, sequentially judging whether the mapping item of each effective data page is hit in the Hot-CMT, if yes, executing S33, otherwise executing S34;
s33, the data is still hot data, the data page is migrated to the free block of the medium A, and the corresponding mapping item information is modified;
s34, the data is cold data, the data page is transferred to the free block of the medium B, and the corresponding mapping information is modified;
and S35, erasing the recovery block and finishing the garbage recovery operation.
9. The conversion method according to claim 1, wherein when the free block in the medium B is insufficient, the garbage collection mechanism of the medium B is triggered to select the block with the most invalid pages as the collection block, then the valid data pages in the collection block are migrated to the free block of the medium B, and the mapping relation of the mapping item is updated; and finally erasing the recovery block and finishing the garbage recovery operation.
CN201910675390.5A 2019-07-25 2019-07-25 Flash translation layer facing hybrid solid state disk and conversion method Active CN110413537B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910675390.5A CN110413537B (en) 2019-07-25 2019-07-25 Flash translation layer facing hybrid solid state disk and conversion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910675390.5A CN110413537B (en) 2019-07-25 2019-07-25 Flash translation layer facing hybrid solid state disk and conversion method

Publications (2)

Publication Number Publication Date
CN110413537A CN110413537A (en) 2019-11-05
CN110413537B true CN110413537B (en) 2021-08-24

Family

ID=68363092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910675390.5A Active CN110413537B (en) 2019-07-25 2019-07-25 Flash translation layer facing hybrid solid state disk and conversion method

Country Status (1)

Country Link
CN (1) CN110413537B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111258924B (en) * 2020-01-17 2021-06-08 中国科学院国家空间科学中心 Mapping method based on satellite-borne solid-state storage system self-adaptive flash translation layer
CN111506517B (en) * 2020-03-05 2022-05-17 杭州电子科技大学 Flash memory page level address mapping method and system based on access locality
CN112000296B (en) * 2020-08-28 2024-04-09 北京计算机技术及应用研究所 Performance optimization system in full flash memory array
CN112506445B (en) * 2020-12-29 2022-05-20 杭州电子科技大学 Partition proportion self-adaptive adjustment method for homogeneous hybrid solid state disk
CN113220241A (en) * 2021-05-27 2021-08-06 衢州学院 Cross-layer design-based hybrid SSD performance and service life optimization method
CN113435109B (en) * 2021-06-04 2024-01-30 衢州学院 Optimization method for performance and service life of mixed SSD

Citations (3)

Publication number Priority date Publication date Assignee Title
CN102122531A (en) * 2011-01-27 2011-07-13 浪潮电子信息产业股份有限公司 Method for improving stability in use of large-capacity solid state disk
CN103440206A (en) * 2013-07-25 2013-12-11 记忆科技(深圳)有限公司 Solid state hard disk and mixed mapping method of solid state hard disk
CN109446117A (en) * 2018-09-06 2019-03-08 杭州电子科技大学 A kind of solid state hard disk page grade flash translation layer (FTL) design method

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US8838935B2 (en) * 2010-09-24 2014-09-16 Intel Corporation Apparatus, method, and system for implementing micro page tables
US9336129B2 (en) * 2013-10-02 2016-05-10 Sandisk Technologies Inc. System and method for bank logical data remapping
KR20180045091A (en) * 2016-10-24 2018-05-04 에스케이하이닉스 주식회사 Memory system and method of wear-leveling for the same
CN106776375A (en) * 2016-12-27 2017-05-31 东方网力科技股份有限公司 Data cache method and device inside a kind of disk
CN109739780A (en) * 2018-11-20 2019-05-10 北京航空航天大学 Dynamic secondary based on the mapping of page grade caches flash translation layer (FTL) address mapping method

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN102122531A (en) * 2011-01-27 2011-07-13 浪潮电子信息产业股份有限公司 Method for improving stability in use of large-capacity solid state disk
CN103440206A (en) * 2013-07-25 2013-12-11 记忆科技(深圳)有限公司 Solid state hard disk and mixed mapping method of solid state hard disk
CN109446117A (en) * 2018-09-06 2019-03-08 杭州电子科技大学 A kind of solid state hard disk page grade flash translation layer (FTL) design method

Non-Patent Citations (1)

Title
A Clustered Page-Level Flash Translation Layer Algorithm Based on a Classification Strategy; Yao Yingbiao et al.; Journal of Computer Research and Development (计算机研究与发展); January 2017; Vol. 54, No. 1; Sections 1.2, 2.1, 2.2, 2.3, Fig. 1, Fig. 3 *

Also Published As

Publication number Publication date
CN110413537A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN110413537B (en) Flash translation layer facing hybrid solid state disk and conversion method
US11893238B2 (en) Method of controlling nonvolatile semiconductor memory
US9430376B2 (en) Priority-based garbage collection for data storage systems
Murugan et al. Rejuvenator: A static wear leveling algorithm for NAND flash memory with minimized overhead
US9378131B2 (en) Non-volatile storage addressing using multiple tables
Jiang et al. S-FTL: An efficient address translation for flash memory by exploiting spatial locality
US11194737B2 (en) Storage device, controller and method for operating the controller for pattern determination
US10740251B2 (en) Hybrid drive translation layer
CN109582593B (en) FTL address mapping reading and writing method based on calculation
CN108845957B (en) Replacement and write-back self-adaptive buffer area management method
CN109783398A (en) One kind is based on related perception page-level FTL solid state hard disk performance optimization method
KR101403922B1 (en) Apparatus and method for data storing according to an access degree
KR20100115090A (en) Buffer-aware garbage collection technique for nand flash memory-based storage systems
CN113254358A (en) Method and system for address table cache management
CN111352593B (en) Solid state disk data writing method for distinguishing fast writing from normal writing
Yao et al. HDFTL: An on-demand flash translation layer algorithm for hybrid solid state drives
Ryu SAT: switchable address translation for flash memory storages
CN112559384B (en) Dynamic partitioning method for hybrid solid-state disk based on nonvolatile memory
KR100894845B1 (en) Method for Address Translation using the Flash Translation Layer Module
Zhao et al. A buffer algorithm of flash database based on LRU
Ryu A flash translation layer for multimedia storages

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant