CN107870867B - Method and device for 32-bit CPU to access memory space larger than 4GB


Publication number: CN107870867B
Application number: CN201610862939.8A
Authority: CN (China)
Prior art keywords: ftl, address, entry, cpu, index
Legal status: Active (granted)
Other languages: Chinese (zh); other versions: CN107870867A
Inventors: 丁胜涛, 陈亮, 徐晓画
Current and original assignee: Beijing Starblaze Technology Co ltd
Application filed by Beijing Starblaze Technology Co ltd; priority to CN201610862939.8A
Publications: CN107870867A (application), CN107870867B (grant)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non-contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0292: User address space allocation using tables or multilevel address translation means

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Methods and apparatus are provided for a 32-bit CPU to access a memory space larger than 4GB. Disclosed is a method of accessing an FTL table, comprising: in response to receiving an IO command, obtaining the logical address of the IO command; calculating, from the logical address, the virtual address and the physical address of the FTL table entry corresponding to the logical address; setting a Memory Management Unit (MMU) according to the physical address; and accessing the FTL table entry through the memory management unit using the virtual address.

Description

Method and device for 32-bit CPU to access memory space larger than 4GB
Technical Field
The invention relates to storage device controllers, and in particular to a method and apparatus by which a 32-bit CPU accesses a memory space larger than 4GB through an MMU (Memory Management Unit) in a storage device controller.
Background
In a Solid State Drive (SSD), the mapping from logical addresses to physical addresses is maintained by an FTL (Flash Translation Layer). Logical addresses constitute the storage space of the solid-state storage device as perceived by upper-level software such as the operating system. A physical address is the address used to access a physical storage location of the solid-state storage device. In the prior art, address mapping may also be implemented through an intermediate address form: for example, a logical address is mapped to an intermediate address, which in turn is further mapped to a physical address.
The table structure storing the mapping from logical addresses to physical addresses is called the FTL table. The FTL table is important metadata in a solid-state storage device. Usually, each entry of the FTL table records an address mapping at the granularity of a data page of the solid-state storage device. The FTL table of a solid-state storage device is large, on the order of several gigabytes.
The FTL table includes a plurality of FTL table entries (or entries). An example of an FTL table structure is provided in Chinese patent application No. 201510430174.6. In one example, each FTL table entry records the correspondence between a logical page address and a physical page. In another example, each FTL table entry records the correspondence between a logical block address and a physical block address. In still another example, the FTL table records the mapping between logical block addresses and physical block addresses, and/or the mapping between logical page addresses and physical page addresses. An FTL table entry may also record the mapping between one logical address and one or more physical addresses.
In yet another example, the FTL table is stored in a contiguous memory address space; a physical address is recorded in each FTL table entry, and the logical address corresponding to that physical address is represented by the memory address of the entry itself. The number of FTL table entries corresponds to the size of the logical address space of the solid-state storage device.
The flash controller needs to access the FTL table frequently during operation. When reading the flash memory, the FTL table is queried by logical address to obtain the physical address at which the data is stored. When writing to the flash memory, a physical address is allocated for the written data, and the correspondence between the written logical address and the physical address is recorded in the FTL table. When garbage collection (GC), wear leveling, or similar operations occur, the mapping between logical and physical addresses changes, and the FTL table needs to be updated.
There are a number of solutions for accessing the FTL. FTL table fast access method and apparatus are provided in chinese patent application for invention (CN 201610346104.7), which is incorporated herein by reference.
The FTL table is characterized by a large number of entries (a typical SSD manages hundreds of millions of FTL table entries), a small size per entry (several to tens of bytes), and strongly random access (accesses to FTL table entries are distributed across the whole entry space and lack locality). Moreover, since large-capacity flash must be supported, it is often necessary to access a space exceeding 4GB.
To store gigabytes of data, DRAM (Dynamic Random Access Memory) is generally used. However, the access latency of DRAM cannot match high-speed processing units such as the CPU (Central Processing Unit), which seriously affects the processing performance of the flash controller. Conventionally, a Cache (cache memory) is employed as an intermediate memory layer that provides high-speed data access to the CPU by caching part of the data held in DRAM.
However, due to the randomness of FTL table accesses, even if a Cache is used to cache the FTL table, frequent replacement may cause Cache thrashing, and FTL access performance may not be significantly improved.
For example, when accessing FTL entry A, the address in DRAM storing entry A is read first. Because FTL entry accesses are random, entry A is unlikely to already be in the Cache, so the Cache can be assumed to always miss; entry A must then be read from external DRAM into the Cache, with a latency penalty of roughly 100 ns or more.
The FTL table can also be accessed using DMA. To access the FTL table, the high-order and low-order bits of the address are configured, and then DMA is triggered to move the FTL entry into a space the CPU can access directly (or with low latency), such as the lower 4GB of DDR memory or the system SRAM. However, the drawbacks of this solution are also evident: configuring the DMA takes time, so CPU access latency can be significant; and because DMA is involved, memory consistency must be considered.
For large-capacity solid-state storage devices, the size of the FTL table may exceed 4GB, whereas the address bus and/or data bus width of a 32-bit CPU makes it difficult to access more than 4GB of memory space.
Moreover, when the size of FTL table entries changes, the access latency of the FTL table increases further.
Disclosure of Invention
According to a first aspect of the present invention, there is provided a first method for accessing an FTL table according to the first aspect of the present invention, comprising: in response to receiving an IO command, obtaining the logical address of the IO command; calculating, from the logical address, the virtual address and the physical address of the FTL table entry corresponding to the logical address; setting a Memory Management Unit (MMU) according to the physical address; and accessing the FTL table entry through the memory management unit using the virtual address.
According to the first method of accessing an FTL table of the first aspect of the present invention, there is provided a second method of accessing an FTL table according to the first aspect of the present invention, wherein the width of the physical address is larger than that of the virtual address.
According to the first or second method for accessing the FTL table of the first aspect of the present invention, there is provided a third method for accessing the FTL table according to the first aspect of the present invention, further comprising: recording the mapping relation between the virtual address and the physical address in a Translation Lookaside Buffer (TLB) of the memory management unit.
According to the first to third methods of accessing the FTL table of the first aspect of the present invention, there is provided the fourth method of accessing the FTL table according to the first aspect of the present invention, wherein the width of the physical address is larger than the internal address bus bit width of the CPU executing the method.
According to a second aspect of the present invention, there is provided a first method for accessing an FTL table according to the second aspect of the present invention, comprising: in response to receiving an IO command, obtaining the logical address of the IO command; calculating, from the logical address, the virtual address and the physical address of the FTL table entry corresponding to the logical address; setting a Memory Management Unit (MMU) according to the physical address; sending a preload request for the FTL table entry of a specified address, so as to load the FTL table entry located at the virtual address into the storage location indicated by a first index of the FTL entry storage unit; and sending a read request for the FTL entry of the first index, so as to obtain the FTL entry of the first index from the FTL entry storage unit.
The first method for accessing an FTL table according to the second aspect of the present invention provides a second method for accessing an FTL table according to the second aspect of the present invention, further comprising: sending an update request to the FTL table entry of the first index to update the FTL table entry indicated by the first index in the FTL table entry storage unit; and issuing a flush request for the FTL entry of the first index to write the FTL entry indicated by the first index in the FTL entry storage unit into the main memory location indicated by the FTL entry address indicated by the first index in the FTL entry storage unit.
According to the first or second method of accessing an FTL table of the second aspect of the present invention, there is provided a third method of accessing an FTL table according to the second aspect of the present invention, wherein the width of the physical address is larger than that of the virtual address.
According to one of the first to third methods of accessing the FTL table of the second aspect of the present invention, there is provided a fourth method of accessing the FTL table according to the second aspect of the present invention, further comprising: recording the mapping relation between the virtual address and the physical address in a Translation Lookaside Buffer (TLB) of the memory management unit.
A fifth method of accessing an FTL table according to the second aspect of the present invention is provided according to one of the first to fourth methods of accessing an FTL table of the second aspect of the present invention, wherein the width of the physical address is larger than the address bus bit width of a CPU executing the method.
According to one of the first to fifth methods of accessing an FTL table of the second aspect of the present invention, there is provided a sixth method of accessing an FTL table according to the second aspect of the present invention, further comprising: an update request is issued for the FTL entry of the first index to write the FTL entry indicated by the update request to the main memory location indicated by the FTL entry address indicated by the index of the update request in the FTL entry storage unit.
According to a third aspect of the present invention, there is provided a first apparatus for accessing an FTL table according to the third aspect of the present invention, comprising: a logical address obtaining module, configured to obtain the logical address of an IO command in response to receiving the IO command; an address calculation module, configured to calculate, according to the logical address, the virtual address and the physical address of the FTL table entry corresponding to the logical address; a memory management unit setting module, configured to set a Memory Management Unit (MMU) according to the physical address; and a memory access module, configured to access the FTL table entry through the memory management unit using the virtual address.
According to the first apparatus for accessing an FTL table of the third aspect of the present invention, there is provided a second apparatus for accessing an FTL table according to the third aspect of the present invention, wherein the width of the physical address is larger than that of the virtual address.
According to the first or second apparatus for accessing an FTL table of the third aspect of the present invention, there is provided a third apparatus for accessing an FTL table according to the third aspect of the present invention, further comprising: a Translation Lookaside Buffer setting module, configured to record the mapping relationship between the virtual address and the physical address in a Translation Lookaside Buffer (TLB) of the memory management unit.
According to one of the first to third apparatuses for accessing an FTL table of the third aspect of the present invention, there is provided a fourth apparatus for accessing an FTL table according to the third aspect of the present invention, wherein the width of the physical address is larger than the address bus bit width of the CPU.
According to a fourth aspect of the present invention, there is provided a first apparatus for accessing an FTL table according to the fourth aspect of the present invention, comprising: a logical address obtaining module, configured to obtain the logical address of an IO command in response to receiving the IO command; an address calculation module, configured to calculate, according to the logical address, the virtual address and the physical address of the FTL table entry corresponding to the logical address; a memory management unit setting module, configured to set a Memory Management Unit (MMU) according to the physical address; a preload module, configured to send a preload request for the FTL table entry of a specified address, so as to load the FTL table entry located at the virtual address into the storage location indicated by a first index of the FTL entry storage component; and a read request module, configured to send a read request for the FTL entry of the first index, so as to obtain the FTL entry of the first index from the FTL entry storage component.
The first apparatus for accessing FTL table according to the fourth aspect of the present invention provides the second apparatus for accessing FTL table according to the fourth aspect of the present invention, further comprising: an update request module, configured to send an update request for the FTL entry of the first index to update the FTL entry indicated by the first index in the FTL entry storage component; and the flushing request module is used for sending a flushing request to the FTL table entry of the first index so as to write the FTL table entry indicated by the first index in the FTL table entry storage unit into the main memory location indicated by the FTL table entry address indicated by the first index in the FTL table entry storage unit.
According to the first or second apparatus for accessing an FTL table of the fourth aspect of the present invention, there is provided a third apparatus for accessing an FTL table according to the fourth aspect of the present invention, wherein the width of the physical address is larger than that of the virtual address.
According to one of the first to third apparatuses for accessing an FTL table of the fourth aspect of the present invention, there is provided a fourth apparatus for accessing an FTL table according to the fourth aspect of the present invention, further comprising: a Translation Lookaside Buffer setting module, configured to record the mapping relationship between the virtual address and the physical address in a Translation Lookaside Buffer (TLB) of the memory management unit.
According to one of the first to fourth apparatuses for accessing an FTL table of the fourth aspect of the present invention, there is provided a fifth apparatus for accessing an FTL table according to the fourth aspect of the present invention, wherein the width of the physical address is larger than the address bus bit width of the CPU.
According to one of the first to fifth apparatuses for accessing an FTL table of the fourth aspect of the present invention, there is provided a sixth apparatus for accessing an FTL table according to the fourth aspect of the present invention, further comprising: an update request module, configured to send an update request for the FTL entry of the first index, so as to write the FTL entry indicated by the update request into the main memory location indicated by the FTL entry address indicated by the index of the update request in the FTL entry storage unit.
According to a fifth aspect of the present invention, there is provided a first apparatus for accessing an FTL table according to the fifth aspect of the present invention, comprising an FTL entry address storage unit that stores a plurality of FTL entry addresses, an FTL entry data storage unit that stores a plurality of FTL entry data, a CPU interface, and a Memory Management Unit (MMU); the CPU interface is configured to receive requests sent by the CPU; the memory management unit is configured to convert a received virtual address into a physical address for the main memory and to access the main memory; and wherein an FTL entry address and the FTL entry data associated with it are accessed in the FTL entry address storage unit and the FTL entry data storage unit, respectively, by the same index, the FTL entry address being the physical address in the main memory of the FTL entry data associated with it.
According to the first apparatus for accessing an FTL table of the fifth aspect of the present invention, there is provided a second apparatus for accessing an FTL table according to the fifth aspect of the present invention, wherein, in response to the CPU interface receiving a preload request for an FTL table entry specifying a virtual address, the preload request further specifying a first index indicating the storage location of the FTL table entry in the FTL entry data storage unit, the FTL table entry is obtained from the main memory by the memory management unit converting the specified virtual address into a physical address of the main memory; and, in response to the CPU interface receiving a read request for the FTL table entry of the first index, the FTL table entry is provided to the CPU through the CPU interface.
The first or second apparatus for accessing an FTL table according to the fifth aspect of the present invention provides the third apparatus for accessing an FTL table according to the fifth aspect of the present invention, wherein in response to the CPU receiving a setting request to the memory management unit, a mapping relationship of a virtual address and a physical address is recorded in the memory management unit, so that the memory management unit can obtain the physical address corresponding to the virtual address based on the virtual address.
According to one of the first to third apparatuses for accessing an FTL table of the fifth aspect of the present invention, there is provided a fourth apparatus for accessing an FTL table according to the fifth aspect of the present invention, wherein the width of the physical address is larger than that of the virtual address.
According to one of the first to fourth apparatuses for accessing an FTL table of the fifth aspect of the present invention, there is provided the fifth apparatus for accessing an FTL table of the fifth aspect of the present invention, wherein the memory management unit further comprises a Translation Lookaside Buffer (TLB) in which a mapping relationship between the virtual address and the physical address is recorded.
According to one of the first to fifth means for accessing the FTL table of the fifth aspect of the present invention, there is provided a sixth means for accessing the FTL table according to the fifth aspect of the present invention, wherein the width of the physical address is larger than the address bus bit width of the CPU.
According to the second apparatus for accessing an FTL table of the fifth aspect of the present invention, there is provided a seventh apparatus for accessing an FTL table according to the fifth aspect of the present invention, wherein, in response to the CPU interface receiving a preload request for the FTL entry of a specified address, the apparatus for accessing the FTL processes the preload request in an asynchronous manner; and, in response to the CPU interface receiving a read request for the FTL entry of the first index, the apparatus for accessing the FTL processes the read request in a synchronous manner.
According to one of the first to seventh apparatuses for accessing an FTL table of the fifth aspect of the present invention, there is provided an eighth apparatus for accessing an FTL table of the fifth aspect of the present invention, wherein in response to receiving a read request for an FTL entry of a first index, if an FTL entry of a specified virtual address does not exist in the FTL entry data storage unit, further waiting for a memory management unit to load an FTL entry at a physical address corresponding to the specified virtual address from a main memory into the FTL entry data storage unit.
According to one of the first to eighth apparatuses for accessing an FTL table of the fifth aspect of the present invention, there is provided a ninth apparatus for accessing an FTL table according to the fifth aspect of the present invention, wherein retrieving an FTL entry from the main memory further comprises: locking the FTL table entry at the physical address.
According to one of the first to ninth apparatuses for accessing an FTL table of the fifth aspect of the present invention, there is provided the tenth apparatus for accessing an FTL table of the fifth aspect of the present invention, wherein in response to the CPU interface receiving an update request for the FTL entry of the first index, the FTL entry data indicated by the first index in the FTL entry data storage means is updated; in response to the CPU interface receiving a flush request for the FTL entry of the first index, the FTL entry data specified by the first index is written to the main memory at the FTL entry address in the FTL entry address storage unit specified by the first index.
According to a tenth apparatus for accessing an FTL table of the fifth aspect of the present invention, there is provided the eleventh apparatus for accessing an FTL table of the fifth aspect of the present invention, wherein in response to receiving an FTL entry update request, the apparatus for accessing an FTL processes the FTL entry update request in a synchronous manner; in response to receiving an FTL entry flush request, the means for accessing the FTL processes the FTL entry flush request in an asynchronous manner.
According to a tenth or eleventh apparatus for accessing an FTL table of the fifth aspect of the present invention, there is provided the twelfth apparatus for accessing an FTL table of the fifth aspect of the present invention, wherein in response to receiving an FTL entry flush request, it is further detected whether a specified FTL entry has been written to the main memory, and after writing the specified FTL entry to the main memory, it is indicated through the CPU interface that processing of the FTL entry flush request is completed.
According to a sixth aspect of the present invention there is provided a computer program comprising computer program code which, when loaded into a computer system and executed thereon, causes the computer system to perform a method of accessing FTL tables as provided in accordance with the first or second aspect of the present invention.
According to a seventh aspect of the present invention there is provided a program comprising program code which, when loaded into and executed on a storage device, causes the storage device to perform a method of accessing FTL tables as provided in accordance with the first or second aspects of the present invention.
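The preload/read/update/flush request sequence described in the second and fifth aspects can be sketched as follows. This is a minimal software model, not the actual hardware interface: all names, the slot count, and the array-backed representation of the entry address and entry data storage units are illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SLOTS 64  /* hypothetical number of indexed storage locations */

/* Model of the FTL acceleration unit: entry addresses and entry data are
 * accessed by the same index, as in the fifth aspect. */
typedef struct {
    uint64_t addr[NUM_SLOTS];   /* FTL entry address storage unit (physical addresses) */
    uint64_t data[NUM_SLOTS];   /* FTL entry data storage unit */
    bool     valid[NUM_SLOTS];
} ftl_accel_t;

/* Preload (asynchronous): fetch the entry at a physical address into the slot
 * named by the index, remembering its address for a later flush. The direct
 * array read stands in for the MMU access to main memory. */
void ftl_preload(ftl_accel_t *a, unsigned idx, uint64_t phys, const uint64_t *main_mem)
{
    a->addr[idx]  = phys;
    a->data[idx]  = main_mem[phys / sizeof(uint64_t)];
    a->valid[idx] = true;
}

/* Read (synchronous): return the entry cached at the index. */
uint64_t ftl_read(const ftl_accel_t *a, unsigned idx) { return a->data[idx]; }

/* Update (synchronous): modify only the cached entry. */
void ftl_update(ftl_accel_t *a, unsigned idx, uint64_t entry) { a->data[idx] = entry; }

/* Flush (asynchronous): write the cached entry back to the main-memory
 * location recorded for this slot. */
void ftl_flush(const ftl_accel_t *a, unsigned idx, uint64_t *main_mem)
{
    main_mem[a->addr[idx] / sizeof(uint64_t)] = a->data[idx];
}
```

A typical sequence is preload, then read, then (after the mapping changes) update and flush, matching the second aspect's method.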
Drawings
The invention, as well as a preferred mode of use and further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
FIG. 1 is a block diagram of circuitry according to an embodiment of the invention;
FIG. 2A is a schematic diagram of a prior art virtual address space to physical address space mapping;
FIG. 2B is a schematic diagram of a virtual address space to physical address space mapping according to an embodiment of the invention;
FIG. 3 is a flowchart of an FTL entry reading process according to an embodiment of the present invention;
FIG. 4 is a block diagram of circuitry according to yet another embodiment of the invention;
FIG. 5 is a block diagram of an FTL acceleration circuit according to yet another embodiment of the present invention;
FIG. 6 is a flowchart of an FTL entry reading process according to yet another embodiment of the present invention;
FIG. 7A is a flowchart of an FTL entry update process according to yet another embodiment of the present invention; and
fig. 7B is a flowchart of an FTL entry updating process according to yet another embodiment of the present invention.
Detailed Description
FIG. 1 is a block diagram of circuitry according to an embodiment of the invention. The CPU 110 is coupled with a main memory 130, such as a DRAM, to constitute a circuit system. The CPU 110 accesses the main memory 130 through a Memory Management Unit (MMU) 120. The CPU provides a virtual address to the MMU 120, and the MMU 120 translates the virtual address into a physical address and accesses the main memory 130 with the physical address. By way of example, the CPU 110 provides a 32-bit virtual address, while the MMU 120 provides a 40-bit physical address. The width of the physical address is related to the size of the main memory 130. Optionally, the MMU 120 accesses the main memory 130 via a bus 140. Still optionally, the MMU 120 is also coupled to a TLB 150 (Translation Lookaside Buffer). The TLB 150 caches correspondences between virtual addresses and physical addresses.
The main memory stores the FTL table. The storage space of the main memory may be several GB (gigabytes), and the FTL table may occupy more than 4GB. In embodiments according to the present invention, the data structure of the FTL table is known, so that from the logical address indicated by an IO command accessing the solid-state storage device, the physical address of the FTL table entry for that logical address can be obtained. In one example, the FTL table is stored in a flat structure, and the entry of the FTL table recording logical address L is stored at physical address (B + L × 8), where B is the base address of the FTL table and the FTL table entry for each logical address occupies 8 bytes of storage space. In another example, the FTL table is stored in a tree structure: leaf nodes of the tree record FTL table entries, and non-leaf nodes record indexes of FTL table entries. The CPU 110 traverses the tree structure with the logical address L to obtain the physical address of the FTL table entry recording logical address L.
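The flat layout just described can be sketched directly: the physical address of an FTL entry is computed from the logical address and the table base. This is an illustrative sketch; the function name is hypothetical, and the 8-byte entry size follows the example above.

```c
#include <stdint.h>

#define FTL_ENTRY_SIZE 8u  /* each FTL entry occupies 8 bytes, per the example */

/* Physical address of the FTL entry for logical address L in a flat table:
 * B + L * 8, where B is the base address of the table. */
static uint64_t ftl_entry_phys_addr(uint64_t base, uint64_t logical_addr)
{
    return base + logical_addr * FTL_ENTRY_SIZE;
}
```

Note that the result can exceed 32 bits (here it is 64-bit), which is exactly why the 32-bit CPU needs the MMU-based scheme described below.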
In the prior art, MMUs are used to map a large virtual address space onto a smaller physical address space, to solve the problem of insufficient physical memory capacity and/or to provide an exclusive virtual memory space for each of multiple applications. FIG. 2A is a schematic diagram of a prior-art virtual address space to physical address space mapping. By way of example, physical memory provides a physical address space of 0-2GB while the virtual address space is 0-4GB. In operation, the MMU maps 2GB of the virtual address space onto the physical address space. When the CPU attempts to access a virtual address that is not mapped to the physical address space, the data in the current physical address space is stored to external memory through a page-fault interrupt or other mechanism, and the mapping between the virtual address space and the physical memory space is changed, so that the CPU can access the corresponding physical address through the virtual address.
FIG. 2B is a schematic diagram of a virtual address space to physical address space mapping according to an embodiment of the invention. In an embodiment in accordance with the invention, the physical address space provided by main memory 130 (see FIG. 1) is greater than the addressing space of the CPU. By way of example, in FIG. 2B, the CPU has an addressable virtual address space of 0-4GB, while the main memory provides a physical address space of 0-8GB. At runtime, the MMU 120 (see FIG. 1) maps the virtual address space onto a portion of the physical address space (e.g., 4GB in size). The CPU 110 may also instruct the MMU 120 to change the address-space mapping so as to map the virtual address space onto other portions of the physical address space. In FIG. 2B, the current 4GB virtual address space is mapped to the 4GB-8GB range of the physical address space; when the CPU needs to access a physical address in the 0-4GB range, it instructs the MMU 120 to change the mapping so that the 4GB virtual address space maps to the 0-4GB range of the physical address space, and then accesses the MMU 120 with the virtual address.
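The window remapping of FIG. 2B reduces to simple address arithmetic: a physical address in the larger space determines which 4GB-aligned window must be mapped and what 32-bit offset the CPU uses inside it. A minimal sketch, with illustrative names:

```c
#include <stdint.h>

#define WINDOW_BITS 32u                     /* the CPU's 4GB window */
#define WINDOW_SIZE (1ULL << WINDOW_BITS)

/* Which 4GB-aligned region of the physical space the address falls in
 * (the window the MMU must currently map). */
static uint64_t window_of(uint64_t phys) { return phys >> WINDOW_BITS; }

/* The 32-bit virtual address the CPU uses once that window is mapped. */
static uint32_t offset_in_window(uint64_t phys)
{
    return (uint32_t)(phys & (WINDOW_SIZE - 1));
}
```

For example, in FIG. 2B a physical address in the 4GB-8GB range lies in window 1; before accessing it, the CPU only needs to change the mapping if the currently mapped window differs.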
There are various ways of mapping the virtual address space onto the physical address space, such as page-based mapping. The virtual address space may also be backed by a discontiguous physical address space.
As another example, CPU 110 provides a 32-bit virtual address. MMU 120 provides an 8-bit base address register and concatenates the 32-bit virtual address provided by the CPU with the base address register to obtain a 40-bit address for accessing main memory. As another example, MMU 120 or TLB 150 provides an address mapping table for recording the correspondence of 32-bit virtual addresses to 40-bit physical addresses. CPU 110 may access or set the address mapping table. After MMU 120 receives the 32-bit virtual address from the CPU, it looks up the address mapping table, obtains the 40-bit physical address, and accesses main memory 130 with the 40-bit physical address. As an example, the CPU sets the MMU's base address register with an 8-bit extended address. As another example, the CPU updates TLB 150, using a 32-bit virtual address as an index into TLB 150 and a 40-bit physical address as the value corresponding to the index. In the embodiment according to the present invention, each time before CPU 110 needs to obtain an entry of the FTL table from main memory 130, the CPU updates MMU 120 or determines whether MMU 120 needs to be updated. Thus, the CPU accesses MMU 120 with a 32-bit virtual address, while MMU 120 accesses main memory 130 with a 40-bit physical address and provides the access results to CPU 110.
In the embodiment of the present invention, when an IO command for accessing the solid-state storage device is received, a physical address of an entry of the FTL table recording a logical address is obtained from the logical address indicated by the IO command by using a known FTL table structure. The CPU 110 splits, for example, a 40-bit physical address into a 32-bit virtual address and an 8-bit extended address, and sets the MMU120 with the 8-bit extended address, such that when the 32-bit virtual address is next received, the MMU120 combines the 32-bit virtual address and the 8-bit extended address into a 40-bit physical address and accesses the main memory 130 with the 40-bit physical address.
Fig. 3 is a flowchart of an FTL entry reading process according to an embodiment of the present invention. When the CPU wishes to use the FTL table entry, the FTL table entry is obtained from the main memory according to the embodiment of the invention.
By way of example, the CPU 110 (see also FIG. 1) of the solid-state storage device receives the IO command and obtains a logical address (denoted as logical address L) to be accessed by the IO command (310). Based on the logical address L, the physical address (denoted as physical address P) at which the entry corresponding to logical address L is stored in the FTL table is calculated (320); the physical address P has, for example, 48 bits. If the FTL table is stored in a flat (Flat) structure in the solid-state storage device, the entry of the FTL table in which logical address L is recorded is stored at physical address (B + L × 8), where B indicates the base address of the FTL table and the FTL table entry for each logical address occupies 8 bytes of storage space. CPU 110 sets MMU 120 with the calculated physical address P (330). For example, the upper 16 bits of the 48-bit physical address P are filled into the base address register of MMU 120, such that when MMU 120 is next accessed with a virtual address (the lower 32 bits of physical address P), MMU 120 concatenates the 16-bit base address register with the 32-bit address, forming a 48-bit physical address, and accesses main memory. In another example, the CPU uses the lower 32 bits of the 48-bit physical address P as an index and the 48-bit physical address as a value, and fills the index-value pair into the address mapping table provided by MMU 120 or TLB 150, such that when MMU 120 is next accessed with a 32-bit virtual address (the lower 32 bits of physical address P), MMU 120 obtains the 48-bit physical address by querying the address mapping table with the 32-bit address as an index, and uses it to access main memory 130.
As still another example, the CPU uses the logical address L as an index and the physical address P as a value, and sets the MMU120 with the index and value data pair, so that the MMU120 finds the physical address P and accesses the main memory with the physical address P when it receives a 32-bit virtual address (logical address L) from the CPU.
MMU 120 records the mapping of the virtual address to the physical address set by CPU 110 (335) and indicates to CPU 110 that the address mapping setup is complete. After MMU 120 completes setting the mapping relationship between the virtual address and the physical address, CPU 110 accesses the MMU with the 32-bit virtual address (340). The MMU translates the 32-bit virtual address to a 48-bit physical address, accesses main memory at that physical address (345), and provides the result returned by main memory to the CPU.
In one embodiment according to the present invention, for each FTL table access operation, the CPU sets the MMU according to the calculated physical address of the FTL table entry, so that on the CPU's next access to the MMU, the MMU can generate a physical address of, for example, 48 bits from the 32-bit virtual address to access main memory. In another embodiment, FTL entries are stored in memory pages or in leaf nodes of a tree structure; each memory page holds, for example, 1000 FTL table entries, and the MMU need be set only once for FTL table entries in the same page.
Optionally, between steps 330 and 340 of FIG. 3, or after step 340, the CPU may execute other instructions to handle other tasks while MMU 120 sets the address mapping or accesses main memory.
It will be apparent to those skilled in the art that the 32-bit virtual address and the 40-bit or 48-bit physical address in the above examples are for illustration purposes only. The virtual addresses and physical addresses may each have different widths. In an embodiment in accordance with the invention, the bit width of the physical address is greater than the bit width of the virtual address.
In FIG. 3, the steps performed by the CPU are implemented by software or firmware running in the CPU, while the steps performed by the MMU are implemented by the hardware circuitry of the MMU.
In addition to accessing the FTL table, any data stored in the main memory may be accessed according to embodiments of the present invention. In yet another example, an address mapping table of the MMU includes a plurality of entries, each entry recording a correspondence of a virtual address to a physical address. The plurality of entries of the MMU's address mapping table may be assigned to a plurality of tasks of the CPU, for example, a task accessing the FTL table, a task performing DMA transfers, a task scheduling IO commands, etc., such that each CPU task exclusively owns an entry of the MMU's address mapping table and sets its own entry when using the MMU.
In yet another embodiment according to the present invention, the physical address generated by the MMU is provided to the FTL acceleration circuit for efficient access to entries of the FTL table.
Fig. 4 is a block diagram of circuitry according to yet another embodiment of the invention. The CPU 110 is coupled with a main memory 130, such as a DRAM, to constitute a circuit system. An FTL acceleration circuit 410 is provided between the CPU 110 and the main memory 130. FTL acceleration circuit 410 is coupled to CPU 110. CPU 110 accesses FTL acceleration circuit 410 through a high speed local interface. For example, the CPU 110 accesses the FTL acceleration circuit 410 in a register access manner. When the CPU 110 accesses the FTL acceleration circuit 410, the FTL acceleration circuit 410 can provide the result to the CPU 110 within one instruction execution cycle. CPU 110 is also coupled to MMU 420. The MMU 420 translates a virtual address, such as 32 bits, to a physical address, such as 40 bits. CPU 110 may set the address translation mode of MMU 420. Optionally, MMU 420 is also coupled to TLB 450.
The FTL acceleration circuit 410 is coupled to a main memory (e.g., DRAM) (130) by a bus. The main memory 130 stores FTL tables therein. The storage space of the main memory 130 may be several GB, and the FTL table may occupy more than 4GB of storage space. In the embodiment according to fig. 4, CPU 110 provides a virtual address of, for example, 32 bits to FTL acceleration circuit 410, and FTL acceleration circuit 410 translates the virtual address into a physical address of, for example, 40 bits using MMU 420 and accesses the FTL table stored in main memory 130 with the physical address.
The CPU 110 accesses the FTL table stored in the main memory 130 through the FTL acceleration circuit 410. Alternatively, the CPU 110 may also access the main memory directly without utilizing the FTL acceleration circuit 410. CPU 110 may also access main memory through MMU 420. Alternatively, the CPU 110 accesses the FTL table stored in the main memory through the FTL acceleration circuit 410, and the CPU 110 directly accesses the main memory 130 to obtain contents other than the FTL table stored in the main memory 130.
Fig. 5 is a block diagram of FTL acceleration circuit according to still another embodiment of the present invention. The FTL acceleration circuit 410 includes an FTL entry storage unit. The FTL entry storage unit includes an FTL entry address storage unit 520 and an FTL entry data storage unit 530, which are respectively used for storing an address of the FTL entry (e.g., a storage address of the FTL entry in the main memory) and data of the FTL entry corresponding to the address.
In the FTL acceleration circuit 410, a plurality of FTL entries may be stored. The FTL entry may be, for example, { FTL entry address, FTL entry data }. Also, by indexing, the CPU 110 (see fig. 4) may explicitly access the FTL entry address storage 520 and/or the FTL entry data storage 530 of the FTL acceleration circuit 410.
In another example, the length of the FTL entry is also stored in association with the FTL entry in the FTL acceleration circuit 410. The FTL entry address thus indicates the starting address of the FTL entry in the main memory 130, while the FTL entry length indicates the size of the main memory 130 space occupied by the FTL entry.
The FTL acceleration circuit 410 also includes a CPU interface 510. The CPU 110 accesses the FTL acceleration circuit 410 through the CPU interface 510. The CPU interface 510 is a low latency access interface that supports synchronous/asynchronous modes of operation. In the synchronous operation mode, the CPU 110 sends an access request to the FTL acceleration circuit 410 through the CPU interface 510 of the FTL acceleration circuit 410, the FTL acceleration circuit 410 obtains an access result and then indicates the completion of processing the access request to the CPU 110, and provides the access result to the CPU 110, and the CPU 110 obtains the access result provided by the FTL acceleration circuit 410 and then continues to perform subsequent operations (e.g., execute the next instruction). In the asynchronous mode of operation, the CPU 110 issues an access request to the FTL acceleration circuit 410 through the CPU interface 510, and the FTL acceleration circuit 410 indicates completion of processing of the access request to the CPU 110 without waiting for a return result.
FTL acceleration circuit 410 accesses main memory (e.g., DDR memory) 130 (see also FIG. 4) through MMU 420. The CPU interface 510 receives from CPU 110 addresses for accessing the FTL table, such as virtual addresses or 32-bit addresses, and translates them into physical addresses, such as 40-bit addresses, via MMU 420. The physical address generated by MMU 420 is also stored in FTL entry address storage unit 520. CPU 110 may also set the address translation mode of MMU 420, and/or the mapping of virtual addresses to physical addresses, such that MMU 420, upon next receiving a virtual address, translates the virtual address to a physical address and accesses main memory 130 with that physical address.
The CPU interface 510 of the FTL acceleration circuit 410 provides multiple access means.
(1) FTL entry preload (FTL_Prefetch(index, addr))
CPU 110 issues an FTL entry preload request (FTL_Prefetch) to FTL acceleration circuit 410 through CPU interface 510 to request FTL acceleration circuit 410 to load FTL entry A at a specified virtual address (e.g., the address indicated by addr) from main memory 130; MMU 420 translates the virtual address into a physical address, the physical address of FTL entry A is stored in FTL entry address storage unit 520 of FTL acceleration circuit 410, and the entry data of entry A is stored in FTL entry data storage unit 530 of FTL acceleration circuit 410. An index (Index) is also included in the preload request sent by CPU 110 to FTL acceleration circuit 410, indicating where in the FTL entry storage unit the FTL entry is loaded. Through the index, the CPU may later obtain the entry address and/or entry data of entry A from FTL acceleration circuit 410. Since the time to access main memory is much longer than the CPU instruction execution time, to avoid CPU 110 waiting a long time for the execution result of FTL acceleration circuit 410, it is preferable to process this access request using the asynchronous operation mode.
Optionally, multiple addresses are included in the preload request sent by CPU 110 to FTL acceleration circuit 410, and FTL acceleration circuit 410 instructs MMU 420 to concatenate the multiple addresses into a single address for main memory 130. For example, the multiple addresses may be a high-order address and a low-order address, or a page address and an offset within the page. Still alternatively, CPU 110 uses the logical address of the IO command in the preload request sent to FTL acceleration circuit 410; FTL acceleration circuit 410 calculates, from the logical address of the IO command, the physical address in main memory of the FTL entry corresponding to that logical address, and records the mapping relationship between the logical address and the physical address in MMU 420, so that when MMU 420 is accessed with the logical address, MMU 420 will access main memory 130 at the physical address.
After receiving the preload request, the FTL acceleration circuit 410 causes the MMU 420 to read the data of the specified length from the main memory by using the main memory address to obtain the FTL entry a, and stores the entry data of the entry a in the entry data storage unit 530 indexed by the Index of the FTL acceleration circuit 410. The FTL acceleration circuit also stores the main memory address of entry a obtained by the MMU 420 in the entry address storage unit 520 of the FTL acceleration circuit indexed by Index.
Optionally, the FTL acceleration circuit 410 also locks the FTL entry data in the main memory when it reads the FTL entry data from the main memory. In the FTL entry preloading request issued by the CPU 110, it is also indicated whether to lock the FTL entry data in the main memory.
Optionally, the logic address of the FTL entry a to be loaded is indicated in the FTL entry preloading request. The FTL acceleration circuit 410 determines a main memory address at which the FTL entry a is stored in the main memory according to the logical address. For example, the main memory address storing FTL entry a is determined by the FTL table base address and the logical address as an offset value. Or, using the logical address as an index, looking up the main memory address storing the FTL entry a. And the FTL acceleration circuit 410 accesses the main memory 130 using the main memory address of FTL entry a.
Alternatively, if the index is indicated in the FTL entry preloading request, and the storage unit indicated by the index in the FTL entry storage unit of the FTL acceleration circuit 410 has already been allocated (the FTL entry has already been stored or a load of the FTL entry to be stored in the storage unit indicated by the index has been requested to the main memory 130), the content of the storage unit indicated by the index is also cleared or marked invalid, and the FTL entry is retrieved from the main memory 130.
(2) FTL entry read (FTL_Read_item(index))
CPU 110 issues an FTL entry read request to FTL acceleration circuit 410 through CPU interface 510. In the FTL entry read request, the index of the FTL entry to be read from the FTL entry storage unit is specified. CPU 110 requests FTL acceleration circuit 410 to provide the corresponding FTL entry A by the index. FTL acceleration circuit 410 obtains the entry data from FTL entry data storage unit 530 according to the index and provides it to CPU 110. The time required for FTL acceleration circuit 410 to provide the entry data to CPU 110 is close to the CPU instruction execution time, and thus this access request is preferably processed using the synchronous operation mode. If CPU 110 requests FTL acceleration circuit 410 to provide the entry data of FTL entry A by the index, but the requested entry data is not present in the FTL entry storage unit of FTL acceleration circuit 410, FTL acceleration circuit 410 checks whether a request to obtain FTL entry A at the specified main memory address has been issued to main memory 130. If the request has been issued, FTL acceleration circuit 410 waits for main memory 130 to return FTL entry A, stores FTL entry A in the FTL entry storage unit, and provides entry A to CPU 110. If the request has not been issued, CPU 110 is informed that the requested FTL entry A does not exist in FTL acceleration circuit 410.
(3) FTL entry address read (FTL_Read_addr(index))
CPU 110 issues an FTL entry address read request to FTL acceleration circuit 410 through CPU interface 510. CPU 110 requests FTL acceleration circuit 410 to provide the entry address of the corresponding FTL entry A by the index. FTL acceleration circuit 410 obtains the entry address from the internal FTL entry address storage unit 520 according to the index and provides it to CPU 110. The time required for FTL acceleration circuit 410 to provide the entry address to CPU 110 is close to the CPU instruction execution time, and thus this access request is preferably processed using the synchronous operation mode. If CPU 110 requests FTL acceleration circuit 410 to provide the entry address of FTL entry A by the index, but the requested entry address is not present in FTL entry address storage unit 520 of FTL acceleration circuit 410, CPU 110 is informed that the requested FTL entry A does not exist in FTL acceleration circuit 410.
A read operation of the FTL entry address stored in the FTL acceleration circuit 410 is not necessary. Normally, when the CPU 110 updates the FTL entry, only the entry data is updated without updating the FTL entry address, so that the CPU 110 does not need to update the FTL entry address stored in the FTL acceleration circuit 410.
(4) FTL entry update (FTL_Write_item(index, item))
CPU 110 issues an FTL entry update request to FTL acceleration circuit 410 through CPU interface 510. CPU 110 requests FTL acceleration circuit 410 to update the FTL entry data indexed by Index to the new entry data (item) specified by CPU 110. FTL acceleration circuit 410 performs the update of the entry data in FTL entry data storage unit 530. The time required for FTL acceleration circuit 410 to update the entry data is close to the CPU instruction execution time, and thus this access request is preferably processed using the synchronous operation mode. Alternatively, if CPU 110 requests FTL acceleration circuit 410 to update the corresponding FTL entry data by the index, but there is no entry data with that index in FTL entry data storage unit 530 of FTL acceleration circuit 410, the new entry data is stored in FTL entry data storage unit 530. And optionally, CPU 110 is informed that there was no entry data with that index in FTL entry data storage unit 530 of FTL acceleration circuit 410.
Optionally, in the FTL entry update request issued by CPU 110, it is also indicated whether to unlock the FTL entry data in the main memory. FTL acceleration circuit 410 further unlocks the FTL entry data in the main memory according to the indication when the FTL entry data is written back to the main memory. Therefore, when the FTL entry data needs to be updated repeatedly, CPU 110 may unlock the FTL entry data in the main memory only on the last update.
(5) FTL entry address update (FTL_Write_addr(index, addr))
CPU 110 issues an FTL entry address update request to FTL acceleration circuit 410 through CPU interface 510. CPU 110 requests FTL acceleration circuit 410 to update the FTL entry address indexed by Index in FTL entry address storage unit 520 to the new entry physical address corresponding to the virtual address (addr) specified by CPU 110. In one example, FTL acceleration circuit 410 accesses MMU 420 with the virtual address (addr) and updates the entry address in FTL entry address storage unit 520 with the physical address provided by MMU 420 corresponding to the virtual address (addr). Alternatively, if CPU 110 requests FTL acceleration circuit 410 to update the corresponding FTL entry address by the index, but there is no entry address with that index in FTL entry address storage unit 520 of FTL acceleration circuit 410, the new entry address is stored in FTL entry address storage unit 520. And optionally, FTL acceleration circuit 410 informs CPU 110 that there was no entry address with that index in FTL entry address storage unit 520 of FTL acceleration circuit 410. In another example, the address (addr) provided by CPU 110 through CPU interface 510 is a physical address, and FTL acceleration circuit 410 stores the physical address (addr) in FTL entry address storage unit 520.
An update operation to the FTL entry address stored in the FTL acceleration circuit 410 is not necessary.
(6) FTL entry flush (FTL_Flush_entry(index))
The CPU 110 issues an FTL entry flush request to the FTL acceleration circuit 410 through the CPU interface 510. CPU 110 requests FTL acceleration circuit 410 to write FTL entry a indexed by Index to main memory 130. The FTL acceleration circuit 410 obtains the main memory address (physical address) of the FTL entry a from the FTL entry address storage unit 520 according to the Index, obtains the FTL entry data (entry a) from the FTL entry data storage unit 530, and writes the entry a to the main memory 130 according to the main memory address. Since the time to access main memory is much longer than the CPU instruction execution time, to avoid the CPU 110 waiting a long time for the execution result of the FTL acceleration circuit 410, it is preferable to process the access request using the asynchronous operation mode.
Alternatively, no parameter is provided in the FTL entry flush request, and the FTL acceleration circuit 410 writes all FTL entries in the FTL entry storage unit to the main memory 130 in response to the FTL entry flush request without parameter. FTL entry data stored in the FTL entry data storage unit 530 is written to the main memory 130 by the corresponding FTL entry address stored in the FTL entry address storage unit 520.
Optionally, in the FTL entry flushing request issued by the CPU 110, it is also indicated whether to unlock the FTL entry data in the main memory.
(7) FTL entry write-through (FTL_WriteThrough_item(index, item))
The CPU 110 issues an FTL entry write-through request to the FTL acceleration circuit 410 through the CPU interface 510. The CPU 110 requests the FTL acceleration circuit 410 to update the FTL entry data indexed by Index by the FTL entry data storage unit 530 to the new entry data specified by the CPU 110 and write the new entry data to the main memory 130. The FTL acceleration circuit 410 performs an update of the entry data in the FTL entry data storage unit 530. The FTL acceleration circuit 410 also actively initiates a main memory update request, fetches a main memory address (physical address) from the FTL entry address storage unit 520 according to the index (index), and writes the FTL entry data indicated by the index (index) to the main memory 130 according to the main memory address without intervention of the CPU 110.
Since the operation of writing data into main memory 130 is time-consuming, this access request is preferably processed using the asynchronous operation mode, and completion of the FTL entry write-through request is indicated to CPU 110 once FTL acceleration circuit 410 has written the FTL entry data into FTL entry data storage unit 530.
Operation process
(1) Reading FTL table entries
Fig. 6 is a flowchart of an FTL entry reading process according to yet another embodiment of the present invention.
By way of example, the CPU 110 (see also FIG. 1) of the solid-state storage device receives the IO command and obtains a logical address (denoted as logical address L) to be accessed by the IO command (610). CPU 110 wishes to retrieve FTL entry A, which records logical address L, from main memory 130. Based on logical address L, the physical address (denoted as physical address P) at which the entry corresponding to logical address L is stored in the FTL table is calculated (620). CPU 110 sets MMU 420 (see also FIG. 4 or 5) of FTL acceleration circuit 410 with the calculated physical address P (630), so that the correspondence of virtual address addr to physical address P can be provided by MMU 420. MMU 420 records the mapping relationship between virtual address addr and physical address P set by CPU 110 (635) and indicates to CPU 110 that the setting of the address mapping relationship is complete.
From the logical address of the read request, CPU 110 determines that FTL entry A is to be loaded at virtual address addr. The CPU requests FTL acceleration circuit 410 to preload FTL entry A (e.g., FTL_Prefetch(index, addr)) through CPU interface 510 of FTL acceleration circuit 410 (640). When requesting FTL acceleration circuit 410 to preload FTL entry A, CPU 110 further informs FTL acceleration circuit 410 of an index for indexing FTL entry A, so that after FTL acceleration circuit 410 loads FTL entry A, CPU 110 can quickly obtain FTL entry A from FTL acceleration circuit 410 through the index. FTL acceleration circuit 410 stores the obtained FTL entry A in FTL entry data storage unit 530 and FTL entry address storage unit 520, addressed by the index.
The operations that the CPU 110 requests the FTL acceleration circuit 410 to preload the FTL entry a are executed in an asynchronous manner. After the CPU 110 issues the request, it continues to perform other processing without waiting for the FTL acceleration circuit 410 to return the execution result (650). Thus, the processing power of the CPU 110 can be effectively utilized in the process of the FTL acceleration circuit 410 reading the FTL entry a from the main memory (e.g., DRAM).
FTL acceleration circuit 410 responds to the request issued by CPU 110 to preload FTL entry A by accessing MMU 420 with address addr as a virtual address; MMU 420 converts virtual address addr to a physical address, and the data read from main memory 130 at that physical address is FTL entry A (635). FTL acceleration circuit 410 responds to the FTL entry A preload request issued by CPU 110 in an asynchronous manner: after receiving the preload request, it indicates to CPU 110 that the preload request processing is complete before FTL entry A is obtained from main memory.
In one example, CPU 110 also informs FTL acceleration circuit 410 of the length of FTL entry A requested, so that MMU 420 reads the specified length of data from main memory. Optionally, the CPU 110 further instructs the FTL acceleration circuit 410 to lock the requested FTL entry a, and the MMU 420 first requests to lock the FTL entry a when acquiring the FTL entry a from the main memory, and reads the FTL entry a from the main memory 130 after successful locking.
Some time later, CPU 110 estimates that FTL acceleration circuit 410 has retrieved FTL entry A from main memory 130. CPU 110 reads (e.g., FTL_Read_item) the FTL entry A indexed by index from FTL acceleration circuit 410 through CPU interface 510 of FTL acceleration circuit 410 (660). In another example, FTL acceleration circuit 410 sends a message or interrupt to CPU 110 to indicate that it has retrieved the FTL entry from main memory 130.
In one example, when CPU 110 reads the FTL entry A indexed by index from FTL acceleration circuit 410, FTL acceleration circuit 410 has already obtained FTL entry A from main memory and stored it in FTL entry data storage unit 530 of FTL acceleration circuit 410; FTL acceleration circuit 410 has also stored the main memory address of entry A in FTL entry address storage unit 520, and the FTL entry A stored in FTL acceleration circuit 410 and the address of entry A can be accessed by the index.
In response to CPU 110 reading the FTL entry A indexed by index from FTL acceleration circuit 410, FTL acceleration circuit 410 acquires entry A from FTL entry data storage unit 530 and provides it to CPU 110 (665). The operation of CPU 110 reading the FTL entry A indexed by index from FTL acceleration circuit 410 is performed in a synchronous manner: CPU 110 does not perform subsequent operations (e.g., operations using FTL entry A) until it receives the entry A provided by FTL acceleration circuit 410.
In another example, before CPU 110 reads the FTL entry A indexed by index from FTL acceleration circuit 410, FTL acceleration circuit 410 has received the request to preload FTL entry A from main memory 130 but has not yet obtained FTL entry A from main memory. Since the operation of reading the FTL entry A indexed by index from FTL acceleration circuit 410 is performed in a synchronous manner, FTL acceleration circuit 410 waits for FTL entry A to be obtained from main memory 130, stores entry A in FTL entry data storage unit 530 of the FTL acceleration circuit, and provides FTL entry A to CPU 110.
In yet another example, when CPU 110 reads the FTL entry A indexed by index from FTL acceleration circuit 410, FTL acceleration circuit 410 has not received a request to preload FTL entry A from main memory. FTL acceleration circuit 410 checks whether a request to preload FTL entry A at the specified main memory address has been issued to main memory; if the request has not been issued, CPU 110 is informed that the requested FTL entry A does not exist in FTL acceleration circuit 410.
In another embodiment, to fully utilize the preload capability of FTL acceleration circuit 410, CPU 110 issues a plurality of preload requests to FTL acceleration circuit 410 to request FTL acceleration circuit 410 to preload a plurality of (e.g., m, m being a positive integer) FTL entries. By the time the m preload requests have been issued, the first FTL entry A requested for preload has already been retrieved from main memory 130 by FTL acceleration circuit 410, so that immediately after CPU 110 issues an FTL entry read request, FTL acceleration circuit 410 can provide the read FTL entry A to the CPU.
(2) Updating FTL table entry (FIG. 7A)
When some IO commands (e.g., write commands) are processed, a new physical address needs to be allocated for the logical address, and the corresponding FTL entry (e.g., entry a) needs to be updated. Fig. 7A shows a flow of updating an FTL entry according to another embodiment of the present invention.
To update the FTL entry a, the CPU 110 obtains the FTL entry a from the main memory according to an embodiment of the present invention and stores it in the FTL entry data storage unit 530 of the FTL acceleration circuit 410.
Next, the CPU 110 assigns a new physical address (the physical address on the solid-state storage device for the write command) to the FTL entry a. To write the FTL entry a with the updated physical address to the main memory, the CPU 110 writes the FTL entry a indexed by index (which contains the newly allocated physical address on the solid-state storage device for the write command) to the FTL acceleration circuit 410 through the CPU interface 510 of the FTL acceleration circuit 410 (710). In response to the CPU 110 writing (FTL_Write_item(index, item)) the FTL entry a indexed by index to the FTL acceleration circuit 410, the FTL acceleration circuit 410 updates the entry data indexed by index in the FTL entry data storage unit 530 (715). The operation of the CPU 110 writing the FTL entry a indexed by index to the FTL acceleration circuit 410 is performed in a synchronous manner.
The CPU may then perform other processing (720).
In one example, the CPU 110 also requests the FTL acceleration circuit 410, through the CPU interface 510 of the FTL acceleration circuit 410, to flush the FTL entry indexed by index to the main memory (FTL_Flush_entry(index)). In response, the FTL acceleration circuit 410 uses the index of the entry a to acquire the main memory address of the entry a from the FTL entry address storage unit 520 and the content of the entry a from the FTL entry data storage unit 530, and issues a write request to the main memory 130, either through the MMU 420 or directly. The operation of the CPU 110 requesting the FTL acceleration circuit 410 to flush the FTL entry indexed by index to the main memory may be performed in an asynchronous manner, so that the CPU 110 can perform other processing while the FTL acceleration circuit 410 flushes the FTL entry to the main memory 130. The operation may also be executed in a synchronous manner, in which case the FTL acceleration circuit 410 indicates to the CPU 110 that the FTL entry flush request is completed only after the FTL entry has been flushed to the main memory. Optionally, the FTL acceleration circuit 410 combines multiple FTL entries and writes them to the main memory in one main memory update request.
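The update-then-flush sequence above can be sketched as follows. This is an illustrative Python model under stated assumptions: the names `FtlCache`, `write_item`, `flush_entry`, and `flush_all` are invented for demonstration; only the operation names in parentheses (FTL_Write_item, FTL_Flush_entry) come from the description.

```python
# Hypothetical sketch of synchronous entry update plus flush-to-main-memory.
class FtlCache:
    def __init__(self, addr_table):
        self.addr = dict(addr_table)  # index -> main memory address (address storage unit)
        self.data = {}                # index -> entry content       (data storage unit)
        self.dirty = set()            # updated but not yet flushed

    def write_item(self, index, entry):
        """Models FTL_Write_item: synchronous update of the cached entry."""
        self.data[index] = entry
        self.dirty.add(index)

    def flush_entry(self, index, main_memory):
        """Models FTL_Flush_entry: write the cached entry back to main memory."""
        main_memory[self.addr[index]] = self.data[index]
        self.dirty.discard(index)

    def flush_all(self, main_memory):
        """Flush every dirty entry; real hardware may merge several
        entries into one main-memory update request."""
        for i in sorted(self.dirty):
            self.flush_entry(i, main_memory)

mem = {}
cache = FtlCache({7: 0x2000})
cache.write_item(7, "new-phys-addr")   # CPU updates entry a with the new address
assert 7 in cache.dirty                # not yet in main memory
cache.flush_entry(7, mem)              # CPU requests a flush
assert mem[0x2000] == "new-phys-addr" and 7 not in cache.dirty
```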
In another example, the CPU also requests the FTL acceleration circuit, through the CPU interface of the FTL acceleration circuit, to flush all "dirty" FTL entries to the main memory (FTL_Flush_entry). A "dirty" FTL entry is one that has been loaded into the FTL entry storage unit and updated, but has not yet been flushed to the main memory.
Optionally, the FTL acceleration circuit 410 also maintains a status for each entry of the FTL entry storage unit, including: an "empty" status, indicating that the FTL entry is not loaded; a "preload" status, indicating that an FTL entry load request has been received but the FTL entry has not yet been received; a "valid" status, indicating that the FTL entry has been loaded and has not been updated; and a "dirty" status, indicating that the FTL entry has been updated but not yet flushed to the main memory.
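The per-entry statuses form a small state machine, which can be sketched as below. The transition set is an assumption inferred from the description (the patent names the states but does not enumerate transitions explicitly).

```python
from enum import Enum, auto

class EntryState(Enum):
    EMPTY = auto()     # entry not loaded
    PRELOAD = auto()   # load request received, data not yet arrived
    VALID = auto()     # loaded and not updated
    DIRTY = auto()     # updated, not yet flushed to main memory

# Transitions implied by the description above (an assumption, not from the patent):
TRANSITIONS = {
    (EntryState.EMPTY, "preload"):      EntryState.PRELOAD,
    (EntryState.PRELOAD, "data_arrived"): EntryState.VALID,
    (EntryState.VALID, "update"):       EntryState.DIRTY,
    (EntryState.DIRTY, "flush"):        EntryState.VALID,
}

def step(state, event):
    """Advance one entry's status by one event."""
    return TRANSITIONS[(state, event)]

# A full load-update-flush cycle:
state = EntryState.EMPTY
state = step(state, "preload")
state = step(state, "data_arrived")
state = step(state, "update")
assert state is EntryState.DIRTY
assert step(state, "flush") is EntryState.VALID
```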
Optionally, the CPU 110 further instructs the FTL acceleration circuit 410 to unlock the updated FTL entry a. The FTL acceleration circuit 410 unlocks the FTL entry a in the main memory when updating the FTL entry a to the main memory.
In another embodiment, the CPU 110 generates a new FTL entry instead of fetching the FTL entry from the main memory 130. The CPU 110 issues an FTL entry update request (FTL_Write_item(index, item)) to the FTL acceleration circuit 410 through the CPU interface 510 of the FTL acceleration circuit 410 to write the generated FTL entry into the location of the FTL entry storage unit of the FTL acceleration circuit 410 indicated by the index.
Fig. 7B shows a flow of updating FTL entry according to another embodiment of the present invention.
To update the FTL entry a, the CPU 110 obtains the FTL entry a from the main memory 130 according to an embodiment of the present invention, and assigns a new physical address (the physical address on the solid-state storage device for the write command) to the FTL entry a. To write the FTL entry a with the updated physical address into the main memory 130, the CPU 110 issues a write-through request (FTL_WriteThrough_item) for the FTL entry a indexed by index (including the newly allocated physical address) to the FTL acceleration circuit through the CPU interface 510 of the FTL acceleration circuit 410. In response, the FTL acceleration circuit 410 updates the entry data indexed by index in the FTL entry data storage unit 530. The FTL acceleration circuit 410 also fetches the main memory address from the FTL entry address storage unit 520 according to the index and issues a write request to the main memory 130, either through the MMU 420 or directly, without intervention of the CPU 110. The operation of the CPU 110 writing through the FTL entry a indexed by index to the FTL acceleration circuit 410 is executed in an asynchronous manner, so that the CPU 110 can perform other processing while the FTL acceleration circuit 410 updates the FTL entry storage unit and writes the new entry data to the main memory 130 (420).
Optionally, the FTL acceleration circuit 410 combines multiple FTL entries and writes the multiple FTL entries to the main memory in one main memory update request.
Optionally, after issuing the FTL entry write-through request, the CPU 110 further requests the FTL acceleration circuit 410, through the CPU interface 510 of the FTL acceleration circuit 410, to flush the FTL entry indexed by index to the main memory (FTL_Flush_entry(index)). After receiving the FTL entry flush request, the FTL acceleration circuit 410 checks the FTL entry indexed by index in the FTL entry storage unit; if the FTL entry has already been written into the main memory, it directly indicates to the CPU 110 that the FTL entry flush request is completed; if the FTL entry has not yet been written into the main memory, it waits until the FTL entry has been written into the main memory before indicating to the CPU 110 that the FTL entry flush request is completed.
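The write-through flow, and the flush-after-write-through check, can be sketched as follows. This is an illustrative Python model; `WriteThroughCache`, `memory_write_done`, and the explicit completion signal are assumptions used to model asynchronous completion, not the patent's hardware interface.

```python
# Hypothetical sketch of FTL_WriteThrough_item followed by FTL_Flush_entry.
class WriteThroughCache:
    def __init__(self, addr_table, main_memory):
        self.addr = dict(addr_table)   # index -> main memory address
        self.mem = main_memory
        self.data = {}                 # cached entry contents
        self.in_flight = set()         # memory write issued, not yet completed

    def write_through(self, index, entry):
        """Models FTL_WriteThrough_item: update the cache and issue the
        main-memory write without further CPU intervention (asynchronous)."""
        self.data[index] = entry
        self.in_flight.add(index)

    def memory_write_done(self, index):
        """Completion of the issued memory write (modeled explicitly here)."""
        self.mem[self.addr[index]] = self.data[index]
        self.in_flight.discard(index)

    def flush(self, index):
        """Models the flush check after a write-through: completes immediately
        if the entry already reached main memory, otherwise must wait."""
        return index not in self.in_flight

mem = {}
c = WriteThroughCache({3: 0x3000}, mem)
c.write_through(3, "entry-a")
assert c.flush(3) is False        # write still in flight -> flush must wait
c.memory_write_done(3)
assert c.flush(3) is True and mem[0x3000] == "entry-a"
```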
It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by various means including computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of operations for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Although the present invention has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the invention, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the invention.
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these embodiments pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
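The overall method — computing an entry's wide physical address, splitting it into an extended part and a virtual part, programming the MMU with the extended part, and then accessing the entry through the 32-bit virtual address — can be sketched as follows. This is an illustrative Python model; the 28-bit virtual window width and the `Mmu` class are assumptions for demonstration (the patent only requires the physical address to be wider than the virtual address).

```python
VA_BITS = 28   # assumed window size; any width narrower than the physical address works

def split_address(phys, va_bits=VA_BITS):
    """Split a wide physical address into (extended, virtual) parts."""
    virtual = phys & ((1 << va_bits) - 1)   # low bits: the 32-bit-reachable part
    extended = phys >> va_bits              # high bits: select the MMU window
    return extended, virtual

class Mmu:
    """Toy MMU: maps virtual addresses back to the window chosen by 'extended'."""
    def __init__(self, va_bits=VA_BITS):
        self.va_bits = va_bits
        self.extended = 0

    def configure(self, extended):
        # "Setting the memory management unit according to the extended address."
        self.extended = extended

    def translate(self, virtual):
        return (self.extended << self.va_bits) | virtual

phys = 0x3456789AB                  # 34-bit address, beyond a 32-bit CPU's reach
ext, virt = split_address(phys)
mmu = Mmu()
mmu.configure(ext)
assert mmu.translate(virt) == phys  # the CPU's narrow access reaches the wide address
```

Because only the extended bits change between windows, the CPU reprograms the MMU once per window rather than per access, which is what makes the FTL-table lookups over a >4GB space practical.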

Claims (8)

1. A method of accessing FTL tables, comprising:
in response to receiving an IO command, acquiring a logical address of the IO command;
calculating, according to the logical address, a physical address of the entry of the FTL table corresponding to the logical address;
splitting the physical address into a virtual address and an extended address;
setting a Memory Management Unit (MMU) according to the extended address;
accessing the entry of the FTL table through the memory management unit using the virtual address;
wherein the physical address is wider than the virtual address.
2. The method of claim 1, wherein the width of the physical address is greater than an internal address bus bit width of a CPU executing the method.
3. A method of accessing FTL tables, comprising:
in response to receiving an IO command, acquiring a logical address of the IO command;
calculating, according to the logical address, a physical address of the entry of the FTL table corresponding to the logical address;
splitting the physical address into a virtual address and an extended address;
setting a Memory Management Unit (MMU) according to the extended address;
issuing a preload request for the FTL table entry at a specified address, so as to load the FTL table entry located at the virtual address into a storage location indicated by a first index of an FTL table entry storage unit; and issuing a read request for the FTL table entry of the first index, so as to acquire the FTL table entry of the first index from the FTL table entry storage unit;
wherein the physical address is wider than the virtual address.
4. The method of accessing an FTL table according to claim 3, further comprising: issuing an update request for the FTL table entry of the first index to update the FTL table entry indicated by the first index in the FTL table entry storage unit; and issuing a flush request for the FTL table entry of the first index to write the FTL table entry indicated by the first index in the FTL table entry storage unit into the main memory location indicated by the FTL table entry address indicated by the first index in the FTL table entry storage unit.
5. The method of claim 3 or 4, wherein the width of the physical address is greater than an address bus bit width of a CPU executing the method.
6. The method of claim 3, further comprising: issuing an update request for the FTL table entry of the first index to write the FTL table entry indicated by the update request into the main memory location indicated by the FTL table entry address indicated by the index of the update request in the FTL table entry storage unit.
7. An apparatus for accessing an FTL table, comprising:
a logical address acquisition module, configured to acquire a logical address of a received IO command in response to receiving the IO command;
an address calculation module, configured to calculate, according to the logical address, a physical address of the entry of the FTL table corresponding to the logical address, and to split the physical address into a virtual address and an extended address;
a memory management unit setting module, configured to set a memory management unit (MMU) according to the extended address; and
a memory access module, configured to access the entry of the FTL table through the memory management unit using the virtual address;
wherein the physical address is wider than the virtual address.
8. An apparatus for accessing an FTL table, comprising:
a logical address acquisition module, configured to acquire a logical address of a received IO command in response to receiving the IO command;
an address calculation module, configured to calculate, according to the logical address, a physical address of the entry of the FTL table corresponding to the logical address, and to split the physical address into a virtual address and an extended address;
a memory management unit setting module, configured to set a memory management unit (MMU) according to the extended address;
a preload module, configured to issue a preload request for the FTL table entry at a specified address, so as to load the FTL table entry located at the virtual address into a storage location indicated by a first index of an FTL table entry storage unit; and
a read request module, configured to issue a read request for the FTL table entry of the first index, so as to acquire the FTL table entry of the first index from the FTL table entry storage unit;
wherein the physical address is wider than the virtual address.
CN201610862939.8A 2016-09-28 2016-09-28 Method and device for 32-bit CPU to access memory space larger than 4GB Active CN107870867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610862939.8A CN107870867B (en) 2016-09-28 2016-09-28 Method and device for 32-bit CPU to access memory space larger than 4GB


Publications (2)

Publication Number Publication Date
CN107870867A CN107870867A (en) 2018-04-03
CN107870867B true CN107870867B (en) 2021-12-14

Family

ID=61761634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610862939.8A Active CN107870867B (en) 2016-09-28 2016-09-28 Method and device for 32-bit CPU to access memory space larger than 4GB

Country Status (1)

Country Link
CN (1) CN107870867B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110389904B (en) * 2018-04-20 2024-10-01 北京忆恒创源科技股份有限公司 Memory device with compressed FTL table
CN110554833B (en) * 2018-05-31 2023-09-19 北京忆芯科技有限公司 Parallel processing IO commands in a memory device
CN110968525B (en) * 2018-09-30 2024-10-01 北京忆恒创源科技股份有限公司 FTL provided cache, optimization method and storage device thereof
CN110704338B (en) * 2019-10-18 2021-01-26 安徽寒武纪信息科技有限公司 Address conversion device, artificial intelligence chip and electronic equipment
CN113806251B (en) * 2021-11-19 2022-02-22 沐曦集成电路(上海)有限公司 System for sharing memory management unit, building method and memory access method
US11989127B2 (en) 2022-09-15 2024-05-21 Western Digital Technologies, Inc. Efficient L2P DRAM for high-capacity drives

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101356509A (en) * 2006-01-04 2009-01-28 索尼爱立信移动通讯股份有限公司 Data compression method for supporting virtual memory management in a demand paging system
CN103116556A (en) * 2013-03-11 2013-05-22 无锡江南计算技术研究所 Internal storage static state partition and virtualization method
CN104102586A (en) * 2013-04-15 2014-10-15 中兴通讯股份有限公司 Address mapping processing method and address mapping processing device
CN104166634A (en) * 2014-08-12 2014-11-26 华中科技大学 Management method of mapping table caches in solid-state disk system
CN105027090A (en) * 2012-10-05 2015-11-04 西部数据技术公司 Methods, devices and systems for physical-to-logical mapping in solid state drives
CN105283855A (en) * 2014-04-25 2016-01-27 华为技术有限公司 Method and device for addressing
CN105378642A (en) * 2013-05-13 2016-03-02 高通股份有限公司 System and method for high performance and low cost flash translation layer
CN105830059A (en) * 2014-11-28 2016-08-03 华为技术有限公司 Fine pitch connector socket

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN102331978A (en) * 2011-07-07 2012-01-25 曙光信息产业股份有限公司 DMA (Direct Memory Access) controller access implementation method for Loongson blade large-memory address devices
CN104182352B (en) * 2014-08-19 2017-11-24 湖北盛天网络技术股份有限公司 For accessing the method and device of more than 4GB physical memory address spaces
US9483413B2 (en) * 2014-10-24 2016-11-01 Samsung Electronics Co., Ltd. Nonvolatile memory devices and methods of controlling the same

Also Published As

Publication number Publication date
CN107870867A (en) 2018-04-03

Similar Documents

Publication Publication Date Title
CN107870867B (en) Method and device for 32-bit CPU to access memory space larger than 4GB
CN108804350B (en) Memory access method and computer system
CN104346294B (en) Data read/write method, device and computer system based on multi-level buffer
US10248576B2 (en) DRAM/NVM hierarchical heterogeneous memory access method and system with software-hardware cooperative management
CN109582214B (en) Data access method and computer system
JP6505132B2 (en) Memory controller utilizing memory capacity compression and associated processor based system and method
KR101379596B1 (en) Tlb prefetching
EP2510444B1 (en) Hierarchical translation tables control
CN111061655B (en) Address translation method and device for storage device
US9792221B2 (en) System and method for improving performance of read/write operations from a persistent memory device
US10997078B2 (en) Method, apparatus, and non-transitory readable medium for accessing non-volatile memory
US11210020B2 (en) Methods and systems for accessing a memory
CN111858404B (en) Method and system for address translation, and computer readable medium
JP2017138852A (en) Information processing device, storage device and program
JP6478843B2 (en) Semiconductor device and cache memory control method
US20170083444A1 (en) Configuring fast memory as cache for slow memory
JP2009512943A (en) Multi-level translation index buffer (TLBs) field updates
JP2007048296A (en) Method, apparatus and system for invalidating multiple address cache entries
CN107870870B (en) Accessing memory space beyond address bus width
CN111352865B (en) Write caching for memory controllers
CN107423232B (en) FTL quick access method and device
CN111480151A (en) Flushing cache lines from a common memory page to memory
CN110362509B (en) Unified address conversion method and unified address space
WO2023217255A1 (en) Data processing method and device, processor and computer system
WO2022021158A1 (en) Cache system, method and chip

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant