US9916256B1 - DDR storage adapter - Google Patents

DDR storage adapter

Info

Publication number
US9916256B1
Authority
US
United States
Prior art keywords
memory
buffer
pages
dimm
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/262,434
Other versions
US20180074971A1 (en)
Inventor
David Stanley Maxey
Nidish Ramachandra Kamath
Vikas Kumar AGRAWAL
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kioxia Corp
Original Assignee
Toshiba Memory Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US15/262,434 priority Critical patent/US9916256B1/en
Application filed by Toshiba Memory Corp filed Critical Toshiba Memory Corp
Assigned to TOSHIBA MEMORY CORPORATION reassignment TOSHIBA MEMORY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KABUSHIKI KAISHA TOSHIBA
Assigned to KABUSHIKI KAISHA TOSHIBA reassignment KABUSHIKI KAISHA TOSHIBA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AGRAWAL, VIKAS, KAMATH, NIDISH, MAXEY, DAVID
Priority to US15/888,483 priority patent/US10430346B2/en
Publication of US9916256B1 publication Critical patent/US9916256B1/en
Application granted granted Critical
Publication of US20180074971A1 publication Critical patent/US20180074971A1/en
Assigned to TOSHIBA MEMORY CORPORATION reassignment TOSHIBA MEMORY CORPORATION CHANGE OF NAME AND ADDRESS Assignors: K.K. PANGEA
Assigned to KIOXIA CORPORATION reassignment KIOXIA CORPORATION CHANGE OF NAME AND ADDRESS Assignors: TOSHIBA MEMORY CORPORATION
Assigned to K.K. PANGEA reassignment K.K. PANGEA MERGER (SEE DOCUMENT FOR DETAILS). Assignors: TOSHIBA MEMORY CORPORATION
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1009Address translation using page tables, e.g. page table structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0871Allocation or management of cache space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F12/1045Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/122Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12Replacement control
    • G06F12/121Replacement control using replacement algorithms
    • G06F12/128Replacement control using replacement algorithms adapted to multidimensional cache systems, e.g. set-associative, multicache, multiset or multilevel
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1016Performance improvement
    • G06F2212/1024Latency reduction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/65Details of virtual memory and virtual address translation
    • G06F2212/69
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/70Details relating to dynamic memory management

Definitions

  • the present invention generally relates to adapting storage technologies that are not intrinsically compatible with the Double Data Rate (DDR) memory interface.
  • DDR Double Data Rate
  • OSs operating systems
  • HDDs hard disk drives
  • SSDs solid state drives
  • DRAM physical dynamic random-access memory
  • HDDs traditional physical block access storage device
  • SSDs solid state drives
  • traditional storage devices remain reliant on legacy communications interfaces, such as Serial AT Attachment (SATA), with the overhead of a storage software stack to perform input/output (I/O) operations.
  • SATA Serial AT Attachment
  • Modern storage technologies can produce storage devices whose access latency is significantly lower than traditional spinning disk storage devices (i.e. HDDs) and even flash-based storage devices.
  • HDDs spinning disk storage devices
  • however, these modern storage technologies have not yet achieved a low-enough latency to render them compatible with DDR specifications.
  • the latency overhead of the software stack that current OSs have in place to support block and file system access is disproportionate and acts as a significant performance penalty for these modern low-latency storage devices.
  • a method of accessing a persistent memory over a memory interface includes allocating a virtual address range comprising virtual memory pages to be associated with physical pages of a memory buffer and marking each page table entry associated with the virtual address range as not having a corresponding one of the physical pages of the memory buffer.
  • the method further includes generating a page fault when one or more of the virtual memory pages within the virtual address range is accessed, and mapping page table entries of the virtual memory pages to the physical pages of the memory buffer.
  • the method further includes transferring data between a physical page of the persistent memory and one of the physical pages of the memory buffer mapped to a corresponding one of the virtual memory pages.
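The three method steps above amount to demand paging of the persistent memory: mark every PTE in the range not present, fault on first access, then map a memory-buffer page and transfer the data. A minimal sketch in Python; all names here (PTE, DemandPagedRange, read_persistent) are illustrative assumptions, not structures defined in the patent:

```python
class PTE:
    """Illustrative page table entry; fields are assumptions for the sketch."""
    def __init__(self):
        self.present = False   # marked "physical page not present" / "not valid"
        self.dirty = False
        self.phys = None       # physical page of the memory buffer, once mapped

class DemandPagedRange:
    """A virtual address range backed lazily by memory-buffer pages."""
    def __init__(self, num_pages, free_buffer_pages, read_persistent):
        # One PTE per virtual page, all initially marked not present,
        # so the first access to any page raises a fault.
        self.ptes = [PTE() for _ in range(num_pages)]
        self.free = list(free_buffer_pages)       # physical buffer pages
        self.read_persistent = read_persistent    # page_no -> bytes (DASS read)
        self.buffer = {}                          # phys page -> contents

    def access(self, vpage):
        pte = self.ptes[vpage]
        if not pte.present:
            # The MMU would raise a page fault here; the OS agent resolves
            # it by mapping a buffer page and transferring the page from
            # the persistent memory into it.
            phys = self.free.pop()
            self.buffer[phys] = self.read_persistent(vpage)
            pte.phys, pte.present = phys, True
        return self.buffer[pte.phys]
```

Subsequent accesses to an already-mapped page hit the buffer directly, with no further transfer from the persistent memory.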
  • the persistent memory has a latency that is higher than a maximum specified latency of the memory interface.
  • the memory interface is a double data rate (DDR)-compliant interface.
  • the persistent memory comprises non-volatile flash memory devices.
  • the persistent memory comprises magnetic recording media.
  • the persistent memory comprises a dual in-line memory module (DIMM)-attached storage subsystem (DASS).
  • the DASS comprises an SSD.
  • the DASS comprises an HDD.
  • the DASS comprises a solid state hybrid drive (SSHD).
  • the memory buffer comprises at least one of a host buffer and a DIMM buffer.
  • the mapping includes detecting if a physical page of the DIMM buffer is available, and if the physical page of the DIMM buffer is available, updating the page table entry of one of the virtual memory pages with the physical page of the DIMM buffer.
  • the mapping includes detecting if a physical page of the host buffer is available, and if the physical page of the host buffer is available, updating the page table entry of one of the virtual memory pages with the physical page of the host buffer.
  • the method further includes evicting a selected one of the physical pages of the DIMM buffer and updating the page table entry of the one of the virtual memory pages with the selected one of the physical pages of the DIMM buffer.
  • evicting the selected one of the physical pages of the DIMM buffer includes determining whether the selected one of the physical pages of the DIMM buffer corresponds to a dirty page table entry and, if so, transferring data from the selected one of the physical pages of the DIMM buffer to a physical page of the persistent memory.
  • the selected one of the physical pages of the DIMM buffer is selected using an eviction algorithm.
  • the eviction algorithm is one of a Least Recently Used (LRU) algorithm, an Adaptive Replacement Cache (ARC) algorithm, or a Least Frequently Used (LFU) algorithm.
  • LRU Least Recently Used
  • ARC Adaptive Replacement Cache
  • LFU Least Frequently Used
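Of the three eviction policies named, LRU is the simplest to sketch. The snippet below (illustrative names; Python's OrderedDict stands in for whatever structure an OS agent would actually maintain) tracks buffer-page use and picks the least recently used page as the eviction victim, returning its dirty flag so the caller knows whether a write-back to the persistent memory is needed first:

```python
from collections import OrderedDict

class LRUTracker:
    """Least-recently-used tracking for memory-buffer pages."""
    def __init__(self):
        self.pages = OrderedDict()   # phys page -> dirty flag

    def touch(self, phys, dirty=False):
        # Move the page to the most-recently-used end, preserving any
        # dirty flag it already carried.
        was_dirty = self.pages.pop(phys, False)
        self.pages[phys] = was_dirty or dirty

    def pick_victim(self):
        # The least recently used page sits at the front. A dirty victim
        # must be written back to the persistent memory before its buffer
        # page is reused.
        victim, victim_dirty = next(iter(self.pages.items()))
        del self.pages[victim]
        return victim, victim_dirty
```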
  • a computer system includes a memory management unit communicatively coupled to a memory buffer having physical pages, a memory interface controller, and a persistent memory via a memory interface.
  • the memory management unit is configured to generate a page fault in response to a request to access one or more virtual memory pages of a virtual address range not having a corresponding one or more of the physical pages of the memory buffer.
  • the memory interface controller is configured to transfer data between a physical page of the persistent memory and one of the physical pages of the memory buffer corresponding to one of the one or more virtual memory pages.
  • the memory buffer comprises at least one of a host buffer and a DIMM buffer.
  • the persistent memory has a latency that is higher than a maximum specified latency of the memory interface.
  • the memory interface is a DDR-compliant interface.
  • the persistent memory comprises non-volatile flash memory devices.
  • the persistent memory comprises magnetic recording media.
  • the persistent memory comprises a DASS.
  • the DASS comprises an SSD.
  • the DASS comprises an HDD.
  • the DASS comprises a solid state hybrid drive (SSHD).
  • FIG. 1 is a block diagram of a host system, according to one embodiment of the invention.
  • FIG. 2 is a block diagram of hardware and software components enabling access to a DASS, according to one embodiment of the invention.
  • FIG. 3 is a block diagram of software applications accessing a DASS over a memory interface, according to one embodiment of the invention.
  • FIG. 4 is a flowchart of method steps to trigger a page fault in order to map a requested virtual address range to access a DASS, according to one embodiment of the invention.
  • FIG. 5 is a flowchart of method steps for updating PTEs to un-map a virtual address range previously mapped to access a DASS, according to one embodiment of the invention.
  • FIG. 6 is a flowchart of method steps for enabling software application access to a DASS, according to one embodiment of the invention.
  • FIG. 1 is a block diagram of a host system 100 , according to one embodiment of the invention.
  • host system 100 comprises a Central Processing Unit (CPU) 104 having a Memory Management Unit (MMU) 106 .
  • the MMU 106 is communicatively coupled to a plurality of DIMM connectors 108 ( 108 a , 108 b , 108 c , and 108 d ) and is responsible for managing access to memory connected to the DIMM connectors 108 .
  • the MMU 106 is communicatively coupled to the plurality of DIMM connectors 108 via a DDR-compliant interface. While only four DIMM connectors 108 a , 108 b , 108 c , and 108 d are shown in FIG. 1 for simplicity, in other embodiments, the number of DIMM connectors 108 may be one or more.
  • one or more memory modules may be connected to the DIMM connectors 108 a , 108 b , 108 c , or 108 d .
  • the memory modules may comprise any suitable memory devices, including DRAM, static random-access memory (SRAM), magnetoresistive random-access memory (MRAM), or the like, and may serve as a host memory buffer for the host system 100 .
  • One or more DASSs may also be attached to the DIMM connectors 108 a , 108 b , 108 c , or 108 d .
  • the DASSs may serve as a persistent memory for applications running on the host system 100 .
  • the DASSs may include one or more DIMM-connected memory buffers.
  • the one or more DIMM buffers may be any suitable memory devices, including DRAM, SRAM, MRAM, etc.
  • the DASSs comprise non-volatile flash memory.
  • the DASSs comprise magnetic recording media.
  • the DASSs comprise an SSD.
  • the DASSs comprise an HDD.
  • the DASSs comprise an SSHD.
  • FIG. 2 is a block diagram of hardware and software components enabling access to a DASS 218 , according to one embodiment of the invention.
  • the hardware components include a CPU/MMU 210 communicatively coupled to a pool of I/O memory buffers 212 that includes DIMM buffers 215 associated with the DASS 218 mounted to one or more DIMM connectors (such as the DIMM connectors 108 shown in FIG. 1 ), host buffers 217 (also mounted to one or more DIMM connectors, such as the DIMM connectors 108 shown in FIG. 1 ), and a controller command buffer 213 of the DIMM controller 216 .
  • the DIMM interface 220 exposes the DIMM controller command buffer 213 used by the software components to communicate with the DIMM controller 216 .
  • the DIMM buffers 215 , the DIMM controller 216 , and a DASS 218 are connected to a DIMM interface 220 .
  • the DIMM interface 220 operates in accordance with the DDR standards, such as DDR3 or DDR4.
  • the host buffers 217 may comprise DDR memory modules.
  • the DIMM buffers 215 may comprise DDR memory modules.
  • the host buffers 217 and the DIMM buffers 215 may comprise any suitable volatile memory modules capable of operating according to the DDR standards.
  • the DASS 218 comprises non-volatile flash memory.
  • the DASS 218 comprises magnetic recording media.
  • the DASS 218 comprises an SSD.
  • the DASS 218 comprises an HDD.
  • the DASS 218 comprises an SSHD.
  • software applications 202 allocate a virtual address range for use by the software applications 202 .
  • the software applications 202 may use Load/Store CPU instructions over the DIMM interface 220 .
  • the software applications 202 call the OS Agent 206 to provide memory mapping functions to map some or all of the DASS 218 physical address space to the allocated virtual memory address range. This is because virtual memory is, as its name implies, virtual: it must be backed by physical memory before it can actually be accessed.
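This mapping step is analogous to the familiar POSIX mmap pattern, in which an application maps a file or device into its virtual address space and then reads and writes it with ordinary load/store instructions; the mechanism described here differs in that faults are resolved against DIMM/host buffer pages backed by the DASS. A generic illustration of the analogy (not the patent's mechanism):

```python
import mmap
import tempfile

def mmap_roundtrip():
    # Map a 4 KiB temporary file into the process's virtual address
    # space and access it with ordinary slice (load/store) operations;
    # the OS pages the data in and out on demand.
    with tempfile.TemporaryFile() as f:
        f.write(b"\x00" * 4096)
        f.flush()
        view = mmap.mmap(f.fileno(), 4096)
        view[0:4] = b"DATA"      # a store into mapped memory
        data = view[0:4]         # a load from mapped memory
        view.close()
        return data
```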
  • The relationship between the virtual memory address space and the physical memory address space is stored as Page Table Entries (PTEs) 208 .
  • PTEs Page Table Entries
  • the software applications 202 cannot directly access the physical address spaces of the DASS 218 over the DIMM interface 220 . Rather, the OS Agent 206 acts on a page fault that is generated by the CPU/MMU 210 in response to an attempt by the software applications 202 to access a location within the allocated virtual address range which is not yet mapped to a physical address space.
  • CPU/MMU 210 will generate a page fault signal when a Load/Store instruction to read or write a virtual memory address fails because the PTE 208 associated with the address is either indicated as “not valid” or “physical page not present,” or the like, and the OS Agent 206 receives the page fault signal and is responsible for resolving the fault.
  • the OS Agent 206 allocates one or more pages of the memory buffers 212 to be used for mapping the virtual address range.
  • the OS Agent 206 may allocate pages of the memory buffers 212 in a number of different ways. In one embodiment, the OS Agent 206 allocates pages of the memory buffers 212 based on its record of current mappings in effect using the memory buffers 212 . In another embodiment, the OS Agent 206 allocates pages of the memory buffers 212 based on a free memory buffer page list.
  • the OS Agent 206 allocates pages of the memory buffers 212 using an eviction algorithm to evict a previous mapping of the memory buffers 212 .
  • the eviction algorithm selects a location in the memory buffers 212 to be re-used for mapping the virtual address range.
  • the eviction algorithm may be any suitable algorithm, such as a LRU algorithm, an ARC algorithm, or a LFU algorithm.
  • the OS Agent 206 sends a request to the DIMM controller proxy service 204 to write out dirty (modified but unwritten) page(s) from the allocated memory buffers 212 to the DASS 218 if a previous mapping is being evicted (i.e. the allocated pages of the memory buffers 212 were previously mapped to another virtual address space), and read in the page(s) from the DASS 218 into the allocated memory buffers 212 .
  • the DIMM controller proxy service 204 uses a DIMM controller protocol to communicate with the DIMM controller 216 via the DIMM controller command buffer 213 .
  • a DASS interface function copies the page(s) to/from the specified DIMM buffer 215 .
  • the DIMM controller 216 copies the page to/from an internal field programmable gate array (FPGA) buffer.
  • the DIMM controller 216 may comprise an application-specific integrated circuit (ASIC) or a TOSHIBA FIT FAST STRUCTURED ARRAY (FFSA).
  • the DIMM controller proxy service 204 completes the mapping operation by causing the page to be copied from the FPGA buffer to the host buffer, if the memory buffers 212 allocated to the virtual memory address space are host buffer 217 pages, and sends a response to the OS Agent 206 that the mapping operation has been completed.
  • the OS Agent 206 updates the PTEs 208 .
  • the OS Agent 206 also invalidates any PTEs that referred to memory buffers 212 that were selected for eviction.
  • the OS Agent 206 then updates the PTEs to associate page(s) for the requested virtual address range for the software applications 202 with the allocated memory buffers 212 .
  • the page fault condition has now been resolved, and the OS Agent 206 can allow the software applications 202 to resume.
  • the software applications 202 may access the DASS 218 over the memory interface without modifying existing standards or the software applications 202 .
  • the persistent memory storage devices can still be accessed via the host system's memory interface without the overhead of a legacy storage software stack to perform I/O operations.
  • While FIG. 2 shows and describes an embodiment of the present invention in the form of a single software application 202 and a single DASS 218 , a plurality of software applications 202 may access the DASS 218 or a plurality of DASSs 218 in the manner shown and described in connection with FIG. 2 .
  • FIG. 3 is a block diagram of software applications 302 and 304 accessing a DASS 320 over a memory interface 300 , according to one embodiment of the invention.
  • software applications 302 and 304 each have a virtual address space 306 and 308 , respectively.
  • Software application 302 allocates a virtual address range 303 and software application 304 allocates a virtual address range 305 .
  • the virtual address ranges 303 and 305 have corresponding PTEs 315 of page table 310 and PTEs 313 of page table 312 , respectively.
  • the PTEs 315 and 313 corresponding to the virtual address ranges 303 and 305 are mapped to DIMM buffers 318 of a DIMM-connected volatile memory device 316 associated with the DASS 320 .
  • the mapping of the DIMM buffers 318 to the virtual address ranges 303 and 305 may be accomplished using page faults in the manner as shown and described in FIG. 2 .
  • the mapping of the virtual address ranges 303 and 305 to physical memory address of a memory buffer is not limited to the DIMM buffers 318 , and can be, for example, host buffers 314 .
  • the mapped DIMM buffers 318 can then be copied to/from pages 322 of the DASS 320 , in effect allowing the software applications 302 and 304 to access the DASS 320 .
  • FIG. 4 is a flowchart of method steps 400 to trigger a page fault in order to map a requested virtual address range to access a DASS, according to one embodiment of the invention.
  • an OS Agent, for example the OS Agent 206 shown and described in FIG. 2 above, receives a request for a virtual address range from a software application.
  • the OS Agent marks each PTE within the requested virtual address region as “physical page not present” or “not valid,” or the like. In this manner, a page fault will be triggered when the application attempts to read or write to the virtual address range due to the lack of corresponding physical pages backing the virtual address range, allowing the OS Agent to map the virtual address range to DRAM buffer page(s) as shown and described in FIG. 2 .
  • FIG. 5 is a flowchart of method steps 500 for updating PTEs to un-map a virtual address range previously mapped to access a DASS, according to one embodiment of the invention.
  • whether a PTE's physical buffer page corresponding to the virtual address range is dirty or not is first determined.
  • a dirty page refers to a modified page that has not been written to the DASS. If the PTE's physical buffer page is not dirty, then at step 504 , whether the PTE's corresponding physical buffer page is a host system DRAM page (i.e. a host buffer) or not is determined.
  • the host buffer can comprise any suitable memory device, including SRAM and MRAM, and is not limited to DRAM.
  • If not, then the physical buffer page is a DIMM buffer page, and at step 512 , the DIMM buffer page corresponding to the PTE is placed back on a free buffer list. If yes, then at step 514 , the host system DRAM page is released from the PTE. At step 516 , the PTE is then marked “physical page not present” or “not valid,” or the like.
  • If the PTE's physical buffer page is dirty, then whether the PTE's corresponding physical buffer page is a host system DRAM page or not is likewise determined. If not, then the physical buffer page is a DIMM buffer page, and at step 508 , data from the DIMM buffer page is moved to a page of the DASS, and the DIMM buffer page is placed back on the free buffer list at step 512 . If yes, then at step 510 , data from the host system DRAM page is moved to a page of the DASS, and the host system DRAM page is released from the PTE at step 514 . Again, at step 516 , the PTE is then marked “physical page not present” or “not valid,” or the like.
  • the method steps 500 are then repeated for every other PTE corresponding to the requested virtual address range.
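The per-PTE un-mapping loop of FIG. 5 can be sketched as follows; the dict-based PTE representation and all field names are assumptions for illustration only:

```python
def unmap_range(ptes, dass, free_dimm_list):
    """For each PTE of a previously mapped range: write back dirty pages
    to the DASS, return DIMM buffer pages to the free buffer list (host
    DRAM pages are simply released), and mark the PTE not present."""
    for pte in ptes:
        if pte["dirty"]:
            # Dirty page: move its data to the DASS first.
            dass[pte["dass_page"]] = pte["data"]
            pte["dirty"] = False
        if not pte["is_host_dram"]:
            # DIMM buffer page goes back on the free buffer list.
            free_dimm_list.append(pte["phys"])
        pte["phys"] = None       # page released from the PTE
        pte["present"] = False   # "physical page not present" / "not valid"
```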
  • FIG. 6 is a flowchart of method steps 600 for enabling software application access to a DASS, according to one embodiment of the invention.
  • a software application requests access to the DASS using a Load/Store CPU instruction.
  • the CPU's MMU locates the PTEs corresponding to the requested virtual address range.
  • the MMU determines whether the PTEs corresponding to the requested virtual address range have been marked as “physical page not present,” or “not valid,” or the like, or whether they have been marked as “physical page present” or “valid,” or the like. If the PTEs have been marked as “physical page present” then at step 608 , the Load/Store instruction completes successfully and the software application reads or writes data to the physical pages corresponding to the virtual address range.
  • If the PTEs have been marked as “physical page not present,” then, as previously described, the MMU generates a page fault signal at step 610 .
  • the OS Agent then takes over, and at step 612 , it is determined whether the page fault resides within the virtual address range to be associated with the DASS.
  • the OS Agent can be, for example, the OS Agent 206 shown and described in connection with FIG. 2 . If the page fault does not reside within the virtual address range to be associated with the DASS, then at step 614 , the page fault is passed to another system component to handle as it is unrelated to the request by the software application to access the DASS. If, however, the page fault does in fact correspond to the virtual address range to be associated with the DASS, then at step 616 , it is determined whether a free DIMM buffer page is available.
  • If a free DIMM buffer page is not available, then at step 618 , it is determined whether the host system DRAM usage limit has been reached (i.e. there are no available host buffer pages to be allocated to the requested virtual address range). If not, and there are free host system DRAM buffer pages, then at step 622 , the free host system DRAM buffer pages are allocated to the virtual address range and data (either write or read, depending on the Load/Store CPU instruction) is moved into the allocated host system DRAM buffer pages. At step 636 , the PTEs are updated with the allocated host system DRAM buffer pages, and marked as “physical page present” or “valid,” or the like.
  • the Load/Store CPU instruction is re-tried at step 602 , the MMU locates the PTEs corresponding to the virtual address range that have now been marked as “physical page present” at step 604 , which proceeds to step 606 and onto step 608 where the instruction is completed successfully and the instruction is carried out.
  • If the host system DRAM usage limit has been reached, an eviction algorithm is used to identify one or more “victim” DIMM page buffers (depending on the virtual address range) to evict.
  • the eviction algorithm may be any suitable algorithm, such as a LRU algorithm, an ARC algorithm, or a LFU algorithm.
  • the data from the victim DIMM page buffers are moved to page buffers of the DASS, and at step 636 , the PTEs corresponding to the virtual address range are updated with the allocated victim DIMM buffer pages, and marked as “physical page present” or “valid,” or the like. If the victim DIMM page buffers are determined to be not dirty at step 632 , then there is no need to move the data at step 634 , and at step 636 , the PTEs corresponding to the virtual address range are updated with the allocated victim DIMM buffer pages, and marked as “physical page present” or “valid,” or the like.
  • the Load/Store CPU instruction is re-tried at step 602 , the MMU locates the PTEs corresponding to the virtual address range that have now been marked as “physical page present” at step 604 , which proceeds to step 606 and onto step 608 where the instruction is completed successfully and the instruction is carried out.
  • data can be written to or read from the allocated memory buffers by the DASS, as shown in FIG. 3 , for example, in effect enabling the software application to access the DASS over the memory interface.
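The allocation decision at the heart of FIG. 6 — a free DIMM buffer page first, then host DRAM while under its usage limit, then eviction of a victim DIMM page — can be condensed into a small policy function. Names and return shapes are illustrative; `evict` stands in for the LRU/ARC/LFU step:

```python
def allocate_buffer_page(free_dimm, host_pages_used, host_limit, evict):
    """Pick a physical buffer page to back a faulting virtual page."""
    if free_dimm:
        # Step 616: a free DIMM buffer page is available; use it.
        return ("dimm", free_dimm.pop())
    if host_pages_used < host_limit:
        # Step 618/622: host DRAM usage limit not reached; allocate the
        # next host system DRAM buffer page.
        return ("host", host_pages_used)
    # Limit reached: run the eviction algorithm to find a victim page.
    victim, dirty = evict()
    # A dirty victim's data must first be moved to the DASS (step 634)
    # before the page is remapped.
    return ("evicted-dimm", victim, dirty)
```

After any of the three outcomes, the PTEs are updated with the chosen page and marked present, and the faulting Load/Store instruction is retried.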

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method of accessing a persistent memory over a memory interface is disclosed. In one embodiment, the method includes allocating a virtual address range comprising virtual memory pages to be associated with physical pages of a memory buffer and marking each page table entry associated with the virtual address range as not having a corresponding one of the physical pages of the memory buffer. The method further includes generating a page fault when one or more of the virtual memory pages within the virtual address range is accessed and mapping page table entries of the virtual memory pages to the physical pages of the memory buffer. The method further includes transferring data between a physical page of the persistent memory and one of the physical pages of the memory buffer mapped to a corresponding one of the virtual memory pages.

Description

FIELD OF THE INVENTION
The present invention generally relates to adapting storage technologies that are not intrinsically compatible with the Double Data Rate (DDR) memory interface.
BACKGROUND OF THE INVENTION
Today, operating systems (OSs) provide software mechanisms to allow virtual memory backed by traditional physical block access storage devices such as hard disk drives (HDDs) and solid state drives (SSDs). These mechanisms allow for expanded application memory availability by making the available physical dynamic random-access memory (DRAM) of a host system act as a cache for a much larger traditional physical block access storage device (i.e. HDDs or SSDs). However, traditional physical block access storage devices such as HDDs and SSDs remain reliant on legacy communications interfaces, such as Serial AT Attachment (SATA) with the overhead of a storage software stack to perform input/output (I/O) operations.
Modern storage technologies can produce storage devices whose access latency is significantly lower than traditional spinning disk storage devices (i.e. HDDs) and even flash-based storage devices. However, these modern storage technologies have not yet achieved a low-enough latency to render them compatible with DDR specifications. The latency overhead of the software stack that current OSs have in place to support block and file system access is disproportionate and acts as a significant performance penalty for these modern low-latency storage devices.
What is needed, therefore, is an improved computing environment having access to new, high-performance persistent memory technologies over the memory interface, without modification of existing standards or host system software, and without the overhead of a storage software stack to perform I/O operations.
BRIEF DESCRIPTION OF THE INVENTION
In one embodiment, a method of accessing a persistent memory over a memory interface includes allocating a virtual address range comprising virtual memory pages to be associated with physical pages of a memory buffer and marking each page table entry associated with the virtual address range as not having a corresponding one of the physical pages of the memory buffer. The method further includes generating a page fault when one or more of the virtual memory pages within the virtual address range is accessed, and mapping page table entries of the virtual memory pages to the physical pages of the memory buffer. The method further includes transferring data between a physical page of the persistent memory and one of the physical pages of the memory buffer mapped to a corresponding one of the virtual memory pages.
In one embodiment, the persistent memory has a latency that is higher than a maximum specified latency of the memory interface. In one embodiment, the memory interface is a double data rate (DDR)-compliant interface. In one embodiment, the persistent memory comprises non-volatile flash memory devices. In one embodiment, the persistent memory comprises magnetic recording media. In one embodiment, the persistent memory comprises a dual in-line memory module (DIMM)-attached storage subsystem (DASS). In one embodiment, the DASS comprises an SSD. In another embodiment, the DASS comprises an HDD. In yet a further embodiment, the DASS comprises a solid state hybrid drive (SSHD).
In one embodiment, the memory buffer comprises at least one of a host buffer and a DIMM buffer. In one embodiment, the mapping includes detecting if a physical page of the DIMM buffer is available, and if the physical page of the DIMM buffer is available, updating the page table entry of one of the virtual memory pages with the physical page of the DIMM buffer. In yet a further embodiment, the mapping includes detecting if a physical page of the host buffer is available, and if the physical page of the host buffer is available, updating the page table entry of one of the virtual memory pages with the physical page of the host buffer.
In one embodiment, if the physical page of the host buffer is unavailable, the method further includes evicting a selected one of the physical pages of the DIMM buffer and updating the page table entry of the one of the virtual memory pages with the selected one of the physical pages of the DIMM buffer. In one embodiment, evicting the selected one of the physical pages of the DIMM buffer includes determining whether the selected one of the physical pages of the DIMM buffer corresponds to a dirty page table entry and, if so, transferring data from the selected one of the physical pages of the DIMM buffer to a physical page of the persistent memory.
In one embodiment, the selected one of the physical pages of the DIMM buffer is selected using an eviction algorithm. In one embodiment, the eviction algorithm is one of a Least Recently Used (LRU) algorithm, an Adaptive Replacement Cache (ARC) algorithm, or a Least Frequently Used (LFU) algorithm.
In one embodiment, a computer system includes a memory management unit communicatively coupled to a memory buffer having physical pages, a memory interface controller, and a persistent memory via a memory interface. The memory management unit is configured to generate a page fault in response to a request to access one or more virtual memory pages of a virtual address range not having a corresponding one or more of the physical pages of the memory buffer. The memory interface controller is configured to transfer data between a physical page of the persistent memory and one of the physical pages of the memory buffer corresponding to one of the one or more virtual memory pages.
In one embodiment, the memory buffer comprises at least one of a host buffer and a DIMM buffer. In one embodiment, the persistent memory has a latency that is higher than a maximum specified latency of the memory interface. In one embodiment, the memory interface is a DDR-compliant interface. In one embodiment, the persistent memory comprises non-volatile flash memory devices. In one embodiment, the persistent memory comprises magnetic recording media. In one embodiment, the persistent memory comprises a DASS. In one embodiment, the DASS comprises an SSD. In another embodiment, the DASS comprises an HDD. In yet a further embodiment, the DASS comprises a solid state hybrid drive (SSHD).
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 is a block diagram of a host system, according to one embodiment of the invention.
FIG. 2 is a block diagram of hardware and software components enabling access to a DASS, according to one embodiment of the invention.
FIG. 3 is a block diagram of software applications accessing a DASS over a memory interface, according to one embodiment of the invention.
FIG. 4 is a flowchart of method steps to trigger a page fault in order to map a requested virtual address range to access a DASS, according to one embodiment of the invention.
FIG. 5 is a flowchart of method steps for updating PTEs to un-map a virtual address range previously mapped to access a DASS, according to one embodiment of the invention.
FIG. 6 is a flowchart of method steps for enabling software application access to a DASS, according to one embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is a block diagram of a host system 100, according to one embodiment of the invention. As shown in FIG. 1, host system 100 comprises a Central Processing Unit (CPU) 104 having a Memory Management Unit (MMU) 106. The MMU 106 is communicatively coupled to a plurality of DIMM connectors 108 (108 a, 108 b, 108 c, and 108 d) and is responsible for managing access to memory connected to the DIMM connectors 108. In one embodiment, the MMU 106 is communicatively coupled to the plurality of DIMM connectors 108 via a DDR-compliant interface. While only four DIMM connectors 108 a, 108 b, 108 c, and 108 d are shown in FIG. 1 for simplicity, in other embodiments, the number of DIMM connectors 108 may be one or more.
In one embodiment, one or more memory modules may be connected to the DIMM connectors 108 a, 108 b, 108 c, or 108 d. The memory modules may comprise any suitable memory devices, including DRAM, static random-access memory (SRAM), magnetoresistive random-access memory (MRAM), or the like, and may serve as a host memory buffer for the host system 100. One or more DASSs may also be attached to the DIMM connectors 108 a, 108 b, 108 c, or 108 d. The DASSs may serve as a persistent memory for applications running on the host system 100. The DASSs may include one or more DIMM-connected memory buffers. The one or more DIMM buffers may be any suitable memory devices, including DRAM, SRAM, MRAM, etc. In one embodiment, the DASSs comprise non-volatile flash memory. In one embodiment, the DASSs comprise magnetic recording media. In one embodiment, the DASSs comprise an SSD. In another embodiment, the DASSs comprise an HDD. In yet a further embodiment, the DASSs comprise an SSHD.
FIG. 2 is a block diagram of hardware and software components enabling access to a DASS 218, according to one embodiment of the invention. As shown in FIG. 2, the hardware components include a CPU/MMU 210 communicatively coupled to a pool of I/O memory buffers 212 that includes DIMM buffers 215 associated with the DASS 218 mounted to one or more DIMM connectors (such as the DIMM connectors 108 shown in FIG. 1), host buffers 217 (also mounted to one or more DIMM connectors, such as the DIMM connectors 108 shown in FIG. 1), and a controller command buffer 213 of the DIMM controller 216. The DIMM buffers 215, the DIMM controller 216, and the DASS 218 are connected to a DIMM interface 220. The DIMM interface 220 exposes the DIMM controller command buffer 213 used by the software components to communicate with the DIMM controller 216.
In one embodiment, the DIMM interface 220 operates in accordance with the DDR standards, such as DDR3 or DDR4. In one embodiment, the host buffers 217 may comprise DDR memory modules. In one embodiment, the DIMM buffers 215 comprise DDR memory modules. In other embodiments, the host buffers 217 and the DIMM buffers 215 may comprise any suitable volatile memory modules capable of operating according to the DDR standards. In one embodiment, the DASS 218 comprises non-volatile flash memory. In one embodiment, the DASS 218 comprises magnetic recording media. In one embodiment, the DASS 218 comprises an SSD. In another embodiment, the DASS 218 comprises an HDD. In yet a further embodiment, the DASS 218 comprises an SSHD.
In operation, software applications 202 allocate a virtual address range for their own use. The software applications 202 may use Load/Store CPU instructions over the DIMM interface 220. The software applications 202 call the OS Agent 206 to provide memory mapping functions that map some or all of the DASS 218 physical address space to the allocated virtual memory address range. This is because virtual memory is, as its name implies, virtual; it must be backed by physical memory. The relationship between the virtual memory address space and the physical memory address space is stored as Page Table Entries (PTEs) 208.
Given that the DASS 218 is not inherently compliant with the DIMM interface 220 standards (e.g. the DDR standards), the software applications 202 cannot directly access the physical address spaces of the DASS 218 over the DIMM interface 220. Rather, the OS Agent 206 acts on a page fault that is generated by the CPU/MMU 210 in response to an attempt by the software applications 202 to access a location within the allocated virtual address range that is not yet mapped to a physical address space. The CPU/MMU 210 generates a page fault signal when a Load/Store instruction to read or write a virtual memory address fails because the PTE 208 associated with the address is indicated as "not valid" or "physical page not present," or the like, and the OS Agent 206 receives the page fault signal and is responsible for resolving the fault.
Following the page fault, the OS Agent 206 allocates one or more pages of the memory buffers 212 to be used for mapping the virtual address range. The OS Agent 206 may allocate pages of the memory buffers 212 in a number of different ways. In one embodiment, the OS Agent 206 allocates pages of the memory buffers 212 based on its record of current mappings in effect using the memory buffers 212. In another embodiment, the OS Agent 206 allocates pages of the memory buffers 212 based on a free memory buffer page list.
In yet another embodiment, the OS Agent 206 allocates pages of the memory buffers 212 using an eviction algorithm to evict a previous mapping of the memory buffers 212. The eviction algorithm selects a location in the memory buffers 212 to be re-used for mapping the virtual address range. The eviction algorithm may be any suitable algorithm, such as a LRU algorithm, an ARC algorithm, or a LFU algorithm.
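As an illustration of the last of these policies, LRU victim selection over a small pool of buffer pages can be sketched as follows. The class, method names, and page labels are hypothetical, not the patented implementation; an ARC or LFU policy would differ only in how the victim page is chosen.

```python
from collections import OrderedDict

class LRUBufferPool:
    """Illustrative sketch of LRU eviction over DIMM buffer pages."""

    def __init__(self, buf_pages):
        self.free = list(buf_pages)   # free memory buffer page list
        self.mapped = OrderedDict()   # buf_page -> vpn, least recent first

    def touch(self, buf_page):
        # Called on each access so the LRU order tracks recency of use.
        self.mapped.move_to_end(buf_page)

    def map_page(self, vpn):
        # Returns (buffer page, evicted vpn or None). The caller must
        # invalidate the evicted vpn's PTE and write the page back if dirty.
        if self.free:
            buf_page, evicted_vpn = self.free.pop(), None
        else:
            buf_page, evicted_vpn = self.mapped.popitem(last=False)
        self.mapped[buf_page] = vpn
        return buf_page, evicted_vpn

pool = LRUBufferPool(["b0", "b1"])
p1, _ = pool.map_page(100)        # takes a free page
p2, _ = pool.map_page(200)        # takes the last free page
pool.touch(p1)                    # virtual page 100 accessed again
p3, evicted = pool.map_page(300)  # pool full: evicts the least recently used
```

Because page 100's buffer was touched after page 200's, the mapping for virtual page 200 is the one selected for eviction.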
Once pages of the memory buffers 212 have been allocated, the OS Agent 206 sends a request to the DIMM controller proxy service 204 to write out dirty (modified but unwritten) page(s) from the allocated memory buffers 212 to the DASS 218 if a previous mapping is being evicted (i.e. the allocated pages of the memory buffers 212 were previously mapped to another virtual address space), and to read the page(s) from the DASS 218 into the allocated memory buffers 212.
The DIMM controller proxy service 204 uses a DIMM controller protocol to communicate with the DIMM controller 216 via the DIMM controller command buffer 213. In one embodiment, if the pages of the memory buffers 212 allocated to the virtual memory address space requested by the software applications 202 belong to the DIMM buffers 215, a DASS interface function copies the page(s) to/from the specified DIMM buffer 215. In one embodiment, if the pages of the memory buffers 212 allocated to the virtual memory address space requested by the software applications 202 belong to the host buffers 217, the DIMM controller 216 copies the page to/from an internal field programmable gate array (FPGA) buffer. In other embodiments, the DIMM controller 216 may comprise an application-specific integrated circuit (ASIC) or a TOSHIBA FIT FAST STRUCTURED ARRAY (FFSA).
Once the page(s) have been copied to/from the DASS 218, the DIMM controller proxy service 204 completes the mapping operation. If the memory buffers 212 allocated to the virtual memory address space are host buffer 217 pages, it causes the page to be copied from the FPGA buffer to the host buffer, and it then sends a response to the OS Agent 206 that the mapping operation has been completed. The OS Agent 206 updates the PTEs 208. In one embodiment, where the memory buffers 212 were allocated using an eviction algorithm, the OS Agent 206 also invalidates any PTEs that referred to memory buffers 212 selected for eviction. The OS Agent 206 then updates the PTEs to associate the page(s) of the requested virtual address range for the software applications 202 with the allocated memory buffers 212.
Now that physical memory buffers 212 are allocated to the virtual address range, the page fault condition has been resolved, and the OS Agent 206 can allow the software applications 202 to resume. By generating the page fault, mapping the virtual memory spaces to system memory buffers 212, such as host buffers 217 or DIMM buffers 215, and copying pages to/from the DASS 218 to the mapped memory buffers 212, the software applications 202 may access the DASS 218 over the memory interface without modifying existing standards or the software applications 202. Thus, persistent memory storage devices that would otherwise be incompatible because their latency exceeds the maximum latency specified by a host system's memory interface can still be accessed via that interface, without the overhead of a legacy storage software stack to perform I/O operations.
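The write-out/read-in sequence described above can be sketched for a single faulting page as follows. Plain Python dicts stand in for the DASS, the memory buffers, and the page table, and all names are illustrative.

```python
def map_into_buffer(vpn, buf_page, ptes, buffer, dass, dirty, evicted_vpn=None):
    """Resolve one faulting virtual page by (re)using a buffer page."""
    if evicted_vpn is not None:
        ptes.pop(evicted_vpn, None)                # invalidate the evicted PTE
        if dirty.pop(buf_page, False):
            dass[evicted_vpn] = buffer[buf_page]   # write out the dirty page
    buffer[buf_page] = dass.get(vpn, b"\x00" * 4)  # read the page in from DASS
    ptes[vpn] = buf_page                           # fault resolved: PTE valid

dass = {7: b"DATA"}                                # persistent memory contents
ptes, buffer, dirty = {}, {}, {}
map_into_buffer(7, "dimm0", ptes, buffer, dass, dirty)   # initial mapping
buffer["dimm0"] = b"MOD!"                          # application modifies page
dirty["dimm0"] = True
map_into_buffer(9, "dimm0", ptes, buffer, dass, dirty, evicted_vpn=7)
```

After the second call, the modified contents of virtual page 7 have been written back to the DASS, and the buffer page is re-mapped to virtual page 9.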
While FIG. 2 shows and describes an embodiment of the present invention in the form of a single software application 202 and a single DASS 218, in other embodiments, a plurality of software applications 202 may access the DASS 218 or a plurality of DASSs 218 in the manner shown and described in connection with FIG. 2.
FIG. 3 is a block diagram of software applications 302 and 304 accessing a DASS 320 over a memory interface 300, according to one embodiment of the invention. As shown in FIG. 3, software applications 302 and 304 each have a virtual address space 306 and 308, respectively. Software application 302 allocates a virtual address range 303 and software application 304 allocates a virtual address range 305. The virtual address ranges 303 and 305 have corresponding PTEs 315 of page table 310 and PTEs 313 of page table 312, respectively. The PTEs 315 and 313 corresponding to the virtual address ranges 303 and 305 are mapped to DIMM buffers 318 of a DIMM-connected volatile memory device 316 associated with the DASS 320.
The mapping of the DIMM buffers 318 to the virtual address ranges 303 and 305 may be accomplished using page faults in the manner shown and described in FIG. 2. As previously described in connection with FIG. 2, the mapping of the virtual address ranges 303 and 305 to physical memory addresses of a memory buffer is not limited to the DIMM buffers 318, and can be, for example, host buffers 314. The mapped DIMM buffers 318 can then be copied to/from pages 322 of the DASS 320, in effect allowing the software applications 302 and 304 to access the DASS 320.
FIG. 4 is a flowchart of method steps 400 to trigger a page fault in order to map a requested virtual address range to access a DASS, according to one embodiment of the invention. At step 402, an OS Agent, for example the OS Agent 206 shown and described in FIG. 2, above, receives a request for a virtual address range from a software application. At step 404, the OS Agent marks each PTE within the requested virtual address range as "physical page not present" or "not valid," or the like. In this manner, a page fault will be triggered when the application attempts to read or write to the virtual address range due to the lack of corresponding physical pages backing the virtual address range, allowing the OS Agent to map the virtual address range to memory buffer page(s) as shown and described in FIG. 2.
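A minimal sketch of this mark-then-fault mechanism follows, with a dict standing in for the page table and an exception standing in for the hardware fault signal; all names are hypothetical.

```python
class PageFault(Exception):
    """Stand-in for the MMU's page fault signal."""

ptes = {}  # virtual page number -> buffer page, or None if not present

def allocate_range(start_vpn, num_pages):
    # Steps 402/404: mark every PTE in the range "physical page not present".
    for vpn in range(start_vpn, start_vpn + num_pages):
        ptes[vpn] = None

def load(vpn):
    # What the MMU does on a Load/Store: fault if the PTE has no backing.
    if ptes.get(vpn) is None:
        raise PageFault(vpn)      # the OS Agent would map a buffer page here
    return ptes[vpn]

allocate_range(0x10, 4)
try:
    load(0x12)
    faulted = False
except PageFault:
    faulted = True                # first touch of the range faults, as intended
```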
FIG. 5 is a flowchart of method steps 500 for updating PTEs to un-map a virtual address range previously mapped to access a DASS, according to one embodiment of the invention. At step 502, it is determined whether a PTE's physical buffer page corresponding to the virtual address range is dirty. As previously discussed, a dirty page is a modified page that has not yet been written to the DASS. If the PTE's physical buffer page is not dirty, then at step 504, it is determined whether the PTE's corresponding physical buffer page is a host system DRAM page (i.e. a host buffer). As previously discussed, the host buffer can comprise any suitable memory device, including SRAM and MRAM, and is not limited to DRAM. If not, then the physical buffer page is a DIMM buffer page, and at step 512, the DIMM buffer page corresponding to the PTE is placed back on a free buffer list. If it is, then at step 514, the host system DRAM page is released from the PTE. At step 516, the PTE is then marked "physical page not present" or "not valid," or the like.
Alternatively, if at step 502 the PTE's physical buffer page is dirty, then at step 506, whether the PTE's corresponding physical buffer page is a host system DRAM page or not is determined. If not, then the physical buffer page is a DIMM buffer page, and at step 508, data from the DIMM buffer page is moved to a page of the DASS, and the DIMM buffer page is placed back on the free buffer list at step 512. If yes, then at step 510, data from the host system DRAM page is moved to a page of the DASS, and the host system DRAM page is released from the PTE at step 514. Again, at step 516, the PTE is then marked “physical page not present” or “not valid,” or the like.
The method steps 500 are then repeated for every other PTE corresponding to the requested virtual address range.
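The per-PTE decision tree of FIG. 5 can be sketched as follows, with dicts and lists standing in for the real structures; all names are illustrative.

```python
dass = {}                 # persistent memory, keyed by DASS page number
free_dimm_list = []       # step 512 destination for DIMM buffer pages
released_host_pages = [] # step 514 destination for host DRAM pages

def unmap_pte(pte):
    """FIG. 5 steps for a single PTE (illustrative structure)."""
    if pte["dirty"]:                                  # step 502
        dass[pte["dass_page"]] = pte["data"]          # steps 508/510: write back
    if pte["is_host_page"]:                           # steps 504/506
        released_host_pages.append(pte["buf_page"])   # step 514: release
    else:
        free_dimm_list.append(pte["buf_page"])        # step 512: free list
    pte["present"] = False                            # step 516: mark not valid

dirty_dimm = {"dirty": True, "is_host_page": False, "buf_page": "dimm3",
              "dass_page": 7, "data": b"abc", "present": True}
clean_host = {"dirty": False, "is_host_page": True, "buf_page": "host1",
              "dass_page": 8, "data": b"xyz", "present": True}
unmap_pte(dirty_dimm)   # dirty DIMM page: written back, then freed
unmap_pte(clean_host)   # clean host page: released without a write-back
```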
FIG. 6 is a flowchart of method steps 600 for enabling software application access to a DASS, according to one embodiment of the invention. At step 602, a software application requests access to the DASS using a Load/Store CPU instruction. At step 604, the CPU's MMU locates the PTEs corresponding to the requested virtual address range. At step 606, the MMU determines whether the PTEs corresponding to the requested virtual address range have been marked as “physical page not present,” or “not valid,” or the like, or whether they have been marked as “physical page present” or “valid,” or the like. If the PTEs have been marked as “physical page present” then at step 608, the Load/Store instruction completes successfully and the software application reads or writes data to the physical pages corresponding to the virtual address range.
If the PTEs have been marked as "physical page not present," then as previously described, the MMU generates a page fault signal at step 610. In this case the OS Agent takes over, and at step 612 it is determined whether the page fault resides within the virtual address range to be associated with the DASS. In one embodiment, the OS Agent can be, for example, the OS Agent 206 shown and described in connection with FIG. 2. If the page fault does not reside within the virtual address range to be associated with the DASS, then at step 614, the page fault is passed to another system component to handle as it is unrelated to the request by the software application to access the DASS. If, however, the page fault does in fact correspond to the virtual address range to be associated with the DASS, then at step 616, it is determined whether a free DIMM buffer page is available.
If a free DIMM buffer page is unavailable, then at step 618, it is determined whether the host system DRAM usage limit has been reached (i.e. there are no available host buffer pages to be allocated to the requested virtual address range). If not, and there are free host system DRAM buffer pages, then at step 622, the free host system DRAM buffer pages are allocated to the virtual address range and data (either write or read, depending on the Load/Store CPU instruction) is moved into the allocated host system DRAM buffer pages. At step 636, the PTEs are updated with the allocated host system DRAM buffer pages, and marked as “physical page present” or “valid,” or the like. Now that the virtual address range is backed by physical buffer pages, the Load/Store CPU instruction is re-tried at step 602, the MMU locates the PTEs corresponding to the virtual address range that have now been marked as “physical page present” at step 604, which proceeds to step 606 and onto step 608 where the instruction is completed successfully and the instruction is carried out.
Alternatively, if at step 618 it is determined that the host system DRAM usage limit has been reached, then at step 624, an eviction algorithm is used to identify one or more “victim” DIMM page buffers (depending on the virtual address range) to evict. As previously mentioned, the eviction algorithm may be any suitable algorithm, such as a LRU algorithm, an ARC algorithm, or a LFU algorithm. Once the victim DIMM page buffers have been identified, at step 630, the PTEs corresponding to the victim DIMM page buffers are marked as “physical page not present” or “not valid,” or the like. At step 632, it is determined whether the victim DIMM page buffers are dirty. If they are, then at step 634, the data from the victim DIMM page buffers are moved to page buffers of the DASS, and at step 636, the PTEs corresponding to the virtual address range are updated with the allocated victim DIMM buffer pages, and marked as “physical page present” or “valid,” or the like. If the victim DIMM page buffers are determined to be not dirty at step 632, then there is no need to move the data at step 634, and at step 636, the PTEs corresponding to the virtual address range are updated with the allocated victim DIMM buffer pages, and marked as “physical page present” or “valid,” or the like.
Again, now that the virtual address range is backed by physical buffer pages, the Load/Store CPU instruction is re-tried at step 602, the MMU locates the PTEs corresponding to the virtual address range that have now been marked as “physical page present” at step 604, which proceeds to step 606 and onto step 608 where the instruction is completed successfully and the instruction is carried out. Following the method steps 600, data can be written to or read from the allocated memory buffers by the DASS, as shown in FIG. 3, for example, in effect enabling the software application to access the DASS over the memory interface.
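The resolution order of FIG. 6 (a free DIMM buffer page first, then a host buffer page while under the usage limit, then eviction) can be sketched as follows. This simplification selects an LRU victim across all mapped pages, whereas the figure evicts DIMM buffer pages specifically; all names and structures are illustrative stand-ins.

```python
def resolve_fault(vpn, st):
    """Pick a buffer page for a faulting virtual page, per FIG. 6."""
    if st["free_dimm"]:                                # step 616
        buf_page = st["free_dimm"].pop(0)
    elif st["host_in_use"] < st["host_limit"]:         # steps 618/622
        buf_page = "host%d" % st["host_in_use"]
        st["host_in_use"] += 1
    else:                                              # steps 624-634
        victim_vpn = st["lru_order"].pop(0)            # step 624: LRU victim
        buf_page = st["ptes"].pop(victim_vpn)          # step 630: invalidate PTE
        if st["dirty"].pop(buf_page, False):           # step 632: dirty?
            st["dass"][victim_vpn] = "written back"    # step 634: move to DASS
    st["ptes"][vpn] = buf_page                         # step 636: mark present
    st["lru_order"].append(vpn)
    return buf_page                                    # Load/Store now retries

st = {"free_dimm": ["dimm0"], "host_in_use": 0, "host_limit": 1,
      "lru_order": [], "ptes": {}, "dirty": {}, "dass": {}}
p1 = resolve_fault(1, st)    # free DIMM page available
p2 = resolve_fault(2, st)    # host page, under the usage limit
st["dirty"]["dimm0"] = True  # virtual page 1's buffer is modified
p3 = resolve_fault(3, st)    # both exhausted: evicts page 1, writes it back
```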
Other objects, advantages and embodiments of the various aspects of the present invention will be apparent to those who are skilled in the field of the invention and are within the scope of the description and the accompanying Figures. For example, but without limitation, structural or functional elements might be rearranged, or method steps reordered, consistent with the present invention. Similarly, principles according to the present invention could be applied to other examples, which, even if not specifically described here in detail, would nevertheless be within the scope of the present invention.

Claims (20)

What is claimed is:
1. A method of accessing a persistent memory over a memory interface comprising:
allocating a virtual address range comprising virtual memory pages to be associated with physical pages of a memory buffer;
marking each page table entry associated with the virtual address range as not having a corresponding one of the physical pages of the memory buffer;
generating a page fault when one or more of the virtual memory pages within the virtual address range is accessed;
mapping page table entries of the virtual memory pages to the physical pages of the memory buffer; and
transferring data between a physical page of the persistent memory and one of the physical pages of the memory buffer mapped to a corresponding one of the virtual memory pages.
2. The method of claim 1, wherein the memory buffer comprises at least one of a host buffer and a DIMM buffer.
3. The method of claim 2, wherein the mapping comprises:
detecting if a physical page of the DIMM buffer is available; and
if the physical page of the DIMM buffer is available,
updating the page table entry of one of the virtual memory pages with the physical page of the DIMM buffer.
4. The method of claim 2, wherein the mapping comprises:
detecting if a physical page of the host buffer is available; and
if the physical page of the host buffer is available,
updating the page table entry of one of the virtual memory pages with the physical page of the host buffer.
5. The method of claim 3, wherein if the physical page of the DIMM buffer is unavailable,
evicting a selected one of the physical pages of the DIMM buffer; and
updating the page table entry of one of the virtual memory pages with the selected one of the physical pages of the DIMM buffer.
6. The method of claim 5, wherein evicting the selected one of the physical pages of the DIMM buffer comprises:
determining whether the selected one of the physical pages of the DIMM buffer corresponds to a dirty page table entry; and
if so, transferring data from the selected one of the physical pages of the DIMM buffer to a physical page of the persistent memory.
7. The method of claim 5, wherein the selected one of the physical pages of the DIMM buffer is selected using an eviction algorithm.
8. The method of claim 7, wherein the eviction algorithm is one of a Least Recently Used algorithm, an Adaptive Replacement Cache algorithm, and a Least Frequently Used algorithm.
9. The method of claim 1, wherein the persistent memory has a latency that is higher than a maximum specified latency of the memory interface.
10. The method of claim 1, wherein the memory interface is a DDR-compliant interface.
11. The method of claim 1, wherein the persistent memory comprises at least one of non-volatile flash memory devices and magnetic recording media.
12. The method of claim 1, wherein the persistent memory comprises a DIMM-attached storage subsystem.
13. The method of claim 12, wherein the DIMM-attached storage subsystem is one of an SSD, an HDD, and an SSHD.
14. A computer system comprising:
a memory management unit communicatively coupled to a memory buffer having physical pages, a memory interface controller, and a persistent memory via a memory interface, wherein
the memory management unit is configured to generate a page fault in response to a request to access one or more virtual memory pages of a virtual address range not having a corresponding one or more of the physical pages of the memory buffer, and
the memory interface controller is configured to transfer data between a physical page of the persistent memory and one of the physical pages of the memory buffer corresponding to one of the one or more virtual memory pages.
15. The computer system of claim 14, where the memory buffer comprises at least one of a host buffer and a DIMM buffer.
16. The computer system of claim 14, wherein the persistent memory has a latency that is higher than a maximum specified latency of the memory interface.
17. The computer system of claim 14, wherein the memory interface is a DDR-compliant interface.
18. The computer system of claim 14, wherein the persistent memory comprises at least one of non-volatile flash memory devices and magnetic recording media.
19. The computer system of claim 14, wherein the persistent memory comprises a DIMM-attached storage subsystem.
20. The computer system of claim 19, wherein the DIMM-attached storage subsystem is one of an SSD, an HDD, and an SSHD.
US15/262,434 2016-09-12 2016-09-12 DDR storage adapter Active US9916256B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/262,434 US9916256B1 (en) 2016-09-12 2016-09-12 DDR storage adapter
US15/888,483 US10430346B2 (en) 2016-09-12 2018-02-05 DDR storage adapter


Publications (2)

Publication Number Publication Date
US9916256B1 true US9916256B1 (en) 2018-03-13
US20180074971A1 US20180074971A1 (en) 2018-03-15

Family

ID=61525573


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112463665A (en) * 2020-10-30 2021-03-09 中国船舶重工集团公司第七0九研究所 Switching method and device for multi-channel video memory interleaving mode

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
US20190042460A1 (en) * 2018-02-07 2019-02-07 Intel Corporation Method and apparatus to accelerate shutdown and startup of a solid-state drive
US10802748B2 (en) * 2018-08-02 2020-10-13 MemVerge, Inc Cost-effective deployments of a PMEM-based DMO system
US11061609B2 (en) 2018-08-02 2021-07-13 MemVerge, Inc Distributed memory object method and system enabling memory-speed data access in a distributed environment
US11134055B2 (en) 2018-08-02 2021-09-28 Memverge, Inc. Naming service in a distributed memory object architecture
US11169920B2 (en) 2018-09-17 2021-11-09 Micron Technology, Inc. Cache operations in a hybrid dual in-line memory module
KR102400977B1 (en) * 2020-05-29 2022-05-25 성균관대학교산학협력단 Method for processing page fault by a processor

Citations (10)

Publication number Priority date Publication date Assignee Title
US4520441A (en) * 1980-12-15 1985-05-28 Hitachi, Ltd. Data processing system
US6112285A (en) * 1997-09-23 2000-08-29 Silicon Graphics, Inc. Method, system and computer program product for virtual memory support for managing translation look aside buffers with multiple page size support
US7475183B2 (en) * 2005-12-12 2009-01-06 Microsoft Corporation Large page optimizations in a virtual machine environment
US20090113216A1 (en) * 2007-10-30 2009-04-30 Vmware, Inc. Cryptographic multi-shadowing with integrity verification
US7716411B2 (en) 2006-06-07 2010-05-11 Microsoft Corporation Hybrid memory device with single interface
US20110161620A1 (en) * 2009-12-29 2011-06-30 Advanced Micro Devices, Inc. Systems and methods implementing shared page tables for sharing memory resources managed by a main operating system with accelerator devices
US8719543B2 (en) * 2009-12-29 2014-05-06 Advanced Micro Devices, Inc. Systems and methods implementing non-shared page tables for sharing memory resources managed by a main operating system with accelerator devices
US8943296B2 (en) * 2011-04-28 2015-01-27 Vmware, Inc. Virtual address mapping using rule based aliasing to achieve fine grained page translation
US9063877B2 (en) * 2013-03-29 2015-06-23 Kabushiki Kaisha Toshiba Storage system, storage controller, and method for managing mapping between local address and physical address
US9740637B2 (en) * 2007-10-30 2017-08-22 Vmware, Inc. Cryptographic multi-shadowing with integrity verification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8738840B2 (en) * 2008-03-31 2014-05-27 Spansion Llc Operating system based DRAM/FLASH management scheme

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4520441A (en) * 1980-12-15 1985-05-28 Hitachi, Ltd. Data processing system
US6112285A (en) * 1997-09-23 2000-08-29 Silicon Graphics, Inc. Method, system and computer program product for virtual memory support for managing translation look aside buffers with multiple page size support
US7475183B2 (en) * 2005-12-12 2009-01-06 Microsoft Corporation Large page optimizations in a virtual machine environment
US7716411B2 (en) 2006-06-07 2010-05-11 Microsoft Corporation Hybrid memory device with single interface
US8261265B2 (en) * 2007-10-30 2012-09-04 Vmware, Inc. Transparent VMM-assisted user-mode execution control transfer
US9336033B2 (en) * 2007-10-30 2016-05-10 Vmware, Inc. Secure identification of execution contexts
US20090113111A1 (en) * 2007-10-30 2009-04-30 Vmware, Inc. Secure identification of execution contexts
US9740637B2 (en) * 2007-10-30 2017-08-22 Vmware, Inc. Cryptographic multi-shadowing with integrity verification
US20090113216A1 (en) * 2007-10-30 2009-04-30 Vmware, Inc. Cryptographic multi-shadowing with integrity verification
US8555081B2 (en) * 2007-10-30 2013-10-08 Vmware, Inc. Cryptographic multi-shadowing with integrity verification
US8607013B2 (en) * 2007-10-30 2013-12-10 Vmware, Inc. Providing VMM access to guest virtual memory
US20090113110A1 (en) * 2007-10-30 2009-04-30 Vmware, Inc. Providing VMM Access to Guest Virtual Memory
US8819676B2 (en) * 2007-10-30 2014-08-26 Vmware, Inc. Transparent memory-mapped emulation of I/O calls
US9658878B2 (en) * 2007-10-30 2017-05-23 Vmware, Inc. Transparent memory-mapped emulation of I/O calls
US8719543B2 (en) * 2009-12-29 2014-05-06 Advanced Micro Devices, Inc. Systems and methods implementing non-shared page tables for sharing memory resources managed by a main operating system with accelerator devices
US20110161620A1 (en) * 2009-12-29 2011-06-30 Advanced Micro Devices, Inc. Systems and methods implementing shared page tables for sharing memory resources managed by a main operating system with accelerator devices
US8943296B2 (en) * 2011-04-28 2015-01-27 Vmware, Inc. Virtual address mapping using rule based aliasing to achieve fine grained page translation
US9063877B2 (en) * 2013-03-29 2015-06-23 Kabushiki Kaisha Toshiba Storage system, storage controller, and method for managing mapping between local address and physical address

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Anirudh Badam et al., "Better Flash Access via Shape-shifting Virtual Memory Pages," TRIOS'13 Proceedings of the First ACM SIGOPS Conference on Timely Results in Operating Systems, Article No. 3, ACM New York, NY, Nov. 3-Nov. 6, 2013, pp. 1-14.
Anirudh Badam et al., "SSDAlloc: Hybrid SSD/RAM Memory Management Made Easy," USENIX Association Berkeley, CA, Mar. 30-Apr. 1, 2011, pp. 1-14.

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112463665A (en) * 2020-10-30 2021-03-09 中国船舶重工集团公司第七0九研究所 Switching method and device for multi-channel video memory interleaving mode
CN112463665B (en) * 2020-10-30 2022-07-26 中国船舶重工集团公司第七0九研究所 Switching method and device for multi-channel video memory interleaving mode

Also Published As

Publication number Publication date
US20180074971A1 (en) 2018-03-15
US10430346B2 (en) 2019-10-01
US20180157597A1 (en) 2018-06-07

Similar Documents

Publication Publication Date Title
US10430346B2 (en) DDR storage adapter
US10152428B1 (en) Virtual memory service levels
US9910602B2 (en) Device and memory system for storing and recovering page table data upon power loss
CN111033477B (en) Logical to physical mapping
US9323659B2 (en) Cache management including solid state device virtualization
US8627040B2 (en) Processor-bus-connected flash storage paging device using a virtual memory mapping table and page faults
US9047200B2 (en) Dynamic redundancy mapping of cache data in flash-based caching systems
US20170228191A1 (en) Systems and methods for suppressing latency in non-volatile solid state devices
US20090164715A1 (en) Protecting Against Stale Page Overlays
WO2012109679A2 (en) Apparatus, system, and method for application direct virtual memory management
US11016905B1 (en) Storage class memory access
US10769062B2 (en) Fine granularity translation layer for data storage devices
KR102168193B1 (en) System and method for integrating overprovisioned memory devices
US20160266793A1 (en) Memory system
US11449423B1 (en) Enhancing cache dirty information
US9785552B2 (en) Computer system including virtual memory or cache
US20220382478A1 (en) Systems, methods, and apparatus for page migration in memory systems
US10241906B1 (en) Memory subsystem to augment physical memory of a computing system
US10915262B2 (en) Hybrid storage device partitions with storage tiers
US12105968B2 (en) Systems, methods, and devices for page relocation for garbage collection
JP4792065B2 (en) Data storage method
US20230409472A1 (en) Snapshotting Pending Memory Writes Using Non-Volatile Memory
US20240061786A1 (en) Systems, methods, and apparatus for accessing data in versions of memory pages

Legal Events

Date Code Title Description
AS Assignment

Owner name: TOSHIBA MEMORY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KABUSHIKI KAISHA TOSHIBA;REEL/FRAME:043397/0380

Effective date: 20170706

AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAXEY, DAVID;KAMATH, NIDISH;AGRAWAL, VIKAS;SIGNING DATES FROM 20160910 TO 20160916;REEL/FRAME:043836/0860

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: K.K. PANGEA, JAPAN

Free format text: MERGER;ASSIGNOR:TOSHIBA MEMORY CORPORATION;REEL/FRAME:055659/0471

Effective date: 20180801

Owner name: KIOXIA CORPORATION, JAPAN

Free format text: CHANGE OF NAME AND ADDRESS;ASSIGNOR:TOSHIBA MEMORY CORPORATION;REEL/FRAME:055669/0001

Effective date: 20191001

Owner name: TOSHIBA MEMORY CORPORATION, JAPAN

Free format text: CHANGE OF NAME AND ADDRESS;ASSIGNOR:K.K. PANGEA;REEL/FRAME:055669/0401

Effective date: 20180801

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4