WO2016122471A1 - Memory module persistent data back-ups - Google Patents

Memory module persistent data back-ups

Info

Publication number
WO2016122471A1
Authority
WO
WIPO (PCT)
Prior art keywords
persistent data
memory
volatile
engine
computing system
Prior art date
Application number
PCT/US2015/013213
Other languages
French (fr)
Inventor
Jeffrey A. Plank
David F. Heinrich
Han Wang
Patrick A. Raymond
Raghavan V. Venugopal
Barry L. Olawsky
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/013213
Publication of WO2016122471A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7203 Temporary buffering, e.g. using volatile buffer or dedicated buffer blocks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

An example device in accordance with an aspect of the present disclosure is to identify, according to metadata, persistent data to be backed up from a plurality of volatile memory modules. The persistent data is copied from the plurality of volatile memory modules to the non-volatile storage, while excluding the non-persistent data.

Description

MEMORY MODULE PERSISTENT DATA BACK-UPS
BACKGROUND
[0001] Persistent memory solutions have been limited to a granularity of entire memory modules, and may need contiguous memory ranges, with the associated overhead of requiring more expensive non-volatile memory. Supporting non-volatile memory can therefore impose high costs and design constraints on applications that access the memory.
BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
[0002] FIG. 1 is a block diagram of a computing system including a memory engine and metadata according to an example.
[0003] FIG. 2 is a block diagram of a computing system including memory controller instructions, DMA engine instructions, and metadata instructions according to an example.
[0004] FIG. 3 is a block diagram of a computing system including a memory controller, a DMA engine, and a metadata engine according to an example.
[0005] FIG. 4 is a flow chart based on copying persistent data according to an example.
DETAILED DESCRIPTION
[0006] A given server may support large numbers of memory modules, and include a basic input/output system (BIOS) that may group volatile and nonvolatile memory into different segments. However, applications to be run on the server may be allocated a single segment (linear region of memory) by the system, despite a desire to provide back-up memory support to the application. Accordingly, a mismatch may exist between how the system allocates memory in different segments, in contrast to how applications are allocated to a single linear region of memory, creating difficulties when desiring to have an application interact with volatile and non-volatile memory.
[0007] Examples described herein may avoid such difficulties, by enabling support of persistent and non-persistent application data in memory, in a manner that is compatible with how applications may expect data to be presented. Allowing persistent data to coexist with non-persistent data in a single contiguous memory block as presented by the system (e.g., as presented to the operating system) can simplify the task of memory allocation for applications, especially applications that interact with non-volatile memory. Examples described herein may identify that a portion, rather than all, information stored in memory is to be retained after a power loss condition at the server. Such portion of the stored information, referred to herein as persistent data, may be distributed in many small blocks throughout the memory (e.g., throughout a plurality of memory modules). Accordingly, examples may avoid a need for all memory modules to be non-volatile. In contrast, a subset of memory modules may be provided as non-volatile memory (i.e., memory that is relatively more expensive), with the remainder of the memory modules being volatile (e.g., less expensive commodity memory modules).
[0008] FIG. 1 is a block diagram of a computing system 100 including a memory engine 110 and metadata 120 according to an example. The metadata 120 may be stored in storage 104. The memory engine 110 and metadata 120 may interact with volatile memory module(s) 140 and non-volatile storage 150. A volatile memory module 140 includes persistent data 122 and non-persistent data 124. The non-volatile storage 150 includes backed-up persistent data 126, copied from persistent data 122 of the volatile memory modules 140.
[0009] The memory engine 110 is to generally coordinate data access/flow in the memory (volatile memory modules 140 and non-volatile storage 150), including identifying locations of the persistent data 122 and non-persistent data 124 throughout a plurality of volatile memory modules 140. Thus, the memory engine 110 may assist in building the metadata 120, such as by identifying characteristics (e.g., address/location) of persistent data 122 to be stored as metadata 120. In some examples, the memory engine 110 may include a memory controller and/or a DMA engine, and functionality described in terms of the memory engine 110 may be provided by a memory controller and/or a DMA engine.
[0010] The memory engine 110 may allow access to the memory 140, 150 directly, e.g., independent of a central processing unit (CPU) of the computing system 100 (not shown in FIG. 1). The memory engine 110 may serve as a data mover, and may copy/move the persistent data 122 from the volatile memory module 140 to the non-volatile storage 150 (which may include a memory module(s)), and vice-versa. In some examples, the memory engine 110 may be a memory controller and/or a direct memory access (DMA) engine.
[0011] The memory, such as volatile memory modules 140 and/or non-volatile storage 150, may be formed of memory modules such as dual in-line memory modules (DIMMs), single in-line memory modules (SIMMs), small outline DIMMs (SO-DIMMs), and so on. The non-volatile storage 150 may include memory that retains its contents during a power loss, such as flash memory, memristor technology, and the like. The non-volatile storage 150 may include high speed random access memory, e.g., memory associated with faster performance than other forms of storage such as hard drives or solid state discs, and may include a portion of memory that is volatile.
[0012] The volatile memory modules 140 include persistent data 122 and non-persistent data 124. The persistent data 122 represents portion(s) of the volatile memory module 140 that are to be tracked and/or backed-up. The persistent data 122 may encompass a majority of a given volatile memory module 140, or may represent a small portion of the volatile memory module 140. Notably, examples are not limited to needing to perform a full or complete back-up of an entire given volatile memory module 140, in order to back-up the persistent data 122 existing in a volatile memory module 140. Examples described herein may use, e.g., tracking software (such as a metadata engine) to keep track of the areas of persistent data 122 that are to be backed-up. Such information regarding the persistent data 122 may be stored as metadata 120. In response to a power loss condition (e.g., blackout or other interruption to power delivery to the computing system 100), the persistent data 122 may be moved/copied to the non-volatile storage 150. Thus, examples such as system 100 enable mapping of persistent data 122 and non-persistent data 124 within a single linear region of memory, which may be allocated to an application. Accordingly, the system 100 may use, e.g., commodity volatile DIMMs for a majority of the computing system's memory space, while providing a relatively smaller amount of non-volatile DIMMs to serve as non-volatile storage 150 for backed-up persistent data 126.
[0013] The persistent data 122 does not need to be from contiguous memory addresses, and may be located at disparate memory locations throughout the plurality of volatile memory modules 140. Thus, specific portions of the volatile memory modules 140 may be targeted (e.g., according to metadata 120) for back-up, whether located in block 1, or block 12, or other non-contiguous locations from various memory modules. In alternate examples, the persistent data 122 may occupy the entirety of a given volatile memory module 140. The volatile memory modules 140 are not limited to those located physically within a given computing system, and may be located throughout systems across multiple geographical locations. The backed-up persistent data 126 is shown as a single block of data, but may be spread as multiple blocks throughout multiple memory modules comprising the non-volatile storage 150.
[0014] The metadata 120 may be used by the system 100 to keep track of addresses of the persistent data 122 that is to be backed-up, and may serve as a data pointer. The metadata 120 is shown stored in storage 104, and in alternate examples, the metadata 120 may be stored, along with the pertinent persistent data, in the non-volatile storage 150. In some examples, the metadata 120 may be provided as a descriptor table, a linked list of descriptors, and so on.
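The linked list of descriptors mentioned above might be sketched as follows. The `Descriptor` field names and the helper function are illustrative assumptions for this sketch; the disclosure does not prescribe a specific metadata layout.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Descriptor:
    """One contiguous range of persistent data 122 to back up (illustrative)."""
    src_addr: int   # start address within the volatile memory modules
    length: int     # number of bytes in this persistent region
    next: Optional["Descriptor"] = None  # link to the next descriptor

def total_persistent_bytes(head: Optional[Descriptor]) -> int:
    """Walk the descriptor list and sum the bytes that must be backed up."""
    total = 0
    while head is not None:
        total += head.length
        head = head.next
    return total

# Persistent data may sit at disparate, non-contiguous addresses.
d2 = Descriptor(src_addr=0x4000_0000, length=4096)
d1 = Descriptor(src_addr=0x0000_1000, length=8192, next=d2)
```

A descriptor table would hold the same (address, length) pairs in a flat array instead of a chain; either form serves as the "data pointer" role the metadata 120 plays here.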
[0015] For example, the system 100 may use a caching procedure to identify areas of the volatile memory modules 140 that are to be backed-up. In an example, such areas may be identified in the metadata 120 according to the memory engine 110 identifying data that has passed through the static random access memory (SRAM) of the system's CPU. Other approaches may be used to flag persistent data 122 and store the corresponding identification information in the metadata 120. Such techniques may be applied dynamically in real-time during operation of the system 100. In alternate examples, the system 100 may periodically check for any updates to what data is to be considered persistent, and its corresponding locations etc.
[0016] Accordingly, use of metadata 120, to identify the persistent data 122 to be backed-up to non-volatile storage 150, can optimize the manner in which applications interact with system 100, by presenting memory to the applications as a whole memory space, e.g., in one continuous region, without a need to sector off the memory space into a volatile section(s) and a non-volatile section(s). Examples may thereby provide the memory space as a BIOS-to-application interface, enabling applications to indicate what data is to be backed up, and have the system 100 track that information in metadata 120 and save that pertinent information as backed-up persistent data 126 in non-volatile storage 150. Thus, a large variety of applications may seamlessly make use of non-volatile storage 150 in conjunction with the volatile memory modules 140, without a need to customize the application for dealing with segmented memory, because system 100 enables such interaction seamlessly.
[0017] System 100 may provide applications with a continuous block of memory that enjoys backed-up persistent data 126 on non-volatile storage 150. Memory map designs for using non-volatile storage/memory for persistent data can be simplified, enabling interaction and support for applications that would not otherwise support back-up of persistent data 122 onto non-volatile memory (e.g., due to a need for a single linear memory region that system 100 may provide, despite the presence of both volatile and non-volatile memory).
[0018] Thus, examples described herein enable multiple benefits and avoid underutilization of persistent storage/memory. Applications that otherwise could not, may now take advantage of persistent memory systems, and complex applications that have been modified to address segmented memory may be streamlined by avoiding a need to separately address volatile and non-volatile memory spaces. Furthermore, the non-volatile storage 150 may be used without slowing down system and/or application performance, due to the high-speed nature of the non-volatile storage 150 (in contrast to slower forms of non-volatile storage, such as a hard drive, for example).
[0019] Storage 104 may be accessible by the system 100, to serve as a computer-readable repository to store information such as metadata 120 that may be referenced by the memory engine 110 during operation of the system 100. As described herein, the term "controller" and/or "engine" may include electronic circuitry for implementing functionality consistent with disclosed examples. For example, memory engine 110 may represent combinations of hardware devices (e.g., processor and/or memory) and programming to implement the functionality consistent with disclosed implementations. In examples, the programming for the engines may be processor-executable instructions stored on a non-transitory machine-readable storage media, and the hardware for the engines may include a processing resource to execute those instructions. An example system (e.g., a computing device), such as system 100, may include and/or receive tangible non-transitory computer-readable media storing the set of computer-readable instructions. As used herein, the processor/processing resource may include one or a plurality of processors, such as in a parallel processing system, to execute the processor-executable instructions. The memory can include memory addressable by the processor for execution of computer-readable instructions. The computer-readable media can include volatile and/or non-volatile memory such as a random access memory ("RAM"), magnetic memory such as a hard disk, floppy disk, and/or tape memory, a solid state drive ("SSD"), flash memory, phase change memory, and so on.
[0020] In some examples, the functionality of engines may correspond to operations performed in response to, e.g., information from storage 104, interactions as interpreted and/or initiated by, e.g., the memory engine 110, and so on. The storage 104 may be accessible by the system 100 as a computer-readable storage media, in which to store items in a format that may be accessible by the memory engine 110. Thus, examples may be provided as a software mechanism.
[0021] FIG. 2 is a block diagram of a computing system 200 including memory controller instructions 210, DMA engine instructions 230, and metadata instructions 260 according to an example. The computer-readable media 204 is associated with a processor 202 and metadata 220, and may store the instructions 210, 230, 260. The memory controller instructions 210 may instruct a memory controller to perform data access/flow. The DMA engine instructions 230 may instruct a DMA engine to copy/move data between memory types. Accordingly, the memory controller instructions 210 and/or DMA engine instructions 230 may be carried out by the memory engine 110 of FIG. 1. The metadata instructions 260 may correspond to a metadata engine (not specifically shown in FIG. 1; see metadata engine 320 of FIG. 3) that may be included in the computing system 100 of FIG. 1. In some examples, operations performed when instructions 210, 230, and 260 are executed by processor 202 may correspond to the functionality of memory engine 110 and the metadata engine.
[0022] As set forth above with respect to FIG. 1, memory engine 110 may include combinations of hardware and programming that may correspond to memory controller instructions 210 and/or DMA engine instructions 230. Such components may be implemented in a number of fashions. For example, the programming may be processor-executable instructions stored on tangible, non-transitory computer-readable media 204 and the hardware may include processor 202 for executing instructions 210, 230, 260. Processor 202 may, for example, include one or multiple processors. Such multiple processors may be integrated in a single device or distributed across devices. Media 204 may store program instructions, that when executed by processor 202, implement system 100 of FIG. 1 (and/or system 300 of FIG. 3). Media 204 may be integrated in the same device as processor 202, or it may be separate and accessible to that device and processor 202.
[0023] In some examples, program instructions can be part of an installation package that when installed can be executed by processor 202 to implement system 100. In this case, media 204 may be a portable media such as a CD, DVD, flash drive, or a memory maintained by a server from which the installation package can be downloaded and installed. In another example, the program instructions may be part of an application or applications already installed. Here, media 204 can include integrated memory such as a hard drive, solid state drive, or the like. While in FIG. 2, media 204 includes instructions 210, 230, 260, one or more instructions may be located remotely from media 204. Conversely, although FIG. 2 illustrates metadata 220 located separate from media 204, the metadata 220 may be included with media 204.
[0024] The computer-readable media 204 may provide volatile storage, e.g., random access memory for execution of instructions. The computer-readable media 204 also may provide non-volatile storage, e.g., hard disk or solid state disk for storage. Components of FIG. 2 may be stored in any type of computer-readable media, whether volatile or non-volatile. Content stored on media 204 may include images, text, executable files, scripts, or other content that may be used by examples as set forth below. For example, media 204 may contain instructions 210, 230, 260 and/or information that may be used by memory engine 110 (or other engines such as a metadata engine etc.), to provide control or other information.
[0025] FIG. 3 is a block diagram of a computing system 300 including a memory controller 310, a DMA engine 330, and a metadata engine 320 according to an example. The memory controller 310 may be associated with an allocation threshold 312. The DMA engine 330 and memory controller 310 may form at least a portion of a central processing unit (CPU) 302, which may receive a power loss signal 306. The DMA engine 330 and memory controller 310 may also correspond to a memory engine, such as memory engine 110 of FIG. 1. The DMA engine 330 and memory controller 310 are coupled to memory bus 314, which is coupled to a plurality of volatile DIMMs 340 and non-volatile DIMMs 350. Back-up energy source 303 is to provide power to the DMA engine 330, memory controller 310, volatile DIMMs 340, and non-volatile DIMMs 350 (the back-up energy source domain shown enclosed in dashed lines). The metadata engine 320 may interact with components of the CPU 302, DIMMs 340, 350 (via memory bus 314), BIOS 308, and operating system (OS) 309. The metadata engine 320 is associated with memory locations 328.
[0026] Examples based on system 300 may enable non-volatile memory back-up at a CPU/system level. The CPU 302 may be selectively powered, e.g., to power those portions of the CPU 302 used to move/copy at least a portion of the memory contents (e.g., persistent data) from the volatile DIMMs 340 to the non-volatile DIMMs 350, while allowing other portions of the CPU 302 to remain unpowered. Thus, there is no need for system 300 to use a customized application-specific integrated circuit (ASIC) or system on a chip (SOC) in order to control the memory operations, because system 300 is operable based on using a portion of the computing system's CPU 302 (e.g., the DMA engine 330 and the memory controller 310). System 300 thus may shut down unnecessary resources in the CPU 302, to reduce power during back-up operations.
In an example, system 300 may start a back-up process by quiescing the CPU 302 and going into a lowest power state that still supports operation of the DMA engine 330 and the memory controller 310, to move persistent data to the non-volatile DIMMs 350. Thus, there is no need to repurpose signals/channels used to interact with the DIMMs, which may be accessed and controlled normally by the CPU 302 (or other memory controller 310/DMA engine 330). Although shown integrated into the CPU 302, in alternate examples, the memory controller 310 and/or the DMA engine 330 may be provided separately. In some examples, the back-up procedure may use compression and/or encryption on the persistent data to be backed up to non-volatile memory.
[0027] System 300 includes a plurality of memory slots that are occupied by the volatile and non-volatile DIMMs 340, 350. As shown, a relatively small portion of the DIMM slots are occupied by non-volatile DIMMs 350. However, system 300 enables non-volatile memory back-up support across the memory address map, even though the memory slots are populated mostly by volatile DIMMs 340 that do not have flash memory (e.g., standard or commodity DIMMs) in the example of FIG. 3. The persistent data may be distributed throughout the DIMMs 340, 350, e.g., according to the memory locations 328 of metadata. After a back-up procedure is performed, the persistent data may reside safely backed-up in the non-volatile DIMMs 350. Although illustrated as two non-volatile DIMMs 350 located in the third and tenth memory slots, examples may use fewer or greater numbers of non-volatile DIMMs, which may be located in other memory slot(s).
[0028] Thus, examples do not need to locate non-volatile memory at every region, or at specific regions, of the memory map, thereby reducing system costs by avoiding a need to populate all memory slots with non-volatile DIMMs 350. For example, an application to be executed on system 300 may be allocated the use of two volatile DIMMs 340 for working memory. If only a small portion of the data in either of those two volatile DIMMs 340 is persistent data (as indicated by metadata/memory locations 328), there is no need to back-up the entirety of those two volatile DIMMs 340. Rather, system 300 may back-up just the persistent data located on the allocated two volatile DIMMs 340, which may fit on a fraction of a single non-volatile DIMM 350.
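The saving described above can be made concrete with hypothetical numbers; all capacities and the persistent fraction below are assumptions for illustration, not figures from the disclosure.

```python
GIB = 2**30  # one gibibyte, in bytes

volatile_dimms = 2                 # DIMMs allocated to the application
volatile_capacity = 16 * GIB       # per volatile DIMM (hypothetical)
persistent_fraction = 0.05         # 5% of the allocated memory is persistent

# Only the persistent portion needs non-volatile backing.
persistent_bytes = volatile_dimms * volatile_capacity * persistent_fraction

nv_dimm_capacity = 8 * GIB         # one non-volatile DIMM (hypothetical)
fraction_of_nv_dimm = persistent_bytes / nv_dimm_capacity
```

Under these assumptions, 1.6 GiB of persistent data from 32 GiB of volatile memory occupies a fifth of a single non-volatile DIMM, illustrating why the memory slots can be populated mostly with commodity volatile DIMMs.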
[0029] Example systems are not limited to removable memory modules, and may include fixed/soldered memory or other memory that is not in the form of a memory module. Example systems 300 may include memory slots that support volatile DIMMs 340, non-volatile DIMMs 350, and/or both volatile and non-volatile DIMMs 340 and 350. In an example, a system's BIOS may discover what types of memory slots and/or memory modules are populating the system's memory slots, and/or may refer to a configuration process performed by the customer to designate a given memory configuration (e.g., what type and how much of the memory is available as non-volatile storage). Such information may be used by the BIOS, and/or communicated to an operating system (OS), to perform allocations according to available memory. In an example system, every nth memory slot may provide support for non-volatile memory modules, enabling the system to support a fewer number of non-volatile memory modules than volatile memory modules, thereby saving costs on the system motherboard (compared to enabling support at all of the memory slots). Such configurations may be supported according to desired memory mapping and corresponding capabilities of the memory controller 310 and/or DMA engine 330. In an example, the non-volatile memory may be provided at a top of a linear memory stack, e.g., to provide good performance in backing up the data efficiently and quickly by avoiding segmentation of the memory, in view of a limited supply of back-up energy source 303 to perform the back-up. Arranging the non-volatile DIMMs 350 at the top of memory further supports techniques for improving memory performance such as interleaving etc., and enables the OS to exclude the non-volatile DIMMs 350 for such operational performance enhancements to the volatile DIMMs 340.
In examples, a fixed number of memory slots may be designated/indicated as supporting non-volatile DIMMs 350, and in other examples the memory slots may dynamically support any arrangements of volatile and non-volatile DIMMs 340, 350.
[0030] The back-up energy source 303 may be used to perform back-ups, e.g., in response to a power loss condition. For example, the CPU 302 may receive a power loss signal 306 indicating a power loss. Upon loss of power to the system 300, the back-up energy source 303 may serve as a local finite power source to provide enough energy to continue to allow the CPU's memory controller 310 and DMA engine 330 to transfer the persistent data from the volatile DIMMs 340 to the non-volatile DIMM(s) 350. Thus, the back-up energy source 303 may provide enough energy to power the memory controller 310, the DMA engine 330, the volatile DIMMs 340 (e.g., those associated with the persistent data, which may be a subset of the volatile DIMMs 340), and the non-volatile DIMMs 350 (e.g., those associated with a capacity to store the persistent data to be backed up, which may be a subset of the available non-volatile DIMMs 350). Additional modules (not specifically shown) within the CPU 302 may also be powered as needed, or may share the same power delivery system as the memory controller 310 and/or DMA engine 330. The back-up energy source 303 may be chosen to provide power for a finite period of time, e.g., that amount of time needed to transfer persistent data from volatile DIMMs 340 to a flash component on a non-volatile DIMM 350.
[0031] In an example, the back-up energy source 303 may be integrated with a power delivery circuitry on a system motherboard, and may be provided as a battery, capacitor, and so on. In alternate examples, the back-up energy source 303 may be integrated with a system power supply (not shown). A capacity of the back-up energy source 303 may be chosen based on parameters/considerations including: 1) the total power needs of the CPU 302 and DIMMs 340, 350 during the data transfer process; 2) the finite period of time which back-up power is to be available; 3) the maximum number of non-volatile DIMMs 350 to be supported; 4) any additional signals needed on the DIMM connector interface and/or which memory pins are repurposed, if applicable; and 5) whether a subset of the DIMM slots will support the non-volatile DIMMs 350 (static assignment) or if all slots will support the feature (dynamic). Even though a CPU 302 may have a high energy rating, the given percentage of persistent data to non-persistent data may be relatively small (e.g., as reflected by the relatively low number of non-volatile DIMMs 350). Thus, even though the CPU 302 may need a relatively large amount of energy typically, such needs would only exist for a relatively short time. Also, a small portion of the total CPU 302 may actually be needed to perform the transfer, so the actual CPU power draw is substantially reduced during the example data movement/memory transfers.
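To a first approximation, the sizing considerations above reduce to an energy budget: the power drawn by the still-powered domain (memory controller, DMA engine, and the affected DIMMs) multiplied by the worst-case transfer time, plus margin. The function and all figures below are hypothetical assumptions for illustration.

```python
def required_energy_joules(persistent_bytes: float,
                           transfer_rate_bps: float,
                           domain_power_watts: float,
                           margin: float = 1.5) -> float:
    """Energy the back-up source must hold: power x transfer time, with a
    safety margin to cover worst-case conditions (illustrative model)."""
    transfer_seconds = persistent_bytes / transfer_rate_bps
    return domain_power_watts * transfer_seconds * margin

# Hypothetical figures: 1.6 GiB of persistent data, 1 GiB/s effective
# transfer rate to flash, 30 W for the powered CPU + DIMM domain.
GIB = 2**30
energy = required_energy_joules(1.6 * GIB, 1 * GIB, 30.0)
```

Under these assumptions the transfer takes 1.6 s and the source must hold roughly 72 J, the kind of figure that informs the choice between a battery and a capacitor bank.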
[0032] The metadata engine 320 may be provided as an executable application to keep track of what information is to be treated as persistent data and backed up from the volatile DIMMs 340 to the non-volatile DIMMs 350. The metadata engine 320 may communicate such information to the memory controller 310 (or other software to be performing the back-up), to back up the information identified in the memory locations 328 of the metadata. In an example, the metadata engine 320 may push addresses of the memory locations 328 to be backed up, and even the metadata itself, into the non-volatile DIMM(s) 350.
[0033] The metadata engine 320 may communicate with, be part of, and/or be operated by the BIOS 308 and/or the operating system 309 associated with the system 300. The metadata engine 320 may make calls to the BIOS 308 and/or OS 309. In alternate examples, the metadata engine 320 may interact with the CPU 302 operating in a system management mode (SMM) to run in the background as an out-of-band operation outside of the OS context. Thus, the metadata engine 320 may identify the memory locations 328 corresponding to data to be backed up. Accordingly, the metadata engine 320 may collect information dynamically during system operations, such that the collected metadata information is ready for use in response to a power loss condition as indicated by the power loss signal 306.
[0034] The memory controller 310 may cause the system 300 to operate within the constraints of the back-up energy source 303, by applying an allocation threshold 312 to operation of the system 300. For example, the OS 309 and/or BIOS 308 may employ a "throttle bit" under direction of the memory controller 310 according to the allocation threshold 312. The memory controller 310 may allocate out a certain amount of memory to applications, limited according to available energy capacity of the back-up energy source 303 and power needs of a system to reliably back-up the certain amount of memory that has been allocated. In an example, the system 300 may impose the allocation threshold 312 across the memory map to apply to memory usage altogether. Once the allocation threshold 312 is reached, the memory controller 310 may signal that it will no longer allocate more persistent memory, due to a lack of expected power capacity at the back-up energy source 303 to reliably back-up further allocated memory. Such information may be strictly enforced, or may be used to generate a notification to the user that the allocation threshold 312 has been reached and further allocation may be at risk. A given system 300 may identify how much data is to be backed up, how much back-up energy source 303 capacity is available, and how much time will be needed to back-up the given amount of data, to establish an allocation threshold 312. In examples, the BIOS 308 and/or OS 309 may manage the allocation threshold 312 and its enforcement, as each application requests more persistent memory. Such enforcement may be performed in real-time (i.e., during run time before a power loss condition occurs), to avoid allocation of too much data that would not fit within the window of opportunity provided by the capacity of a given back-up energy source 303.
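One way the allocation threshold 312 could be derived is by working the energy budget backwards: given the capacity of the back-up energy source, compute the maximum amount of persistent memory that can be reliably backed up, and refuse (or warn on) allocations beyond it. The functions, class, and figures below are illustrative assumptions, not taken from the disclosure.

```python
GIB = 2**30

def allocation_threshold_bytes(energy_capacity_j: float,
                               domain_power_watts: float,
                               transfer_rate_bps: float,
                               margin: float = 1.5) -> float:
    """Maximum persistent bytes that fit in the back-up energy window."""
    backup_seconds = energy_capacity_j / (domain_power_watts * margin)
    return backup_seconds * transfer_rate_bps

class PersistentAllocator:
    """Grants persistent-memory requests until the threshold is reached."""
    def __init__(self, threshold_bytes: float):
        self.threshold = threshold_bytes
        self.allocated = 0

    def request(self, nbytes: int) -> bool:
        if self.allocated + nbytes > self.threshold:
            return False  # signal: further allocation cannot be backed up
        self.allocated += nbytes
        return True

# Hypothetical: a 72 J source, 30 W domain, 1 GiB/s transfer -> 1.6 GiB cap.
threshold = allocation_threshold_bytes(72.0, 30.0, 1 * GIB)
alloc = PersistentAllocator(threshold)
```

A softer policy could return a warning instead of refusing outright, matching the notification behavior described above.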
[0035] Referring to Figure 4, a flow diagram is illustrated in accordance with various examples of the present disclosure. The flow diagram represents processes that may be utilized in conjunction with various systems and devices as discussed with reference to the preceding figures. While illustrated in a particular order, the disclosure is not intended to be so limited. Rather, it is expressly contemplated that various processes may occur in different orders and/or simultaneously with other processes than those illustrated.
[0036] FIG. 4 is a flow chart 400 based on copying persistent data according to an example. In block 410, a memory controller engine is to identify, according to metadata, persistent data to be backed up from a plurality of volatile memory modules including non-persistent data that is not to be backed up. For example, in a back-up operation, the memory controller engine may disregard a portion of the data in the volatile memory modules, and back-up the persistent data according to the address locations indicated in the metadata. In block 420, a direct memory access (DMA) engine is to copy the persistent data from the plurality of volatile memory modules to the non-volatile storage, excluding the non-persistent data. For example, by excluding the non-persistent data, the DMA engine may copy a portion of data from a relatively large number of volatile memory modules that happen to contain persistent data, and move that data to a relatively small number of non-volatile memory module(s) that form the non-volatile storage.
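The gather-style copy of block 420 — persistent ranges scattered non-contiguously across many volatile modules, packed contiguously into a smaller non-volatile store while non-persistent data is excluded — can be sketched as follows. The function name and memory layout are hypothetical illustrations, not details from the disclosure.

```python
# Illustrative sketch of the DMA-style back-up copy in block 420:
# only the persistent ranges named in the metadata are copied, packed
# sequentially into the (smaller) non-volatile destination.
# Names and addresses are hypothetical.

def dma_backup(volatile_memory, metadata_ranges):
    """Copy only the persistent ranges, packed contiguously in order."""
    non_volatile = bytearray()
    for start, length in sorted(metadata_ranges):
        non_volatile += volatile_memory[start:start + length]
    return bytes(non_volatile)

# A volatile address space with persistent data at two non-contiguous spots.
mem = bytearray(64)
mem[4:8] = b"AAAA"    # persistent
mem[16:20] = b"XXXX"  # non-persistent: excluded from the back-up
mem[32:36] = b"BBBB"  # persistent

backup = dma_backup(mem, [(32, 4), (4, 4)])
assert backup == b"AAAABBBB"  # packed contiguously, non-persistent data skipped
```

The destination needs only as much capacity as the persistent ranges themselves, which is why fewer non-volatile modules can serve many volatile ones.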
[0038] Examples provided herein may be implemented in hardware, programming, or a combination of both. Example systems can include a processor and memory resources for executing instructions stored in a tangible non-transitory computer-readable medium (e.g., volatile memory, non-volatile memory, and/or other computer-readable media). Non-transitory computer-readable media can be tangible and have computer-readable instructions stored thereon that are executable by a processor to implement examples according to the present disclosure. The term "engine" as used herein may include electronic circuitry for implementing functionality consistent with disclosed examples. For example, memory engine 110 of FIG. 1 may represent combinations of hardware devices and programming to implement the functionality consistent with disclosed implementations. In some examples, the functionality of engines may correspond to operations performed by user actions, such as selecting steps to be executed by processor 202 (described above with respect to FIG. 2).

Claims

WHAT IS CLAIMED IS:
1. A computing system comprising:
a memory engine to identify, according to metadata, persistent data to be backed up from a plurality of volatile memory modules, wherein a given volatile memory module of the plurality of volatile memory modules includes persistent data to be backed up and non-persistent data that is not to be backed up;
wherein the memory engine is to copy the persistent data from the plurality of volatile memory modules to the non-volatile storage, and exclude the non-persistent data.
2. The computing system of claim 1, further comprising a back-up energy source to power the memory engine, the plurality of volatile memory modules, and the non-volatile storage in response to a power loss condition, to enable the memory engine to copy the persistent data during a power loss condition.
3. The computing system of claim 2, wherein the memory engine is to identify an allocation threshold for persistent data memory allocation, corresponding to a capability of the back-up energy source to successfully back-up an amount of the persistent data allocated under the allocation threshold.
4. The computing system of claim 1, wherein the plurality of volatile memory modules and the non-volatile storage are comprised of dual in-line memory modules (DIMMs), and wherein the persistent data is copied to a number of the non-volatile storage DIMM(s) fewer in number than the volatile memory DIMMs from which the persistent data is copied.
5. The computing system of claim 4, further comprising a plurality of DIMM slots to receive the DIMMs, wherein a portion of the DIMM slots are compatible with the non-volatile storage DIMMs, and a portion of the DIMM slots are compatible with the volatile memory DIMMs.
6. The computing system of claim 1, wherein the persistent data identified by the memory engine coexists in a given volatile memory module with the non-persistent data in a single contiguous block, such that a memory space address map of the computing system is presented as a contiguous block including persistent and non-persistent components.
7. The computing system of claim 1, wherein the memory engine is to copy the persistent data from non-contiguous portions of the plurality of volatile memory modules, and write the persistent data to the non-volatile storage in a contiguous sequential manner.
8. The computing system of claim 1, further comprising a metadata engine to track memory locations of the persistent data in response to the persistent data being written to the plurality of volatile memory modules, and store the memory locations in the metadata for locating the persistent data.
9. The computing system of claim 1, wherein the metadata is collected by at least one of the computing system's Basic Input/Output System (BIOS), the computing system's Operating System (OS), and the computing system's central processing unit (CPU) operated in system management mode (SMM).
10. The computing system of claim 9, wherein the CPU, in response to a power loss signal, is to quiesce by entering a low power state, and copy the persistent data from the volatile memory modules to the non-volatile storage.
11. The computing system of claim 1, wherein the metadata is stored in a portion of the non-volatile storage.
12. A method, comprising:
identifying, by a memory engine according to metadata, persistent data to be backed up from a plurality of volatile memory modules, wherein a given volatile memory module of the plurality of volatile memory modules includes persistent data to be backed up and non-persistent data that is not to be backed up; and
copying, by the memory engine, the persistent data from the plurality of volatile memory modules to the non-volatile storage, excluding the non-persistent data.
13. The method of claim 12, further comprising tracking, by a metadata engine, memory locations of the persistent data in response to the persistent data being written to the plurality of volatile memory modules; and storing the memory locations in the metadata for locating the persistent data.
14. A non-transitory machine-readable storage medium encoded with instructions executable by a computing system that, when executed, cause the computing system to:
identify, according to metadata, persistent data to be backed up from a plurality of volatile memory modules, wherein a given volatile memory module of the plurality of volatile memory modules includes persistent data to be backed up and non-persistent data that is not to be backed up; and
copy, by a memory engine, the persistent data from the plurality of volatile memory modules to the non-volatile storage, excluding the non-persistent data.
15. The storage medium of claim 14, further comprising instructions that cause the computing system to identify a power loss condition, and power, by a back-up energy source, the memory engine, the plurality of volatile memory modules, and the non-volatile storage, to enable the memory engine to copy the persistent data.
PCT/US2015/013213 2015-01-28 2015-01-28 Memory module persistent data back-ups WO2016122471A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/013213 WO2016122471A1 (en) 2015-01-28 2015-01-28 Memory module persistent data back-ups


Publications (1)

Publication Number Publication Date
WO2016122471A1 true WO2016122471A1 (en) 2016-08-04

Family

ID=56543900

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/013213 WO2016122471A1 (en) 2015-01-28 2015-01-28 Memory module persistent data back-ups

Country Status (1)

Country Link
WO (1) WO2016122471A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6842823B1 (en) * 2000-04-14 2005-01-11 Stratus Technologies Bermuda Ltd Methods and apparatus for persistent volatile computer memory
US20120059967A1 (en) * 2009-12-21 2012-03-08 Sanmina-Sci Corporation Memory bus architecture for concurrently supporting volatile and non-volatile memory modules
WO2014003764A1 (en) * 2012-06-28 2014-01-03 Hewlett-Packard Development Company, L.P. Memory module with a dual-port buffer
US20140195564A1 (en) * 2010-12-13 2014-07-10 Fusion-Io, Inc. Persistent data structures
US20140215277A1 (en) * 2013-01-29 2014-07-31 International Business Machines Corporation Selective restoration of data from non-volatile storage to volatile memory


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10474550B2 (en) 2017-05-03 2019-11-12 Vmware, Inc. High availability for persistent memory
US10496443B2 (en) * 2017-05-03 2019-12-03 Vmware, Inc. OS/hypervisor-based persistent memory
US11163656B2 (en) 2017-05-03 2021-11-02 Vmware, Inc. High availability for persistent memory
US11422860B2 (en) 2017-05-03 2022-08-23 Vmware, Inc. Optimizing save operations for OS/hypervisor-based persistent memory
US11740983B2 (en) 2017-05-03 2023-08-29 Vmware, Inc. High availability for persistent memory

Similar Documents

Publication Publication Date Title
KR102519904B1 (en) Allocating and configuring persistent memory
US10802984B2 (en) Techniques for persistent memory virtualization
KR102137761B1 (en) Heterogeneous unified memory section and method for manaing extended unified memory space thereof
US9417794B2 (en) Including performance-related hints in requests to composite memory
US8650379B2 (en) Data processing method for nonvolatile memory system
US8671241B2 (en) Systems and methods for using reserved solid state nonvolatile memory storage capacity for system reduced power state
US9317214B2 (en) Operating a memory management controller
US11609767B2 (en) Technologies for operating system transitions in multiple-operating-system environments
US11422860B2 (en) Optimizing save operations for OS/hypervisor-based persistent memory
KR102434170B1 (en) hybrid memory system
US10705747B2 (en) Latency-based storage in a hybrid memory system
US20180024768A1 (en) Partitioning memory modules into volatile and non-volatile portions
EP3251006A1 (en) Backup image restore
US8433873B2 (en) Disposition instructions for extended access commands
CN108694101B (en) Persistent caching of memory-side cache contents
EP3769202B1 (en) Hybrid memory system
US20190324868A1 (en) Backup portion of persistent memory
WO2016122471A1 (en) Memory module persistent data back-ups
CN110727470A (en) Hybrid non-volatile storage device
US11221766B2 (en) System and method for persistent memory rotation based on remaining write endurance
KR20110052902A (en) Computing system and method for controling memory of computing system
US20160077747A1 (en) Efficient combination of storage devices for maintaining metadata
US11023139B2 (en) System for speculative block IO aggregation to reduce uneven wearing of SCMs in virtualized compute node by offloading intensive block IOs
US10795771B2 (en) Information handling system with reduced data loss in block mode
US10346193B1 (en) Efficient placement of virtual machines based on cache hit ratio

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15880373

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15880373

Country of ref document: EP

Kind code of ref document: A1