WO2014209394A1 - Fault tolerance for persistent main memory - Google Patents

Fault tolerance for persistent main memory

Info

Publication number
WO2014209394A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
memory
persistent main
main memory
persistent
Prior art date
Application number
PCT/US2013/048759
Other languages
French (fr)
Inventor
Gregg B. Lesartre
Dale C. Morris
Gary Gostin
Russ W. Herrell
Andrew R. Wheeler
Blaine D. Gaither
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P.
Priority to EP13887761.8A (published as EP3014448A4)
Priority to CN201380077638.9A (published as CN105308574A)
Priority to PCT/US2013/048759 (published as WO2014209394A1)
Priority to US14/901,559 (published as US10452498B2)
Publication of WO2014209394A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/1666Error detection or correction of the data by redundancy in hardware where the redundant component is memory or memory area
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1004Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's to protect a block of data words, e.g. CRC or checksum
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1076Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F11/108Parity data distribution in semiconductor storages, e.g. in SSD
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2069Management of state, configuration or failover
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/805Real-time

Definitions

  • When memory controller 110 in a system employing RAID 1 receives a request to read data, the memory controller reads the data from local persistent main memory 108. If the data in persistent main memory 108 has a locally-uncorrectable error, or the local persistent main memory is inaccessible, the memory controller 110 accesses the redundant data in the persistent main memory 108 of the second computing node 102. In the event of such a failure, the memory controller 110 can identify the memory location of the inaccessible data. A memory location can be in a memory address space. In some embodiments, a table of memory address spaces can be built to expedite the identification of memory locations corresponding to inaccessible data. The table of memory address spaces can be built by the memory controller 110.
  • It is to be understood that the block diagram of Fig. 1 is not intended to indicate that computing system 100 is to include all of the components shown in Fig. 1 in every case. Further, any number of additional components may be included within computing system 100, depending on the details of the specific implementation.
  • Fig. 2 is a block diagram of a computing system 200 including fault tolerance.
  • computing system 200 is a server cluster.
  • the computing system 200 includes a number of nodes, such as computing node 202.
  • computing system 200 may also include a number of memories, such as persistent main memory 210.
  • a persistent main memory 210 is a collection of memory, such as a collection of memory devices, for storing a large amount of data.
  • the computing nodes 202 and persistent main memories 210 are communicably coupled to each other through a network 204.
  • the computing system 200 can include several compute nodes, such as several tens or even thousands of compute nodes, and several persistent main memories.
  • the compute nodes 202 include a Central Processing Unit (CPU) 206 to execute stored instructions.
  • the CPU 206 can be a single core processor, a multi- core processor, or any other suitable processor.
  • a compute node 202 includes a single CPU.
  • a compute node 202 includes multiple CPUs, such as two CPUs, three CPUs, or more.
  • Compute node 202 further includes a memory controller 208.
  • the memory controller 208 communicates with persistent main memories 210 and controls access to persistent main memories 210 by the CPU 206.
  • the memory controller 208 is a RAID memory controller.
  • When the memory controller 208 receives a request to write to persistent main memory 210, the memory controller 208 generates a write transaction to the selected persistent main memory 210.
  • Each compute node 202 can be communicably coupled with the persistent main memories 210.
  • the persistent main memory 210 can be volatile dynamic random access memory (DRAM) with battery backup, non-volatile phase change random access memory (PCRAM), spin transfer torque-magnetoresistive random access memory (STT-MRAM), resistive random access memory (reRAM), memristor, FLASH, or other types of memory devices.
  • the persistent main memory 210 is solid state, persistent, dense, fast memory. Fast memory can be memory having an access time similar to DRAM memory.
  • Persistent main memory 210 is remote to computing nodes 202 and is accessed via a network 204, such as a communication fabric.
  • the persistent main memories 210 can be combined to form a pool of nonvolatile, persistent main memory. Regions of the pool of non-volatile, persistent main memory are allocated to computing nodes 202 within computing system 200. In the event of a failure of a particular computing node 202, the region of the pool of non-volatile, persistent main memory allocated to the failed computing node may be reallocated to a functioning computing node 202. In this way, access to the data in the non-volatile memory region is not lost when the computing node fails. In a further example, regions of the pool of non-volatile, persistent main memory can be accessed by additional computing nodes 202.
  • the computing system 200 includes a single persistent memory.
  • the persistent memory is divided into regions (ranges of memory address spaces). Each region is assigned to a computing node to act as persistent main memory 210 for the computing node 102. Each region can also be accessed by additional computing nodes. In the event of a failure of the assigned computing node, the region of memory can be reassigned to a functioning computing node.
  • Computing system 200 also includes remote memory 212.
  • Remote memory is a persistent main memory, such as persistent main memory 210, which has been designated by the system 200 to act as remote memory 212.
  • multiple persistent main memories can be designated to act as remote memories 212.
  • Remote memory 212 is communicably coupled to the computing nodes through a network 204, such as a communication fabric.
  • Remote memory 212 includes redundant data 214.
  • Remote memory 212 provides a fault tolerance capability (i.e., providing a system and/or method of data recovery in order to ensure data integrity) to persistent main memory(s) 210 via redundant data 214.
  • the redundant data stored by the remote memory(s) 212 can be accessed by the computing nodes 102.
  • the redundant data stored by the remote memory(s) 212 can also be accessed by additional computing nodes, such as in the event of a failure of a computing node or data corruption.
  • persistent main memories 210 and remote memory 212 are organized as a RAID 1 group.
  • Remote memory 212 is used to hold data controlled by memory controllers 208.
  • memory controller 208 is a RAID controller.
  • Each memory controller 208 independently performs all RAID calculations for the region of the pool of non-volatile, persistent main memory allocated to the particular memory controller 208.
  • the memory controller 208 can include a data recovery engine configured to rebuild data after an access failure renders the data inaccessible.
  • the data recovery engine includes a combination of hardware and programming.
  • the data recovery engine can include a non-transitory, computer-readable medium for storing instructions, one or more processors for executing the instructions, or a combination thereof.
  • Upon receiving a system write transaction from CPU 206, memory controller 208 writes the new data to the selected persistent main memory 210. In addition, the memory controller 208 sends a command to update the redundant data 214 stored in remote memory 212. In an example, the redundant data 214 is written with a copy of the new data at the same time that the new data is written to persistent main memory 210. When all memory writes have been acknowledged by persistent main memory 210 and remote memory 212, memory controller 208 sends a response to the system write transaction to CPU 206 indicating that the write has been completed and made durable.
  • computing system 200 includes a single remote memory 212. In another example, computing system 200 includes multiple remote memories 212.
  • When a memory controller 208 in computing node 202 of a system 200 employing a RAID protocol receives a request from CPU 206 to read data, the memory controller 208 reads the data from the selected persistent main memory 210. If the data in the persistent main memory 210 has a locally-uncorrectable error, or the persistent main memory is inaccessible, the memory controller 208 accesses remote memory 212 to read the redundant data 214. In the event of such a failure, the memory controller 208 can identify the memory location of the inaccessible data.
  • a memory location can be a memory address space.
  • a table of memory address spaces can be built to expedite the identification of memory locations corresponding to inaccessible data. The table of memory address spaces can be built by the memory controller 208.
  • the memory controller 208 regenerates the data, such as with the data recovery engine, from redundant data 214 and sends the regenerated data to CPU 206, completing the read request. If the selected persistent main memory 210 is available, the regenerated data is also saved to the selected persistent main memory 210. If the selected persistent main memory 210 is not available, the regenerated data can be saved to an additional persistent main memory. The regenerated data can then be accessed by the computing nodes 202.
  • RAID 5 across persistent main memories 210 is employed to ensure data integrity.
  • parity is distributed across all persistent main memories 210, rather than placed solely in remote memory 212, in order to maximize available memory bandwidth.
  • memory controller 208 is a RAID controller.
  • the RAID controller can include a data recovery engine configured to regenerate data after a hardware failure renders the data inaccessible.
  • Upon receiving a request from CPU 206 to write data to memory, memory controller 208 reads the old data from the persistent main memory 210 selected by the write address, and performs an exclusive or (XOR) operation using the old data and the new write data. The memory controller 208 then writes the new data to the persistent main memory 210 selected by the write address. Additionally, the memory controller 208 sends a write delta (write changes) command with the results of the XOR operation to a second persistent main memory 210. The second persistent main memory is selected based on the write address, and contains the RAID 5 parity data for the specified write address (an address-to-device placement sketch appears after this list). The second persistent main memory 210 will read the RAID 5 parity data, XOR it with the write delta data, and write the result back into the RAID 5 parity data.
  • the RAID 5 parity data is updated at the same time that the data is written to persistent main memory 210.
  • memory controller 208 sends a response to the system write transaction to CPU 206 indicating that the write has been completed and made durable.
  • computing system 200 includes multiple persistent main memories 210, so as to reduce the memory capacity overhead of RAID 5.
  • When memory controller 208 in a system employing RAID 5 receives a request from CPU 206 to read data, the memory controller attempts to read the data from a first persistent main memory 210 selected by the read address. If the data in the first persistent main memory 210 has a locally-uncorrectable error, or the persistent main memory is inaccessible, the memory controller 208 will read the RAID 5 parity associated with the uncorrectable data, and read all other data associated with the RAID 5 parity from other persistent main memories 210. The memory controller 208 will then compute the XOR of the RAID 5 parity with all associated data to regenerate the uncorrectable local data. The regenerated data is returned to CPU 206.
  • If the first persistent main memory 210 is available, the regenerated data is saved to the first persistent main memory 210. If the first persistent main memory 210 is not available, the regenerated data can be saved to an alternate persistent main memory. The regenerated data can then be accessed from the alternate persistent main memory by any of the computing nodes 202.
  • the computing system 200 can be adapted to employ other standard RAID levels, i.e., RAID level 2, 3, 4, or 6. Depending on the RAID level employed, remote memory 212 may or may not be used.
  • It is to be understood that the block diagram of Fig. 2 is not intended to indicate that computing system 200 is to include all of the components shown in Fig. 2 in every case. Further, any number of additional components may be included within computing system 200, depending on the details of the specific implementation.
  • Fig. 3 is a process flow diagram of a method 300 of writing data to memory with fault tolerance.
  • a request is received in a memory controller to write data to a non-volatile, persistent main memory.
  • the memory address space is an address space of non-volatile, persistent main memory, such as persistent main memory 108 or 210, of a computing system.
  • the memory controller resides in a computing node, such as computing node 102 or 202.
  • the request originates from a processor, such as CPU 106 or 206.
  • the data is written to the persistent main memory.
  • the persistent main memory resides in a computing node.
  • the persistent main memory is remote from a computing node and forms a pool of persistent main memory, and the data is written to a region of the pool of nonvolatile, persistent main memory allocated to the computing node.
  • redundant data is written to remote memory.
  • the redundant data is a mirrored copy of the data.
  • the redundant data is written with the result of calculating an XOR of the new write data, the old data and the old RAID parity.
  • In an example, blocks 304 and 306 occur simultaneously.
  • Fig. 4 is a process flow diagram of a method 400 of accessing memory with fault tolerance.
  • a request is received in a memory controller to access data stored in a non-volatile, persistent main memory.
  • the memory controller resides in a computing node, such as computing node 102 or 202.
  • the request originates from a processor, such as CPU 106 or 206.
  • the memory controller attempts to access the non-volatile, persistent main memory.
  • the memory controller initiates access to the address space of the non-volatile, persistent main memory in which the data resides.
  • the memory controller determines if it is able to access the persistent main memory by either receiving the requested data, or receiving an error message or having the access time out. If the memory controller receives the requested data at block 408, then at block 410 the memory controller returns data to the CPU.
  • If the memory controller is not able to access the persistent main memory, notice of a failure to access the persistent main memory is received.
  • the failure is due to a failed non-volatile, persistent main memory.
  • a non-volatile persistent main memory failure can be due to a damaged memory device, a failed persistent main memory module, corrupted data, or any other failure.
  • redundant data is accessed. Redundant data can be stored on remote memory, on other persistent main memories, on additional computing nodes, or on a combination of remote memory, persistent main memories and additional computing nodes.
  • the redundant data is a mirrored copy of the data produced when the system is in mirroring mode.
  • the redundant data is a combination of parity data produced when the system is in RAID mode, and other data associated with the parity data.
  • the data is reconstructed from the redundant data. In RAID mode, the data can be reconstructed using a data recovery engine.
  • the memory controller determines if the non-volatile, persistent main memory failed. If the memory controller determines that the nonvolatile, persistent main memory has not failed, at block 422 the reconstructed data can be saved to the persistent main memory. At block 410, data can be returned to the CPU. If the memory controller determines that the non-volatile, persistent main memory has failed, method 400 can proceed to block 424. At block 424, the memory controller determines if alternate persistent main memory exists. If alternate persistent main memory does not exist, data is returned to the CPU at block 410. If alternate persistent main memory does exist, at block 426 the reconstructed data can be saved to an alternate persistent main memory. In an example, the alternate persistent main memory can be a remote memory. In another example, the alternate persistent main memory can be a persistent main memory of another computing node. At block 410, data is returned to the CPU.
  • a computing system includes a processor and a persistent main memory including a fault tolerance capability.
  • the computing system also includes a memory controller.
  • the memory controller is to store data in the persistent main memory, create redundant data, and store the redundant data remotely with respect to the persistent main memory.
  • the memory controller is further to access the redundant data during failure of the persistent main memory.
  • the fault tolerance capability can include storing data to the persistent main memory and mirroring the data to a remote persistent main memory.
  • the fault tolerance capability can also include storing data to the persistent main memory and updating at least one parity remote memory based on a difference between old and new data.
  • the computing system can also include a plurality of computing nodes and a plurality of persistent main memories, each computing node including a processor and a memory controller to communicate with the persistent main memories in response to a request by the processor, wherein the persistent main memories reside remotely to the computing nodes, the persistent main memories including a pool of shared persistent main memory, regions of which are allocated to each computing node.
  • the computing system includes a persistent main memory for storing data and a remote persistent main memory for storing redundant data.
  • the computing system also includes a memory controller. The memory controller accesses the redundant data when the data in the persistent main memory cannot be accessed.
  • the redundant data can include a mirrored copy of the data.
  • the computing system can include a plurality of nodes, each node including a persistent main memory for storing data, and a remote persistent memory for storing redundant data, the remote persistent memory acting as a parity node, including a check sum of parity bits of data in the persistent main memories of the nodes of the computing system, wherein the check sum is updated when data is written to the persistent main memory of a node, and wherein upon failure to read the data, the check sum and all other data values contributing to the check sum are read and combined to reconstruct lost data.
  • the computing system can include a plurality of computing nodes and a plurality of persistent main memories, each computing node including a processor and a memory controller to communicate with the persistent main memories in response to a request by the processor.
  • the plurality of persistent main memories can include a first persistent main memory selected by address for storing data, a second persistent main memory selected by address for storing a mirrored copy of the data, wherein data written to the first persistent main memory is also written to the second persistent main memory and wherein upon failure to read the data, the mirrored copy of the data is read.
  • the computing system can include a plurality of computing nodes and a plurality of persistent main memories, each computing node including a processor and a memory controller to communicate with the persistent main memories in response to a request by the processor.
  • the plurality of persistent main memories including a first persistent main memory selected by address for storing data and a second persistent main memory selected by address for storing a check sum associated with the data, wherein the check sum in the second persistent main memory is updated based on a difference between old data and new data stored in the first persistent main memory and wherein upon failure to read the data, the check sum and all other data values contributing to the check sum are read and combined to reconstruct lost data.
  • the persistent main memories can include a pool of shared persistent main memory, regions of which are allocated to each computing node.
  • In the event of a failure of a computing node, a functional computing node accesses the region of the pool of shared persistent main memory allocated to the failed computing node, so that data in the region of the pool of shared persistent memory is always available.
  • a method is described herein.
  • the method includes receiving, in a memory controller, a request to write data to a persistent main memory of a computing system.
  • the method also includes writing the data to the persistent main memory.
  • the method further includes writing redundant data to a persistent memory remote to the persistent main memory.
  • the method can further include simultaneously writing the data to the persistent main memory and writing the redundant data to the persistent memory remote to the persistent main memory.
  • Writing redundant data can include updating a check sum of parity bits of data in the computing system, wherein the data in the computing system is stored on persistent main memory of at least two nodes.
  • the method can further include receiving, in a memory controller, a request to access data in the persistent main memory, attempting to access the persistent main memory, receiving notice of a failure of the persistent main memory, reconstructing the data from the redundant data, saving the data, and returning the data to complete the request.
  • The present examples may be susceptible to various modifications and alternative forms and have been shown only for illustrative purposes.
  • The present techniques support both reading and writing operations to a data structure cache.
  • The present techniques are not intended to be limited to the particular examples disclosed herein. Indeed, the scope of the appended claims is deemed to include all alternatives, modifications, and equivalents that are apparent to persons skilled in the art to which the disclosed subject matter pertains.
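
The RAID 5 write path described in the list above selects both the data-holding and the parity-holding persistent main memory from the write address, so that parity rotates across the memories instead of residing in a single device. The C sketch below shows one possible address-to-device placement; the stripe geometry, the four-device count, and the select_devices helper are illustrative assumptions rather than details taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE_BYTES 64u
#define NUM_PMEMS        4u   /* illustrative: data and rotating parity share these devices */

/* For each stripe, one persistent main memory holds the RAID 5 parity and the
 * others hold data; the parity position rotates with the stripe number. */
static void select_devices(uint64_t addr, unsigned *data_dev, unsigned *parity_dev)
{
    uint64_t line   = addr / CACHE_LINE_BYTES;
    uint64_t stripe = line / (NUM_PMEMS - 1);          /* NUM_PMEMS-1 data lines per stripe */
    unsigned slot   = (unsigned)(line % (NUM_PMEMS - 1));

    *parity_dev = (unsigned)(stripe % NUM_PMEMS);
    *data_dev   = (slot + *parity_dev + 1) % NUM_PMEMS; /* skip over the parity device */
}

int main(void)
{
    for (uint64_t addr = 0; addr < 8 * CACHE_LINE_BYTES; addr += CACHE_LINE_BYTES) {
        unsigned d, p;
        select_devices(addr, &d, &p);
        printf("line %llu -> data on pmem %u, RAID 5 parity on pmem %u\n",
               (unsigned long long)(addr / CACHE_LINE_BYTES), d, p);
    }
    return 0;
}
```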

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Hardware Redundancy (AREA)

Abstract

A computing system can include a processor and a persistent main memory including a fault tolerance capability. The computing system can also include a memory controller to store data in the persistent main memory and create redundant data. The memory controller can also store the redundant data remotely with respect to the persistent main memory. The memory controller can further access the redundant data during failure of the persistent main memory.

Description

FAULT TOLERANCE FOR PERSISTENT MAIN MEMORY
BACKGROUND
[0001] Current data storage devices often include a fault tolerance to ensure that data is not lost in the event of a device error or failure. An example of a fault tolerance provided to current data storage devices is a redundant array of
independent disks. A redundant array of independent disks (RAID) is a storage technology that controls multiple disk drives and provides fault tolerance by storing data with redundancy. RAID technology can store data with redundancy in a variety of ways. Examples of redundant data storage methods include duplicating data and storing the data in multiple locations and adding bits to store calculated error recovery bits.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Certain examples are described in the following detailed description and in reference to the drawings, in which:
[0003] Fig. 1 is a block diagram of an example of a computing system including fault tolerance;
[0004] Fig. 2 is a block diagram of an example of a computing system including fault tolerance;
[0005] Fig. 3 is a process flow diagram of an example of a method of writing data to memory with fault tolerance;
[0006] Fig. 4 is a process flow diagram of an example of a method of accessing memory with fault tolerance.
DETAILED DESCRIPTION OF SPECIFIC EXAMPLES
[0007] Techniques described herein relate generally to redundant data storage. More specifically, techniques described herein relate to redundant data storage in persistent main memory. Main memory is primary storage that is directly or indirectly connected to a central processing unit (CPU) and is directly accessible to the CPU. For redundancy of data stored to a disk, current systems provide a storage array controller that intercepts a block store and its associated data and stores the data redundantly across a number of disk devices to ensure that the data can be recovered in the event of a failure of one or more devices. One approach is to calculate and store error recovery bits, such as RAID 5 parity, such that any data lost in a failing device can be recreated from the error recovery bits and data from the non-failing devices. Another approach is to duplicate the data and store the data in multiple locations, such as via RAID 1 technology. A variety of RAID levels and algorithms can be used to provide the desired level of protection. Other systems have attached multiple channels of disk storage to a system. Duplication or RAIDing of the data is then handled within the storage management software for the system. When blocks of data are to be committed to storage, software initiates multiple transfers to multiple disks in order to implement the desired RAID algorithms.
[0008] These current systems for providing redundancy to data stored to a disk work for block storage devices when blocks transferred to a disk are managed by paging software and IO DMA mechanics. However, these methods are not well suited to directly accessed storage, such as persistent main memory, which gains performance benefit not only from its short access latency, but also from the absence of block-transfer IO handlers. Directly accessed storage also gains performance benefit from the efficiencies of moving only data that is actually modified or requested, rather than the entire block.
[0009] When storage is written with relied-upon data, whether it is in traditional devices such as spinning hard disks, solid state disks, or in the direct access model, data integrity is preferably preserved. With slower disk devices, this data integrity is managed when blocks of data are transferred between non-persistent memory and disk. When blocks of data are transferred, software, or attached storage arrays, provide for the scattering of the data across multiple devices. In this model, a failure of one such device will cause the loss of a portion of the stored bits, but the complete data is recoverable from the remaining devices.
[0010] New system architectures take advantage of dense, persistent, low latency memory devices to provide for large storage arrays accessed directly by a processor and cached in a processor's caches. New solid state persistent memory devices with densities like flash memory and access times like DRAM memories allow the design of systems that treat this memory as storage, but access it as memory, i.e., through direct memory access, allowing the solid state persistent memory devices to be used as persistent main memory. To protect data stored in this persistent main memory, capabilities are integrated into the paths to access this memory which, in addition to routing write requests to memory, also route the data in a mirrored or RAID fashion to multiple storage locations in separate persistent memory devices. This routing ensures data recovery in the case of a persistent memory device failure while maintaining current programming paradigms.
[0011] By adding fault tolerance functionality in the main memory access path that operates on small units of data, for example individual cache lines, at main memory access speeds, this type of data protection (e.g., data duplication or RAIDing) can be extended to direct memory access, such as persistent main memory, without awareness of the protection mechanism at a software application level. Current redundancy solutions only move data to disk when software is ready to commit that data. The system must then wait for this operation to complete, including the time required to write full RAID data to multiple devices before proceeding. In the present invention, storage commit can be completed faster by performing the RAID updates to persistent main memory as the individual cache line writes occur. Further, by spreading cache lines of a memory page across multiple persistent main memories, RAID operations for multiple cache lines can be processed in parallel. By processing the RAID operations in parallel, the time to complete transactions is reduced and demand on the system is balanced. The overall result is a faster, more efficient distribution of protected data across storage devices, from a power and data-movement perspective.
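
As a rough illustration of the cache-line spreading mentioned above, the following C sketch interleaves consecutive cache lines across a small set of persistent main memories so that writes to different lines can drive their RAID updates in parallel. The 64-byte line size, the four-device count, and the map_cache_line helper are illustrative assumptions, not details taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative parameters only; the patent does not fix these values. */
#define CACHE_LINE_BYTES 64u
#define NUM_PMEM_DEVICES 4u

/* Map a physical address to (device, offset) so that consecutive cache lines
 * land on different persistent main memories; RAID updates for different
 * lines can then proceed in parallel. */
static void map_cache_line(uint64_t addr, unsigned *device, uint64_t *offset)
{
    uint64_t line = addr / CACHE_LINE_BYTES;
    *device = (unsigned)(line % NUM_PMEM_DEVICES);
    *offset = (line / NUM_PMEM_DEVICES) * CACHE_LINE_BYTES
              + (addr % CACHE_LINE_BYTES);
}

int main(void)
{
    for (uint64_t addr = 0; addr < 8 * CACHE_LINE_BYTES; addr += CACHE_LINE_BYTES) {
        unsigned dev;
        uint64_t off;
        map_cache_line(addr, &dev, &off);
        printf("addr 0x%04llx -> pmem %u, offset 0x%04llx\n",
               (unsigned long long)addr, dev, (unsigned long long)off);
    }
    return 0;
}
```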
[0012] Fig. 1 is a block diagram of a computing system 100 including fault tolerance. In an example, computing system 100 is a server cluster. The computing system 100 includes a number of nodes, such as computing nodes 102. In a further example, computing system 100 may also include a memory node, such as remote memory node 112, or multiple memory nodes. A memory node is a collection of memory, such as a collection of memory devices, for storing a large amount of data. The nodes 102 are communicably coupled to each other through a network 104, such as a server cluster fabric. The computing system 100 can include several compute nodes, such as several tens or even thousands of compute nodes.
[0013] The compute nodes 102 include a Central Processing Unit (CPU) 106 to execute stored instructions. The CPU 106 can be a single core processor, a multi-core processor, or any other suitable processor. In an example, compute node 102 includes a single CPU. In another example, compute node 102 includes multiple CPUs, such as two CPUs, three CPUs, or more.
[0014] The compute node 102 includes a persistent main memory 108. The persistent main memory 108 can include volatile dynamic random access memory (DRAM) with battery backup, non-volatile phase change random access memory (PCRAM), spin transfer torque-magnetoresistive random access memory (STT-MRAM), resistive random access memory (reRAM), memristor, FLASH, or other types of memory devices. For example, the persistent main memory 108 can be solid state, persistent, dense, fast memory. Fast memory can be memory having an access time similar to DRAM memory.
[0015] Compute node 102 further includes a memory controller 110. The memory controller 110 communicates with the persistent main memory 108 and controls access to the persistent main memory 108 by the CPU 106. Persistent memory is non-volatile storage, such as storage on a storage device. In an example, the memory controller 110 can be a RAID memory controller. When the memory controller 110 receives an access request for persistent main memory 108, the memory controller 110 generates transactions to the local persistent main memory 108.
[0016] Computing system 100 also includes remote memory 112. Remote memory 112 can be persistent memory, and may be identical to persistent main memory 108. Remote memory 112 is communicably coupled to the computing nodes through a network 104, such as a server cluster fabric. Remote memory is remote and separate from persistent main memory 108. For example, remote memory 112 can be physically separate from persistent main memory 108. In an example, remote memory can be persistent memory divided into regions or ranges of memory address spaces. Each region can be assigned to a computing node 102. Each region can additionally be accessed by computing nodes other than the assigned computing node. In the event of a failure of the assigned computing node, another computing node can access the region or the region can be reassigned in order to preserve access to the data in remote memory.
[0017] Remote memory 112 includes redundant data 114. Remote memory 112 acts as a fault tolerance capability (i.e., providing a system and/or method of data recovery in order to ensure data integrity) to persistent main memory 108 via redundant data 114. When a memory controller 110 receives a write operation to persistent main memory configured for redundant storage to ensure the integrity of the data, the memory controller 110 will generate a transaction to the remote memory 112, resulting in generation and storage of redundant data 114, at the same time as the memory controller 110 writes to the persistent main memory 108. By storing redundant data 114 to the remote memory 112, the data is effectively spread across multiple devices such that the data can be recovered when a device, or even multiple devices, fails.
[0018] In an embodiment employing RAID 1, the write data is duplicated, or mirrored, to produce an identical copy of the data. In this mirroring mode, the data is written to local persistent main memory 108. An identical copy of the data is written to remote memory 112, becoming redundant data 114. In mirroring mode, memory controller 110 accesses persistent main memory 108 in response to requests by CPU 106. In mirroring mode, remote memory 112 is large enough to store copies of all data stored in persistent main memory 108 of all computing nodes 102 in computing system 100. In the example illustrated by Fig. 1, remote memory 112 is at least three times larger than persistent main memory 108, in order to have the capacity to store copies of all data stored in persistent main memory 108.
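
A minimal sketch of the mirroring-mode write path described in paragraph [0018], assuming the memory controller exposes one transaction toward local persistent main memory and one toward remote memory; the write_local and write_remote helpers and the buffer sizes are hypothetical stand-ins, not the patent's interfaces. The controller acknowledges the store only after both copies complete.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define CACHE_LINE_BYTES 64

/* Hypothetical backing stores standing in for persistent main memory 108
 * and remote memory 112 (sizes are arbitrary for the sketch). */
static unsigned char local_pmem[16 * CACHE_LINE_BYTES];
static unsigned char remote_mirror[16 * CACHE_LINE_BYTES];

/* Placeholder transactions; a real controller would issue these over the
 * memory channel and the cluster fabric respectively. */
static bool write_local(size_t off, const unsigned char *line)
{
    memcpy(&local_pmem[off], line, CACHE_LINE_BYTES);
    return true;
}

static bool write_remote(size_t off, const unsigned char *line)
{
    memcpy(&remote_mirror[off], line, CACHE_LINE_BYTES);
    return true;
}

/* Mirroring-mode store: issue both writes, acknowledge the CPU only once
 * both the local copy and the remote redundant copy are durable. */
bool mirrored_store(size_t off, const unsigned char *line)
{
    bool ok_local  = write_local(off, line);
    bool ok_remote = write_remote(off, line);
    return ok_local && ok_remote;
}
```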
[0019] However, if the persistent main memory 108 of a computing node 102 becomes inaccessible, such as because of a failure of persistent main memory 108, the redundant data 114 is accessed. Standard error-detection or error-correction codes, such as parity or error-correcting codes (ECC), can be used to detect when a locally uncorrectable error, i.e., an error in persistent main memory 108 that cannot be corrected with only the error correction code (ECC) of the persistent main memory 108, has occurred in the data read from persistent main memory 108. The error-detection/-correction codes signal memory controller 110 to read the redundant data 114 from the remote memory 112. The redundant data 114 from the remote memory 112 is provided to CPU 106 to satisfy the access request, and copied to persistent main memory 108, when persistent main memory 108 is functional, or used to create an alternate redundant copy of the data in a new location when persistent main memory 108 is no longer functional. The redundant data 114 can also be accessed, such as by another computing node 102, in the event of a failure of the local computing node 102. In that event, the new computing node 102 will be able to access the redundant data 114 and continue where the failed node 102 left off. In addition, in the event that the copy of the data stored to persistent main memory 108 becomes corrupted, the redundant data 114 can be accessed and a copy of the original data saved to persistent main memory 108. In another example, another computing node 102 can access a copy of the data from remote memory 112, without involving the originating computing node 102.
[0020] Persistent main memory 108 is generally accessed, such as in a read transaction, in order to service requests at the lowest latency. As a result, remote memory 112 is rarely accessed. An implementation may choose to occasionally access remote memory 112 to confirm that remote memory 112 remains accessible and able to provide correct data. By confirming the accessibility of remote memory 112, the integrity of the redundant data 114 is ensured. In an example, memory accesses, such as read requests, can occasionally be serviced by accessing the redundant data 114 of remote memory 112 rather than persistent main memory 108. By occasionally servicing a memory access from remote memory 112, the system 100 can verify that remote memory 112 and redundant data 114 have not failed.
[0021] Memory controllers 110 often scrub stored data in order to detect and correct any soft errors that may have occurred during a period of infrequent access. In an example, scrubbing of redundant data 114 in remote memory 112 is supported by memory controller 110. In another example, remote memory 112 provides scrubbing of redundant data 114 without involving memory controller 110.
[0022] In another embodiment, RAID 4 across computing nodes 102 is employed to ensure data integrity. Remote memory 112 is used to hold RAID parity controlled by memory controller 110. In this RAID mode, memory controller 110 is a RAID controller. The memory controller 110 can include a data recovery engine configured to rebuild data after an access failure renders the data inaccessible. The data recovery engine includes a combination of hardware and programming. For example, the data recovery engine can include a non-transitory, computer-readable medium for storing instructions, one or more processors for executing the instructions, or a combination thereof.
[0023] Upon receiving a request to write data to memory, memory controller 110 writes the new data to persistent main memory 108. In addition, the memory controller 110 sends a command to update the redundant data 114 on remote memory 112. For example, the memory controller 110 performs an exclusive or (XOR) operation on the incoming write data with the old data in persistent main memory 108. The memory controller 110 then sends a write delta (write changes) command with the results of the XOR operation to remote memory 112. Remote memory 112 will read the redundant data 114, which consists of RAID 4 parity, XOR it with the write delta data, and write the result back into redundant data 114. In an example, the redundant data 114 is updated at the same time that the data is written to persistent main memory 108. When all memory writes have been completed by persistent main memory 108 and remote memory 112, memory controller 110 sends a response to the system write transaction to CPU 106 indicating that the write has been completed and made durable. In an example, computing system 100 includes a single remote memory 112. In another example, computing system 100 includes multiple remote memories 112.
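The write-delta update can be pictured with the short sketch below; the byte-string lines, dictionary stores, and helper names are assumptions made for illustration. Starting from all-zero contents, the parity store always equals the XOR of every node's data at the same address.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid4_write(local: dict, parity: dict, address: int, new_data: bytes) -> None:
    """Write locally and send only the XOR delta to the remote parity memory."""
    old_data = local.get(address, bytes(len(new_data)))
    delta = xor(old_data, new_data)                       # write delta = old XOR new
    local[address] = new_data                             # update persistent main memory
    old_parity = parity.get(address, bytes(len(new_data)))
    parity[address] = xor(old_parity, delta)              # remote memory: parity ^= delta
```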
[0024] When memory controller 110 in a system employing a RAID 4 level receives a request to read data, the memory controller attempts to read the data from local persistent main memory 108. If the data in persistent main memory 108 has a locally-uncorrectable error, or the local persistent main memory is inaccessible, the memory controller 110 will read the RAID 4 parity from redundant data 114, read data associated with the RAID 4 parity from the persistent main memories 108 of all other computing nodes 102, and then XOR the RAID 4 parity with all associated data to regenerate the uncorrectable local data. The regenerated data is returned to CPU 106. If the local persistent main memory 108 is available, the regenerated data is saved to persistent main memory 108. If the persistent main memory 108 is not available, the regenerated data can be saved to the persistent main memory of another computing node. In a further example, the regenerated data can be saved to remote memory 112. The regenerated data can then be accessed by the original computing node with non-available persistent main memory 108, or by a replacement computing node.
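The reconstruction step amounts to XOR-ing the parity with the corresponding line from every surviving node, as in the sketch below (same illustrative assumptions as the previous example).

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid4_reconstruct(parity: dict, peer_memories: list, address: int, line_size: int = 64) -> bytes:
    """Regenerate a lost line from the RAID 4 parity plus the same line on all other nodes."""
    result = parity.get(address, bytes(line_size))
    for peer in peer_memories:
        result = xor(result, peer.get(address, bytes(line_size)))
    return result   # equals the failed node's data under the RAID 4 parity invariant
```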
[0025] In the event of a failure of the computing node 102, a replacement computing node 102 accesses the redundant data 114 on remote memory 112, and copies the data to the local persistent main memory 108 of the replacement computing node.
[0026] In a further example employing RAID 1, a second computing node acts as remote memory containing redundant data for the first computing node 102. The second computing node includes a memory controller coupled to persistent main memory. The memory controller 110 on the first computing node 102 receives a request from CPU 106 to write data to local persistent main memory 108. The memory controller writes the new data, or updates stored data, to the persistent main memory 108 of the first computing node 102. At the same time, the redundant data is sent to the memory controller 110 on the second computing node. The memory controller on the second computing node writes the redundant data into local persistent main memory 108, and then sends a response to the memory controller on the first computing node. When all memory writes have been completed by persistent main memory 108 on the first computing node and persistent main memory 108 on the second computing node, memory controller 110 on the first computing node sends a response to the system write transaction to CPU 106 on the first computing node. In an example, multiple additional computing nodes can act as remote memories containing redundant data for the first computing node 102. In a further example, the redundant data for the first computing node 102 can be distributed across multiple computing nodes.
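A sketch of this two-node mirroring is shown below, issuing both writes concurrently and acknowledging the CPU only after both complete; the Node class, its write method, and the thread-pool model are assumptions made for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

class Node:
    """Illustrative stand-in for a computing node's persistent main memory."""
    def __init__(self):
        self.memory = {}
    def write(self, address: int, data: bytes) -> bool:
        self.memory[address] = data
        return True                              # completion response

def mirrored_write_two_nodes(first: Node, second: Node, address: int, data: bytes) -> bool:
    with ThreadPoolExecutor(max_workers=2) as pool:
        local_done = pool.submit(first.write, address, data)
        remote_done = pool.submit(second.write, address, data)
        # Acknowledge the system write transaction only when both copies are durable.
        return local_done.result() and remote_done.result()
```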
[0027] When memory controller 110 in a system employing a RAID 1 level receives a request to read data, the memory controller reads the data from local persistent main memory 108. If the data in persistent main memory 108 has a locally-uncorrectable error, or the local persistent main memory is inaccessible, the memory controller 110 accesses the redundant data in the persistent main memory 108 of the second computing node 102. In the event of such a failure, the memory controller 110 can identify the memory location of the inaccessible data. A memory location can be in a memory address space. In some embodiments, a table of memory address spaces can be built to expedite the identification of memory locations corresponding to inaccessible data. The table of memory address spaces can be built by the memory controller 110.
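Such a table might be as simple as a list of failed address ranges, as in the hypothetical sketch below; the range granularity and class name are assumptions, not part of the disclosure.

```python
class FailedRangeTable:
    """Records address ranges found inaccessible so later reads go straight to redundant data."""
    def __init__(self):
        self.ranges = []                          # list of (start, end) address tuples

    def record_failure(self, start: int, end: int) -> None:
        self.ranges.append((start, end))

    def is_failed(self, address: int) -> bool:
        return any(start <= address < end for start, end in self.ranges)

table = FailedRangeTable()
table.record_failure(0x4000, 0x8000)
assert table.is_failed(0x5000) and not table.is_failed(0x9000)
```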
[0028] It is to be understood that the block diagram of Fig. 1 is not intended to indicate that computing system 100 is to include all of the components shown in Fig. 1 in every case. Further, any number of additional components may be included within computing system 100, depending on the details of the specific implementation.
[0029] Fig. 2 is a block diagram of a computing system 200 including fault tolerance. In an example, computing system 200 is a server cluster. The computing system 200 includes a number of nodes, such as computing node 202. In a further example, computing system 200 may also include a number of memories, such as persistent main memory 210. A persistent main memory 210 is a collection of memory, such as a collection of memory devices, for storing a large amount of data. The computing nodes 202 and persistent main memories 210 are communicably coupled to each other through a network 204. The computing system 200 can include several compute nodes, such as several tens or even thousands of compute nodes, and several persistent main memories.
[0030] The compute nodes 202 include a Central Processing Unit (CPU) 206 to execute stored instructions. The CPU 206 can be a single core processor, a multi-core processor, or any other suitable processor. In an example, a compute node 202 includes a single CPU. In another example, a compute node 202 includes multiple CPUs, such as two CPUs, three CPUs, or more.
[0031] Compute node 202 further includes a memory controller 208. The memory controller 208 communicates with persistent main memories 210 and controls access to persistent main memories 210 by the CPU 206. In an example, the memory controller 208 is a RAID memory controller. When the memory controller 208 receives a request to write to persistent main memory 210, the memory controller 208 generates a write transaction to the selected persistent main memory 210.
[0032] Each compute node 202 can be communicably coupled with the persistent main memories 210. The persistent main memory 210 can be volatile dynamic random access memory (DRAM) with battery backup, non-volatile phase change random access memory (PCRAM), spin transfer torque-magnetoresistive random access memory (STT-MRAM), resistive random access memory (reRAM), memristor, FLASH, or other types of memory devices. In an example, the persistent main memory 210 is solid state, persistent, dense, fast memory. Fast memory can be memory having an access time similar to DRAM memory. Persistent main memory 210 is remote to computing nodes 202 and is accessed via a network 204, such as a communication fabric.
[0033] The persistent main memories 210 can be combined to form a pool of nonvolatile, persistent main memory. Regions of the pool of non-volatile, persistent main memory are allocated to computing nodes 202 within computing system 200. In the event of a failure of a particular computing node 202, the region of the pool of non-volatile, persistent main memory allocated to the failed computing node may be reallocated to a functioning computing node 202. In this way, access to the data in the non-volatile memory region is not lost when the computing node fails. In a further example, regions of the pool of non-volatile, persistent main memory can be accessed by additional computing nodes 202.
[0034] In a further example, the computing system 200 includes a single persistent memory. The persistent memory is divided into regions (ranges of memory address spaces). Each region is assigned to a computing node to act as persistent main memory 210 for that computing node. Each region can also be accessed by additional computing nodes. In the event of a failure of the assigned computing node, the region of memory can be reassigned to a functioning computing node.
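The region bookkeeping described in the last two paragraphs could be modeled as in the sketch below; the pool size, region size, and node identifiers are illustrative assumptions.

```python
class MemoryPool:
    """Divides a pooled persistent memory into regions and tracks which node owns each region."""
    def __init__(self, total_size: int, region_size: int):
        self.region_size = region_size
        self.num_regions = total_size // region_size
        self.owners = {}                               # region index -> owning node id

    def allocate(self, region: int, node: str) -> None:
        self.owners[region] = node

    def reassign_on_failure(self, failed_node: str, replacement: str) -> None:
        for region, owner in self.owners.items():
            if owner == failed_node:
                self.owners[region] = replacement      # the region's data stays accessible

pool = MemoryPool(total_size=1 << 40, region_size=1 << 30)
pool.allocate(0, "node-A")
pool.reassign_on_failure("node-A", "node-B")
assert pool.owners[0] == "node-B"
```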
[0035] Computing system 200 also includes remote memory 212. Remote memory is a persistent main memory, such as persistent main memory 210, which has been designated by the system 200 to act as remote memory 212. In an example, multiple persistent main memories can be designated to act as remote memories 212. Remote memory 212 is communicably coupled to the computing nodes through a network 204, such as a communication fabric.
[0036] Remote memory 212 includes redundant data 214. Remote memory 212 provides a fault tolerance capability (i.e., a system and/or method of data recovery to ensure data integrity) to persistent main memory(s) 210 via redundant data 214. The redundant data stored by the remote memory(s) 212 can be accessed by the computing nodes 202. The redundant data stored by the remote memory(s) 212 can also be accessed by additional computing nodes, such as in the event of a failure of a computing node or data corruption.
[0037] In an embodiment, persistent main memories 210 and remote memory 212 are organized as a RAID 1 group. Remote memory 212 is used to hold data controlled by memory controllers 208. In this embodiment, memory controller 208 is a RAID controller. Each memory controller 208 independently performs all RAID calculations for the region of the pool of non-volatile, persistent main memory allocated to the particular memory controller 208. The memory controller 208 can include a data recovery engine configured to rebuild data after an access failure renders the data inaccessible. The data recovery engine includes a combination of hardware and programming. For example, the data recovery engine can include a non-transitory, computer-readable medium for storing instructions, one or more processors for executing the instructions, or a combination thereof.
[0038] Upon receiving a system write transaction from CPU 206, memory controller 208 writes the new data to the selected persistent main memory 210. In addition, the memory controller 208 sends a command to update the redundant data 214 stored in remote memory 212. In an example, the redundant data 214 is written with a copy of the new data at the same time that the new data is written to persistent main memory 210. When all memory writes have been acknowledged by persistent main memory 210 and remote memory 212, memory controller 208 sends a response to the system write transaction to CPU 206 indicating that the write has been completed and made durable. In an example, computing system 200 includes a single remote memory 212. In another example, computing system 200 includes multiple remote memories 212.
[0039] When a memory controller 208 in computing node 202 of a system 200 employing a RAID protocol receives a request from CPU 206 to read data, the memory controller 208 reads the data from the selected persistent main memory 210. If the data in the persistent main memory 210 has a locally-uncorrectable error, or the persistent main memory is inaccessible, the memory controller 208 accesses remote memory 212 to read the redundant data 214. In the event of such a failure, the memory controller 208 can identify the memory location of the inaccessible data. A memory location can be a memory address space. In some embodiments, a table of memory address spaces can be built to expedite the identification of memory locations corresponding to inaccessible data. The table of memory address spaces can be built by the memory controller 208.
[0040] The memory controller 208 regenerates the data, such as with the data recovery engine, from redundant data 214 and sends the regenerated data to CPU 206, completing the read request. If the selected persistent main memory 210 is available, the regenerated data is also saved to the selected persistent main memory 210. If the selected persistent main memory 210 is not available, the regenerated data can be saved to an additional persistent main memory. The regenerated data can then be accessed by the computing nodes 202.
[0041] In another embodiment, RAID 5 across persistent main memories 210 is employed to ensure data integrity. For RAID 5, parity is distributed across all persistent main memories 210, rather than placed solely in remote memory 212, in order to maximize available memory bandwidth. In this RAID mode, memory controller 208 is a RAID controller. The RAID controller can include a data recovery engine configured to regenerate data after a hardware failure renders the data inaccessible.
[0042] Upon receiving a request from CPU 206 to write data to memory, memory controller 208 reads the old data from the persistent main memory 210 selected by the write address, and performs an exclusive or (XOR) operation using the old data and the new write data. The memory controller 208 then writes the new data to the persistent main memory 210 selected by the write address. Additionally, the memory controller 208 sends a write delta (write changes) command with the results of the XOR operation to a second persistent main memory 210. The second persistent main memory is selected based on the write address, and contains the RAID 5 parity data for the specified write address. The second persistent main memory 210 will read the RAID 5 parity data, XOR it with the write delta data, and write the result back into the RAID 5 parity data. In an example, the RAID 5 parity data is updated at the same time that the data is written to persistent main memory 210. When all memory writes have been completed by persistent main memory 210 and the parity persistent main memory 210, memory controller 208 sends a response to the system write transaction to CPU 206 indicating that the write has been completed and made durable. In an example, computing system 200 includes multiple persistent main memories 210, so as to reduce the memory capacity overhead of RAID 5.
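A compact sketch of a rotating RAID 5 layout and the corresponding delta-based write is shown below; the line size, the particular address-to-stripe mapping, and the dictionary stores are assumptions chosen only to make the example concrete.

```python
LINE = 64   # assumed cache-line size in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_placement(address: int, num_memories: int):
    """Map an address to (data_memory, parity_memory, stripe) under an illustrative rotating layout."""
    stripe, offset = divmod(address // LINE, num_memories - 1)
    parity = stripe % num_memories                      # parity rotates across the memories
    data = offset if offset < parity else offset + 1    # data units skip the parity memory
    return data, parity, stripe

def raid5_write(memories: list, address: int, new_data: bytes) -> None:
    d, p, stripe = raid5_placement(address, len(memories))
    old = memories[d].get(address, bytes(len(new_data)))
    delta = xor(old, new_data)                          # write delta = old XOR new
    memories[d][address] = new_data                     # data memory selected by the write address
    old_parity = memories[p].get(stripe, bytes(len(new_data)))
    memories[p][stripe] = xor(old_parity, delta)        # parity memory: parity ^= delta

# Example: three persistent main memories with parity spread across them
mems = [{}, {}, {}]
raid5_write(mems, 0x0000, b"A" * LINE)
raid5_write(mems, 0x0040, b"B" * LINE)
```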
[0043] When memory controller 208 in a system employing RAID 5 receives a request from CPU 206 to read data, the memory controller attempts to read the data from a first persistent main memory 210 selected by the read address. If the data in the first persistent main memory 210 has a locally-uncorrectable error, or the persistent main memory is inaccessible, the memory controller 208 will read the RAID 5 parity associated with the uncorrectable data, and read all other data associated with the RAID 5 parity from other persistent main memories 210. The memory controller 208 will then compute the XOR of the RAID 5 parity with all associated data to regenerate the uncorrectable local data. The regenerated data is returned to CPU 206. If the first persistent main memory 210 is available, the regenerated data is saved to the first persistent main memory 210. If the first persistent main memory 210 is not available, the regenerated data can be saved to an alternate persistent main memory. The regenerated data can then be accessed from the alternate persistent main memory by any of the computing nodes 202.
[0044] The computing system 200 can be adapted to employ other standard RAID levels, i.e., RAID level 2, 3, 4, or 6. Depending on the RAID level employed, remote memory 212 may or may not be used.
[0045] It is to be understood that the block diagram of Fig. 2 is not intended to indicate that computing system 200 is to include all of the components shown in Fig. 2 in every case. Further, any number of additional components may be included within computing system 200, depending on the details of the specific implementation.
[0046] Fig. 3 is a process flow diagram of a method 300 of writing data to memory with fault tolerance. At block 302, a request is received in a memory controller to write data to a non-volatile, persistent main memory. The targeted memory address space is an address space of the non-volatile, persistent main memory, such as persistent main memory 108 or 210, of a computing system. In an example, the memory controller resides in a computing node, such as computing node 102 or 202. The request originates from a processor, such as CPU 106 or 206.
[0047] At block 304, the data is written to the persistent main memory. In an example, the persistent main memory resides in a computing node. In another example, the persistent main memory is remote from a computing node and forms a pool of persistent main memory, and the data is written to a region of the pool of nonvolatile, persistent main memory allocated to the computing node.
[0048] At block 306, redundant data is written to remote memory. In an example, the redundant data is a mirrored copy of the data. In another example, the redundant data is written with the result of calculating an XOR of the new write data, the old data, and the old RAID parity. In a further example, blocks 304 and 306 occur simultaneously.
[0049] It is to be understood that the process flow diagram of Fig. 3 is not intended to indicate that the steps of the method 300 are to be executed in any particular order, or that all of the steps of the method 300 are to be included in every case. Further, any number of additional steps not shown in Fig. 3 may be included within the method 300, depending on the details of the specific implementation.
[0050] Fig. 4 is a process flow diagram of a method 400 of accessing memory with fault tolerance. At block 402, a request is received in a memory controller to access data stored in a non-volatile, persistent main memory. In an example, the memory controller resides in a computing node, such as computing node 102 or 202. The request originates from a processor, such as CPU 106 or 206.
[0051] At block 404, the memory controller attempts to access the non-volatile, persistent main memory. In particular, the memory controller initiates access to the address space of the non-volatile, persistent main memory in which the data resides. At block 406, the memory controller determines whether it is able to access the persistent main memory: it either receives the requested data, receives an error message, or the access times out. If the memory controller receives the requested data at block 408, then at block 410 the memory controller returns the data to the CPU.
[0052] If, at block 406, the memory controller is not able to access the persistent main memory, at block 414 notice of a failure to access the persistent main memory is received. In an example, the failure is due to a failed non-volatile, persistent main memory. A non-volatile persistent main memory failure can be due to a damaged memory device, a failed persistent main memory module, corrupted data, or any other failure. At block 416, redundant data is accessed. Redundant data can be stored on remote memory, on other persistent main memories, on additional computing nodes, or on a combination of remote memory, persistent main memories and additional computing nodes. In an example, the redundant data is a mirrored copy of the data produced when the system is in mirroring mode. In another example, the redundant data is a combination of parity data produced when the system is in RAID mode, and other data associated with the parity data. At block 418, the data is reconstructed from the redundant data. In RAID mode, the data can be reconstructed using a data recovery engine.
[0053] At block 420, the memory controller determines if the non-volatile, persistent main memory failed. If the memory controller determines that the nonvolatile, persistent main memory has not failed, at block 422 the reconstructed data can be saved to the persistent main memory. At block 410, data can be returned to the CPU. If the memory controller determines that the non-volatile, persistent main memory has failed, method 400 can proceed to block 424. At block 424, the memory controller determines if alternate persistent main memory exists. If alternate persistent main memory does not exist, data is returned to the CPU at block 410. If alternate persistent main memory does exist, at block 426 the reconstructed data can be saved to an alternate persistent main memory. In an example, the alternate persistent main memory can be a remote memory. In another example, the alternate persistent main memory can be a persistent main memory of another computing node. At block 410, data is returned to the CPU.
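The overall flow of method 400 can be summarized in the short sketch below, with the block numbers of Fig. 4 noted as comments; the dictionary stores and the mirrored-redundancy simplification are illustrative assumptions (a RAID mode would reconstruct rather than simply copy).

```python
def fault_tolerant_read(local: dict, redundant: dict, alternate, address: int,
                        local_failed: bool = False) -> bytes:
    # Blocks 404-408: attempt to access the local non-volatile, persistent main memory.
    if not local_failed and address in local:
        return local[address]                     # block 410: return data to the CPU
    # Block 414: notice of failure; block 416: access the redundant data.
    data = redundant[address]
    # Block 418: reconstruction (trivial here because the redundancy is a mirrored copy).
    if not local_failed:
        local[address] = data                     # block 422: save to the persistent main memory
    elif alternate is not None:
        alternate[address] = data                 # blocks 424-426: save to an alternate memory
    return data                                   # block 410: return data to the CPU
```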
[0054] It is to be understood that the process flow diagram of Fig. 4 is not intended to indicate that the steps of the method 400 are to be executed in any particular order, or that all of the steps of the method 400 are to be included in every case. Further, any number of additional steps not shown in Fig. 4 may be included within the method 400, depending on the details of the specific implementation.
Example 1
[0055] A computing system is described herein. The computing system includes a processor and a persistent main memory including a fault tolerance capability. The computing system also includes a memory controller. The memory controller is to store data in the persistent main memory, create redundant data, and store the redundant data remotely with respect to the persistent main memory. The memory controller is further to access the redundant data during failure of the persistent main memory.
[0056] The fault tolerance capability can include storing data to the persistent main memory and mirroring the data to a remote persistent main memory. The fault tolerance capability can also include storing data to the persistent main memory and updating at least one parity remote memory based on a difference between old and new data. The computing system can also include a plurality of computing nodes and a plurality of persistent main memories, each computing node including a processor and a memory controller to communicate with the persistent main memories in response to a request by the processor, wherein the persistent main memories reside remotely to the computing nodes, the persistent main memories including a pool of shared persistent main memory, regions of which are allocated to each computing node.
Example 2
[0057] A computing system including fault tolerance is described herein. The computing system includes a persistent main memory for storing data and a remote persistent main memory for storing redundant data. The computing system also includes a memory controller. The memory controller accesses the redundant data when the data in the persistent main memory cannot be accessed.
[0058] The redundant data can include a mirrored copy of the data. The computing system can include a plurality of nodes, each node including a persistent main memory for storing data, and a remote persistent memory for storing redundant data, the remote persistent memory acting as a parity node, including a check sum of parity bits of data in the persistent main memories of the nodes of the computing system, wherein the check sum is updated when data is written to the persistent main memory of a node, and wherein upon failure to read the data, the check sum and all other data values contributing to the check sum are read and combined to reconstruct lost data. The computing system can include a plurality of computing nodes and a plurality of persistent main memories, each computing node including a processor and a memory controller to communicate with the persistent main memories in response to a request by the processor. The plurality of persistent main memories can include a first persistent main memory selected by address for storing data, a second persistent main memory selected by address for storing a mirrored copy of the data, wherein data written to the first persistent main memory is also written to the second persistent main memory and wherein upon failure to read the data, the mirrored copy of the data is read. The computing system can include a plurality of computing nodes and a plurality of persistent main memories, each computing node including a processor and a memory controller to communicate with the persistent main memories in response to a request by the processor. The plurality of persistent main memories including a first persistent main memory selected by address for storing data and a second persistent main memory selected by address for storing a check sum associated with the data, wherein the check sum in the second persistent main memory is updated based on a difference between old data and new data stored in the first persistent main memory and wherein upon failure to read the data, the check sum and all other data values contributing to the check sum are read and combined to reconstruct lost data. The persistent main memories can include a pool of shared persistent main memory, regions of which are allocated to each computing node. In event of a failure of an original computing node to which a region of the pool of shared persistent main memory is allocated, a functional computing node accesses the region of the pool of shared persistent main memory so that data in the region of the pool of shared persistent memory is always available.
Example 3
[0059] A method is described herein. The method includes receiving, in a memory controller, a request to write data to a persistent main memory of a computing system. The method also includes writing the data to the persistent main memory. The method further includes writing redundant data to a persistent memory remote to the persistent main memory.
[0060] The method can further include simultaneously writing the data to the persistent main memory and writing the redundant data to the persistent memory remote to the persistent main memory. Writing redundant data can include updating a check sum of parity bits of data in the computing system, wherein the data in the computing system is stored on persistent main memory of at least two nodes. The method can further include receiving, in a memory controller, a request to access data in the persistent main memory, attempting to access the persistent main memory, receiving notice of a failure of the persistent main memory, reconstructing the data from the redundant data, saving the data, and returning the data to complete the request.
[0061] The present examples may be susceptible to various modifications and alternative forms and have been shown only for illustrative purposes. For example, the present techniques support both reading and writing operations to a data structure cache. Furthermore, it is to be understood that the present techniques are not intended to be limited to the particular examples disclosed herein. Indeed, the scope of the appended claims is deemed to include all alternatives, modifications, and equivalents that are apparent to persons skilled in the art to which the disclosed subject matter pertains.

Claims

What is claimed is:
1. A computing system, comprising:
a processor;
a persistent main memory comprising a fault tolerance capability; and
a memory controller to:
store data in the persistent main memory;
create redundant data;
store the redundant data remotely with respect to the persistent main memory; and
access the redundant data during failure of the persistent main memory.
2. The computing system of claim 1, wherein the fault tolerance capability comprises storing data to the persistent main memory and mirroring the data to a remote persistent memory.
3. The computing system of claim 1, wherein the fault tolerance capability comprises storing data to the persistent main memory and updating at least one parity remote memory based on a difference between old data and new data.
4. The computing system of claim 1, wherein the computing system comprises a plurality of computing nodes and a plurality of persistent main memories, each computing node comprising:
a processor; and
a memory controller to communicate with the persistent main memories in response to a request by the processor,
wherein the persistent main memories reside remotely to the computing nodes, the persistent main memories comprising a pool of shared persistent main memory, regions of which are allocated to each computing node.
5. A computing system comprising fault tolerance, the computing system comprising:
a persistent main memory for storing data;
a remote persistent main memory for storing redundant data; and
a memory controller,
wherein the memory controller accesses the redundant data when the data in the persistent main memory cannot be accessed.
6. The system of claim 5, wherein the redundant data comprises a mirrored copy of the data.
7. The system of claim 5, wherein the computing system comprises:
a plurality of nodes, each node comprising a persistent main memory for storing data; and
a remote persistent memory for storing redundant data, the remote persistent memory acting as a parity node, comprising a check sum of parity bits of data in the persistent main memories of the nodes of the computing system,
wherein the check sum is updated when data is written to the persistent main memory of a node; and
wherein upon failure to read the data, the check sum and all other data values contributing to the check sum are read and combined to reconstruct lost data.
8. The system of claim 5, wherein the computing system comprises a plurality of computing nodes and a plurality of persistent main memories, each computing node comprising:
a processor; and
a memory controller to communicate with the persistent main memories in response to a request by the processor;
the plurality of persistent main memories comprising a first persistent main memory selected by address for storing data, a second persistent main memory selected by address for storing a mirrored copy of the data, wherein data written to the first persistent main memory is also written to the second persistent main memory; and
wherein upon failure to read the data, the mirrored copy of the data is read.
9. The system of claim 5, wherein the computing system comprises a plurality of computing nodes and a plurality of persistent main memories, each computing node comprising:
a processor; and
a memory controller to communicate with the persistent main memories in response to a request by the processor;
the plurality of persistent main memories comprising a first persistent main memory selected by address for storing data, a second persistent main memory selected by address for storing a check sum associated with the data;
wherein the check sum in the second persistent main memory is updated based on a difference between old data and new data stored in the first persistent main memory; and
wherein upon failure to read the data, the check sum and all other data values contributing to the check sum are read and combined to reconstruct lost data.
10. The system of claim 9, wherein the persistent main memories comprise a pool of shared persistent main memory, regions of which are allocated to each computing node.
11. The system of claim 10, wherein in event of a failure of an original computing node to which a region of the pool of shared persistent main memory is allocated, a functional computing node accesses the region of the pool of shared persistent main memory so that data in the region of the pool of shared persistent memory is always available.
12. A method, comprising:
receiving, in a memory controller, a request to write data to a persistent main memory of a computing system;
writing the data to the persistent main memory; and
writing redundant data to a persistent memory remote to the persistent main memory.
13. The method of claim 12, further comprising simultaneously writing the data to the persistent main memory and writing the redundant data to the persistent memory remote to the persistent main memory.
14. The method of claim 12, writing redundant data comprising updating a check sum of parity bits of data in the computing system, wherein the data in the computing system is stored on persistent main memory of at least two nodes.
15. The method of claim 12, further comprising:
receiving, in a memory controller, a request to access data in the persistent main memory;
attempting to access the persistent main memory;
receiving notice of a failure of the persistent main memory;
reconstructing the data from the redundant data;
saving the data; and
returning the data to complete the request.