WO2023023223A1 - Efficient partitioning for storage system resiliency groups - Google Patents

Efficient partitioning for storage system resiliency groups

Info

Publication number
WO2023023223A1
WO2023023223A1 (PCT/US2022/040714)
Authority
WO
WIPO (PCT)
Prior art keywords
storage
data
storage system
resiliency
group
Prior art date
Application number
PCT/US2022/040714
Other languages
English (en)
Inventor
Robert Lee
Hari Kannan
Original Assignee
Pure Storage, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 17/407,806 (US20210382800A1)
Application filed by Pure Storage, Inc. filed Critical Pure Storage, Inc.
Publication of WO2023023223A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 - Improving or facilitating administration, e.g. storage management
    • G06F 3/0607 - Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08 - Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10 - Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1076 - Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 - Configuration or reconfiguration of storage systems
    • G06F 3/0632 - Configuration or reconfiguration of storage systems by initialisation or re-initialisation of storage systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 - In-line storage system
    • G06F 3/0683 - Plurality of storage devices
    • G06F 3/0689 - Disk arrays, e.g. RAID, JBOD

Definitions

  • Figure 1C illustrates a third example system for data storage in accordance with some implementations.
  • Figure 1D illustrates a fourth example system for data storage in accordance with some implementations.
  • FIG. 2D shows a storage server environment, which uses embodiments of the storage nodes and storage units of Figs. 1-3 in accordance with some embodiments.
  • Figure 3A sets forth a diagram of a storage system that is coupled for data communications with a cloud services provider in accordance with some embodiments of the present disclosure.
  • Figure 3B sets forth a diagram of a storage system in accordance with some embodiments of the present disclosure.
  • Figure 3C sets forth an example of a cloud-based storage system in accordance with some embodiments of the present disclosure.
  • Figure 3E illustrates an example of a fleet of storage systems for providing storage services.
  • Figure 5 sets forth a flow chart illustrating an additional example method for dynamically forming a failure domain in a storage system that includes a plurality of blades according to embodiments of the present disclosure.
  • Figure 6 sets forth a flow chart illustrating an additional example method for dynamically forming a failure domain in a storage system that includes a plurality of blades according to embodiments of the present disclosure.
  • Figure 10 sets forth a storage system embodiment that evaluates storage system resources and rules in terms of data survivability versus data capacity efficiency, and produces an explicit trade-off determination to bias a resiliency groups generator in the formation of resiliency groups of storage system resources.
  • Figure 11 illustrates a RAID stripe, and the data capacity efficiency for a resiliency group that supports the N+R coding of the RAID stripe and has S spares; a worked example of this efficiency calculation follows the list of figures.
  • Figure 15B illustrates a multi-chassis storage system, in which resiliency groups can span across multiple chassis, in various embodiments.
  • Figure 15D illustrates resiliency groups formed with portions of blades as storage system resources having membership in resiliency groups.
  • Figure 15E illustrates resiliency groups formed with storage drives as storage system resources having membership in resiliency groups.
  • Figure 15F illustrates resiliency groups formed with portions of storage drives as storage system resources having membership in resiliency groups.
  • Figure 16 illustrates write groups in a resiliency group, including a widest possible write group, for various embodiments.
  • Figure 17 illustrates zigzag coding of RAID stripes, for increasing data capacity efficiency in one embodiment.
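To make the data capacity efficiency referenced in Figure 11 concrete, the sketch below gives a minimal calculation. It is an illustration only, not code from the publication: the function name, the example numbers, and the assumption that the widest stripe spans every non-spare member of a resiliency group are all hypothetical. Under those assumptions a group of G members with S spares carries stripes of N data shares and R redundancy shares with N + R = G - S, and the efficiency is N / (N + R).

```python
def capacity_efficiency(group_size: int, redundancy: int, spares: int) -> float:
    """Fraction of raw capacity left for user data in one resiliency group.

    Hypothetical model: the widest stripe spans every non-spare member,
    so N + R = group_size - spares.
    """
    usable = group_size - spares          # members a stripe can occupy (N + R)
    data_shares = usable - redundancy     # N
    if data_shares <= 0:
        raise ValueError("group too small for the requested redundancy and spares")
    return data_shares / usable           # N / (N + R)

# Example: a 28-member resiliency group, dual redundancy (R = 2), two spares.
print(f"{capacity_efficiency(28, redundancy=2, spares=2):.2%}")   # -> 92.31%
```

Under this model, widening the resiliency group, or reducing redundancy and spares, raises the fraction of raw capacity that holds user data, which is the trade-off against data survivability that Figure 10 describes.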
  • Storage system mechanisms and storage system embodiments described herein emphasize reliability and data survivability, especially as storage systems expand. It is desirable that adding storage memory to a storage system should not adversely affect data survivability in the face of memory media failure and component failure. Storage systems with failure domains and various considerations for reliability and data survivability are described below with reference to Figures 1-9. Storage systems with resiliency groups and various considerations for reliability and data survivability are described below with reference to Figures 10-18. Features and mechanisms from the various embodiments may be further combined in various combinations in further embodiments of storage systems.
  • the various modules described herein can be implemented in software, hardware, firmware and combinations thereof in various embodiments.
  • Various types of storage memory are applicable to the embodiments, as are heterogeneity and homogeneity of components and storage memory.
  • a failure domain may represent a group of components within the storage system that can be negatively impacted by the failure of another component in the storage system.
  • Such a failure domain may be embodied, for example, as a group of blades that are physically dependent on a particular component (e.g., a group of blades connected to the same power source) or as a group of blades that are logically dependent on a particular component.
  • a failure domain may consist of a group of blades that some piece of data (e.g., all data in a database) is striped across. In such an example, a failure of one of the blades could negatively impact the group of blades that are logically dependent upon each other, as the portion of the piece of data that is stored on the failed blade could be lost.
  • dynamically forming a failure domain in a storage system may be carried out by identifying, in dependence upon a failure domain formation policy, an available configuration for a failure domain.
  • the failure domain formation policy may be embodied, for example, as a set of rules that are used to identify satisfactory configurations for a particular failure domain.
  • the failure domain formation policy may include rules, for example, that specify: the maximum number of blades in each chassis that may be included in the failure domain; the maximum number of blades in a particular failure domain that may fail without data loss; the maximum number of chassis in a particular failure domain that may fail without data loss; the maximum number of network hops that are permissible between two or more blades in a particular failure domain; the minimum amount of network bandwidth that must be available between two or more blades in a particular failure domain; the minimum amount of storage capacity for one or more blades in a particular failure domain; and the maximum age for one or more blades in a particular failure domain.
  • one or more characteristics of the storage system may change over time, such that a particular set of blades may adhere to a failure domain formation policy at one point in time, but the same set of blades may not adhere to a failure domain formation policy at another point in time.
  • a failure domain formation policy includes one or more rules that specify the minimum amount of storage capacity required for one or more blades in a particular failure domain.
  • a particular blade may initially include an amount of capacity that exceeds the minimum amount of storage capacity required for the blades in the particular failure domain.
  • the particular blade may no longer include an amount of capacity that exceeds the minimum amount of storage capacity required for the blades in the particular failure domain.
  • any failure domain that includes the particular blade would no longer adhere to the failure domain formation policy.
  • the failure domain formation policy may therefore be applied on a continuous basis, according to a predetermined schedule, at the behest of a user such as a system administrator, or in some other manner so as to verify that a particular failure domain continues to adhere to the failure domain formation policy.
  • the failure domain formation policy also contains one or more rules specifying that the failure domain should be able to tolerate the failure of an entire chassis without the loss of user data, while the loss of two or more chassis can result in user data being lost. Readers will appreciate that while many possible configurations that include three blades can be identified, some of those configurations would not adhere to the failure domain formation policy. For example, a configuration in which all three blades are located on a single chassis would not adhere to the failure domain formation policy as the failure of the single chassis would result in the loss of user data, given that all three blades in the failure domain would be lost.
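As a hedged illustration of the chassis rule in the example above, the following sketch checks whether a candidate configuration of blades, each tagged with the chassis that houses it, can lose any single chassis without losing user data. The function name, the parameter names, and the idea of expressing the rule as a per-chassis blade count are assumptions for illustration, not taken from the publication.

```python
from collections import Counter

def tolerates_single_chassis_failure(blade_chassis: list[str],
                                     rebuildable_losses: int) -> bool:
    """True if losing any one chassis removes no more blades than the
    data protection scheme can rebuild."""
    per_chassis = Counter(blade_chassis)
    return max(per_chassis.values()) <= rebuildable_losses

# Three blades protected so that one blade loss is survivable:
print(tolerates_single_chassis_failure(["chassis-1", "chassis-1", "chassis-2"], 1))  # False
print(tolerates_single_chassis_failure(["chassis-1", "chassis-2", "chassis-3"], 1))  # True
```

The first configuration fails the policy because losing chassis-1 removes two blades at once; spreading the three blades across three chassis satisfies it.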
  • dynamically forming a failure domain in a storage system may be carried out by creating the failure domain in accordance with the available configuration for a failure domain.
  • Creating the failure domain in accordance with the available configuration may be carried out, for example, by configuring a storage array controller or other component that writes data to the storage system to write data for applications, users, or other entities that are associated with a particular failure domain to the blades that are included in the available configuration that was identified for the failure domain.
  • the failure domain can include at least one blade mounted within a first chassis and another blade mounted within a second chassis.
  • the SAN 158 may be implemented with a variety of data communications fabrics, devices, and protocols.
  • the fabrics for SAN 158 may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface ('SAS'), or the like.
  • Data communications protocols for use with SAN 158 may include Advanced Technology Attachment ('ATA'), Fibre Channel Protocol, Small Computer System Interface ('SCSI'), Internet Small Computer System Interface ('iSCSI'), HyperSCSI, Non-Volatile Memory Express ('NVMe') over Fabrics, or the like.
  • SAN 158 is provided for illustration, rather than limitation.
  • Other data communication couplings may be implemented between computing devices 164A-B and storage arrays 102A-B.
  • the LAN 160 may also be implemented with a variety of fabrics, devices, and protocols.
  • the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like.
  • Data communication protocols for use in LAN 160 may include Transmission Control Protocol ('TCP'), User Datagram Protocol ('UDP'), Internet Protocol ('IP'), HyperText Transfer Protocol ('HTTP'), Wireless Access Protocol ('WAP'), Handheld Device Transport Protocol ('HDTP'), Session Initiation Protocol ('SIP'), Real Time Protocol ('RTP'), or the like.
  • Storage arrays 102A-B may provide persistent data storage for the computing devices 164A-B.
  • Storage array 102A may be contained in a chassis (not shown), and storage array 102B may be contained in another chassis (not shown), in implementations.
  • Storage array 102A and 102B may include one or more storage array controllers 110A-D (also referred to as "controller" herein).
  • a storage array controller 110A-D may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers 110A-D may be configured to carry out various storage tasks.
  • Storage array controller 110A-D may be implemented in a variety of ways, including as a Field Programmable Gate Array ('FPGA'), a Programmable Logic Chip ('PLC'), an Application Specific Integrated Circuit ('ASIC'), System-on-Chip ('SOC'), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters.
  • Storage array controller 110A-D may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160. In some implementations, storage array controller 110A-D may be independently coupled to the LAN 160.
  • storage array controller 110A-D may include an I/O controller or the like that couples the storage array controller 110A-D for data communications, through a midplane (not shown), to a persistent storage resource 170A-B (also referred to as a "storage resource” herein).
  • the persistent storage resource 170A-B may include any number of storage drives 171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory ('NVRAM') devices (not shown).
  • the NVRAM devices of a persistent storage resource 170A-B may be configured to receive, from the storage array controller 110A-D, data to be stored in the storage drives 171A-F.
  • the data may originate from computing devices 164A-B.
  • writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive 171A-F.
  • the storage array controller 110A-D may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives 171A-F.
  • Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller 110A-D writes data directly to the storage drives 171A-F.
  • the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM.
  • the NVRAM device is referred to as "non-volatile" because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device.
  • a power source may be a battery, one or more capacitors, or the like.
  • the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives 171A-F.
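A minimal model of the NVRAM write path described above is sketched below. It is only an illustration under assumed names: writes are acknowledged once they land in the NVRAM buffer, are destaged to the storage drives in the background, and are flushed to persistent storage when external power is lost.

```python
class DriveStub:
    """Stand-in for a storage drive that records what has been persisted."""
    def __init__(self):
        self.persisted = []

    def write(self, data: bytes) -> None:
        self.persisted.append(data)

class NvramWriteBuffer:
    """Toy model of NVRAM used as a quickly accessible write buffer."""
    def __init__(self, drive: DriveStub):
        self.drive = drive
        self.pending = []               # data held only in NVRAM so far

    def write(self, data: bytes) -> str:
        self.pending.append(data)       # lands in RAM-speed NVRAM
        return "ack"                    # acknowledged before reaching the drives

    def flush(self) -> None:
        while self.pending:             # background destage to the storage drives
            self.drive.write(self.pending.pop(0))

    def on_power_loss(self) -> None:
        self.flush()                    # battery or capacitors power this destage

buffer = NvramWriteBuffer(DriveStub())
buffer.write(b"record")                 # low-latency acknowledgement
buffer.on_power_loss()                  # contents reach persistent storage
```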
  • storage drive 171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power.
  • storage drive 171A-F may correspond to non-disk storage media.
  • the storage drive 171A-F may be one or more solid-state drives ('SSDs'), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device.
  • storage drive 171A-F may include mechanical or spinning hard disk, such as hard-disk drives ('HDD').
  • control information may be stored with an associated memory block as metadata.
  • control information for the storage drives 171A-F may be stored in one or more particular memory blocks of the storage drives 171 A-F that are selected by the storage array controller 110A-D.
  • the selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information.
  • the identifier may be utilized by the storage array controllers 110A-D in conjunction with storage drives 171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers 110A-D may issue a command to locate memory blocks that contain control information.
  • control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171A-F.
  • storage array controllers 110A-D may offload device management responsibilities from storage drives 171 A-F of storage array 102A-B by retrieving, from the storage drives 171 A-F, control information describing the state of one or more memory blocks in the storage drives 171A-F. Retrieving the control information from the storage drives 171 A-F may be carried out, for example, by the storage array controller 110A-D querying the storage drives 171 A-F for the location of control information for a particular storage drive 171A-F.
  • the storage drives 171A-F may be configured to execute instructions that enable the storage drive 171 A-F to identify the location of the control information.
  • the instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive 171A-F and may cause the storage drive 171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives 171 A-F.
  • the storage drives 171 A-F may respond by sending a response message to the storage array controller 110A-D that includes the location of control information for the storage drive 171 A-F. Responsive to receiving the response message, storage array controllers 110A-D may issue a request to read data stored at the address associated with the location of control information for the storage drives 171 A-F.
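The locate-and-read exchange described in the preceding bullets can be sketched as follows. The tag value, the class, and the method names are hypothetical; the sketch only shows the drive scanning for tagged blocks and the array controller then reading control information from the reported locations.

```python
CONTROL_INFO_TAG = "ctrl"   # hypothetical identifier marking control-information blocks

class DriveStub:
    """Stand-in for a storage drive: memory blocks as (tag, payload) pairs."""
    def __init__(self, blocks):
        self.blocks = blocks

    def locate_control_info(self) -> list[int]:
        # Runs on the drive: scan each block and report tagged addresses.
        return [addr for addr, (tag, _) in enumerate(self.blocks)
                if tag == CONTROL_INFO_TAG]

def offload_device_management(drive: DriveStub) -> list[bytes]:
    """Array-controller side: ask the drive where control information lives,
    then read the data stored at those addresses."""
    locations = drive.locate_control_info()          # command plus response message
    return [drive.blocks[addr][1] for addr in locations]

drive = DriveStub([("data", b"user blocks"), (CONTROL_INFO_TAG, b"block-state map")])
print(offload_device_management(drive))              # -> [b'block-state map']
```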
  • the storage array controllers 110A-D may further offload device management responsibilities from storage drives 171A-F by performing, in response to receiving the control information, a storage drive management operation.
  • a storage drive management operation may include, for example, an operation that is typically performed by the storage drive 171A-F (e.g., the controller (not shown) associated with a particular storage drive 171A-F).
  • a storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive 171A-F, ensuring that data is written to memory blocks within the storage drive 171A-F in such a way that adequate wear leveling is achieved, and so forth.
  • storage array 102A-B may implement two or more storage array controllers 110A-D.
  • storage array 102A may include storage array controllers 110A and 110B.
  • In some implementations, a single storage array controller 110A-D (e.g., storage array controller 110A) may be designated with primary status (also referred to as "primary controller" herein), and other storage array controllers 110A-D (e.g., storage array controller 110B) may be designated with secondary status (also referred to as "secondary controller" herein).
  • the primary controller may have particular rights, such as permission to alter data in persistent storage resource 170A-B (e.g., writing data to persistent storage resource 170A-B).
  • At least some of the rights of the primary controller may supersede the rights of the secondary controller.
  • the secondary controller may not have permission to alter data in persistent storage resource 170A-B when the primary controller has the right.
  • the status of storage array controllers 110A-D may change.
  • storage array controller 110A may be designated with secondary status, and storage array controller 110B may be designated with primary status.
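The primary/secondary designation and its effect on write rights can be sketched as below; the class and function names are illustrative only, not the publication's mechanism.

```python
class ArrayController:
    """Minimal sketch of primary/secondary controller status."""
    def __init__(self, name: str, status: str = "secondary"):
        self.name = name
        self.status = status                 # "primary" or "secondary"

    def write(self, persistent_storage: list, data) -> None:
        # Only the primary controller has the right to alter persistent storage.
        if self.status != "primary":
            raise PermissionError(f"{self.name} is secondary and may not alter data")
        persistent_storage.append(data)

def swap_status(a: "ArrayController", b: "ArrayController") -> None:
    # The designation may change over time, as in the 110A/110B example above.
    a.status, b.status = b.status, a.status

controller_a = ArrayController("110A", status="primary")
controller_b = ArrayController("110B")
swap_status(controller_a, controller_b)
print(controller_a.status, controller_b.status)      # -> secondary primary
```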
  • Storage array controllers 110C and 110D may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers 110A and 110B, respectively) and storage array 102B.
  • storage array controller 110A of storage array 102A may send a write request, via SAN 158, to storage array 102B.
  • the write request may be received by both storage array controllers 110C and 110D of storage array 102B.
  • Storage array controllers 110C and 110D facilitate the communication, e.g., send the write request to the appropriate storage drive 171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers.
  • storage array controllers 110A-D are communicatively coupled, via a midplane (not shown), to one or more storage drives 171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array 102A-B.
  • the storage array controllers 110A-D may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives 171 A-F and the NVRAM devices via one or more data communications links.
  • the data communications links described herein are collectively illustrated by data communications links 108A-D and may include a Peripheral Component Interconnect Express ('PCIe') bus, for example.
  • Figure 1B illustrates an example system for data storage, in accordance with some implementations.
  • Storage array controller 101 illustrated in Figure 1B may be similar to the storage array controllers 110A-D described with respect to Figure 1A.
  • storage array controller 101 may be similar to storage array controller 110A or storage array controller 110B.
  • Storage array controller 101 includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller 101 may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements of Figure 1A may be included below to help illustrate features of storage array controller 101.
  • Storage array controller 101 may include one or more processing devices 104 and random access memory ('RAM') 111.
  • Processing device 104 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 104 (or controller 101) may be a complex instruction set computing ('CISC') microprocessor, reduced instruction set computing ('RISC') microprocessor, very long instruction word ('VLIW') microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets.
  • the processing device 104 (or controller 101) may also be one or more special-purpose processing devices such as an ASIC, an FPGA, a digital signal processor ('DSP'), network processor, or the like.
  • the processing device 104 may be connected to the RAM 111 via a data communications link 106, which may be embodied as a high speed memory bus such as a Double-Data Rate 4 ('DDR4') bus.
  • Stored in RAM 111 is an operating system 112.
  • instructions 113 are stored in RAM 111. Instructions 113 may include computer program instructions for performing operations in a direct-mapped flash storage system.
  • a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives.
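A direct-mapped address calculation of the kind described above can be illustrated with a short sketch; the geometry constants and the function name are hypothetical and only show a logical block address being mapped straight onto a drive, erase block, and offset with no drive-side translation layer.

```python
BLOCKS_PER_ERASE_BLOCK = 256      # hypothetical geometry
ERASE_BLOCKS_PER_DRIVE = 4096

def direct_map(logical_block: int) -> tuple[int, int, int]:
    """Map a logical block address onto (drive, erase block, offset) directly,
    without any address translation inside the flash drive itself."""
    blocks_per_drive = BLOCKS_PER_ERASE_BLOCK * ERASE_BLOCKS_PER_DRIVE
    drive = logical_block // blocks_per_drive
    within_drive = logical_block % blocks_per_drive
    return (drive,
            within_drive // BLOCKS_PER_ERASE_BLOCK,
            within_drive % BLOCKS_PER_ERASE_BLOCK)

print(direct_map(1_050_000))      # -> (1, 5, 144)
```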
  • storage array controller 101 includes one or more host bus adapters 103A-C that are coupled to the processing device 104 via a data communications link 105A-C.
  • host bus adapters 103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other network and storage arrays.
  • host bus adapters 103A-C may be a Fibre Channel adapter that enables the storage array controller 101 to connect to a SAN, an Ethernet adapter that enables the storage array controller 101 to connect to a LAN, or the like.
  • Host bus adapters 103A-C may be coupled to the processing device 104 via a data communications link 105A-C such as, for example, a PCIe bus.
  • storage array controller 101 may include a host bus adapter 114 that is coupled to an expander 115.
  • the expander 115 may be used to attach a host system to a larger number of storage drives.
  • the expander 115 may, for example, be a SAS expander utilized to enable the host bus adapter 114 to attach to storage drives in an implementation where the host bus adapter 114 is embodied as a SAS controller.
  • storage array controller 101 may include a switch 116 coupled to the processing device 104 via a data communications link 109.
  • the switch 116 may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint.
  • the switch 116 may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link 109) and presents multiple PCIe connection points to the midplane.
  • storage array controller 101 includes a data communications link 107 for coupling the storage array controller 101 to other storage array controllers.
  • data communications link 107 may be a QuickPath Interconnect (QPI) interconnect.
  • storage drive 171A-F may be one or more zoned storage devices.
  • the one or more zoned storage devices may be a shingled HDD.
  • the one or more storage devices may be a flash-based SSD.
  • a zoned namespace on the zoned storage device can be addressed by groups of blocks that are grouped and aligned by a natural size, forming a number of addressable zones.
  • the natural size may be based on the erase block size of the SSD.
  • the zones of the zoned storage device may be defined during initialization of the zoned storage device. In implementations, the zones may be defined dynamically as data is written to the zoned storage device.
  • zones may be heterogeneous, with some zones each being a page group and other zones being multiple page groups.
  • some zones may correspond to an erase block and other zones may correspond to multiple erase blocks.
  • zones may be any combination of differing numbers of pages in page groups and/or erase blocks, for heterogeneous mixes of programming modes, manufacturers, product types and/or product generations of storage devices, as applied to heterogeneous assemblies, upgrades, distributed storages, etc.
  • zones may be defined as having usage characteristics, such as a property of supporting data with particular kinds of longevity (very short lived or very long lived, for example). These properties could be used by a zoned storage device to determine how the zone will be managed over the zone’s expected lifetime.
  • a zone is a virtual construct. Any particular zone may not have a fixed location at a storage device. Until allocated, a zone may not have any location at a storage device.
  • a zone may correspond to a number representing a chunk of virtually allocatable space that is the size of an erase block or other block size in various implementations.
  • zones get allocated to flash or other solid-state storage memory and, as the system writes to the zone, pages are written to that mapped flash or other solid-state storage memory of the zoned storage device.
  • the system closes the zone, the associated erase block(s) or other sized block(s) are completed.
  • the system may delete a zone which will free up the zone's allocated space.
  • a zone may be moved around to different locations of the zoned storage device, e.g., as the zoned storage device does internal maintenance.
  • the zones of the zoned storage device may be in different states.
  • a zone may be in an empty state in which data has not been stored at the zone.
  • An empty zone may be opened explicitly, or implicitly by writing data to the zone. This is the initial state for zones on a fresh zoned storage device, but may also be the result of a zone reset.
  • an empty zone may have a designated location within the flash memory of the zoned storage device.
  • the location of the empty zone may be chosen when the zone is first opened or first written to (or later if writes are buffered into memory).
  • a zone may be in an open state either implicitly or explicitly, where a zone that is in an open state may be written to store data with write or append commands.
  • a zone may be in a full state either after writes have written data to the entirety of the zone or as a result of a zone finish operation. Prior to a finish operation, a zone may or may not have been completely written. After a finish operation, however, the zone may not be opened or written to further without first performing a zone reset operation.
  • the mapping from a zone to an erase block may be arbitrary, dynamic, and hidden from view.
  • the process of opening a zone may be an operation that allows a new zone to be dynamically mapped to underlying storage of the zoned storage device, and then allows data to be written through appending writes into the zone until the zone reaches capacity.
  • the zone can be finished at any point, after which further data may not be written into the zone.
  • the zone can be reset which effectively deletes the zone's content from the zoned storage device, making the physical storage held by that zone available for the subsequent storage of data.
  • the zoned storage device ensures that the data stored at the zone is not lost until the zone is reset.
  • the zone may be moved around between shingle tracks or erase blocks as part of maintenance operations within the zoned storage device, such as by copying data to keep the data refreshed or to handle memory cell aging in an SSD.
  • the resetting of the zone may allow the shingle tracks to be allocated to a new, opened zone that may be opened at some point in the future.
  • the resetting of the zone may cause the associated physical erase block(s) of the zone to be erased and subsequently reused for the storage of data.
  • the zoned storage device may have a limit on the number of open zones at a point in time to reduce the amount of overhead dedicated to keeping zones open.
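The zone life cycle described in the bullets above (empty, open, full, with finish and reset, and a cap on simultaneously open zones) can be summarized with a toy state machine; the class name, the state labels, and the open-zone limit value are illustrative assumptions, not taken from the publication.

```python
class ZonedDevice:
    """Toy zone life cycle: empty -> open -> full, plus finish and reset."""
    def __init__(self, zone_count: int, max_open: int = 8):
        self.state = ["empty"] * zone_count
        self.max_open = max_open

    def open(self, z: int) -> None:
        if self.state[z] != "empty":
            raise ValueError("only an empty zone can be opened")
        if self.state.count("open") >= self.max_open:
            raise RuntimeError("open-zone limit reached")
        self.state[z] = "open"              # explicit open

    def write(self, z: int, fills_zone: bool = False) -> None:
        if self.state[z] == "empty":
            self.open(z)                    # implicit open on first write
        if self.state[z] != "open":
            raise ValueError("zone must be open to accept writes")
        if fills_zone:
            self.state[z] = "full"          # written to capacity

    def finish(self, z: int) -> None:
        self.state[z] = "full"              # no further writes until a reset

    def reset(self, z: int) -> None:
        self.state[z] = "empty"             # contents deleted, space reusable

device = ZonedDevice(zone_count=4, max_open=2)
device.write(0)                             # implicit open
device.finish(0)                            # zone 0 is now full
device.reset(0)                             # zone 0 is empty again
print(device.state)                         # -> ['empty', 'empty', 'empty', 'empty']
```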
  • the operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system.
  • the allocation units may be entire erase blocks or multiple erase blocks.
  • the operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system.
  • Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data.
  • the operations may be performed on one or more allocation units that include a first data and a second data where the first data is to be retained and the second data is no longer being used by the flash storage system.
  • the operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data.
  • the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives.
  • Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system as unnecessary or redundant write operations are not being performed during the process.
  • One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system.
  • the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive.
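The operating-system-level reclaim process described above can be sketched as follows; the class and function names are hypothetical, and the sketch only shows live data being rewritten into another allocation unit before the original unit is erased and marked available, with no work delegated to the drive's own controller.

```python
class AllocationUnit:
    """One or more erase blocks tracked by the flash storage system's OS."""
    def __init__(self, blocks=None):
        self.blocks = list(blocks or [])    # (address, data, live) tuples
        self.available = False

def reclaim(unit: AllocationUnit, open_unit: AllocationUnit) -> None:
    """Copy retained ("first") data elsewhere, erase the unit, mark it available."""
    for address, data, live in unit.blocks:
        if live:                            # first data is rewritten and remapped
            open_unit.blocks.append((address, data, True))
    unit.blocks.clear()                     # erasing drops the stale "second" data
    unit.available = True                   # reusable for subsequent writes

unit = AllocationUnit([(0, b"keep", True), (1, b"stale", False)])
fresh = AllocationUnit()
reclaim(unit, fresh)
print(fresh.blocks, unit.available)         # -> [(0, b'keep', True)] True
```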
  • a storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection.
  • FIG. 1C illustrates a third example system 117 for data storage in accordance with some implementations.
  • System 117 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 117 may include the same, more, or fewer elements configured in the same or different manner in other implementations.
  • system 117 includes a dual Peripheral Component Interconnect ('PCI') flash storage device 118 with separately addressable fast write storage.
  • System 117 may include a storage device controller 119.
  • storage device controller 119A-D may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure.
  • system 117 includes flash memory devices (e.g., including flash memory devices 120a-n), operatively coupled to various channels of the storage device controller 119.
  • Flash memory devices 120a-n may be presented to the controller 119A-D as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller 119A-D to program and retrieve various aspects of the Flash.
  • storage device controller 119A-D may perform operations on flash memory devices 120a-n including storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc.
  • system 117 may include RAM 121 to store separately addressable fast-write data.
  • RAM 121 may be one or more separate discrete devices.
  • RAM 121 may be integrated into storage device controller 119A-D or multiple storage device controllers.
  • the RAM 121 may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller 119.
  • system 117 may include a stored energy device 122, such as a rechargeable battery or a capacitor.
  • Stored energy device 122 may store energy sufficient to power the storage device controller 119, some amount of the RAM (e.g., RAM 121), and some amount of Flash memory (e.g., Flash memory 120a-120n) for sufficient time to write the contents of RAM to Flash memory.
  • storage device controller 119A-D may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power.
  • system 117 includes two data communications links 123a, 123b.
  • data communications links 123a, 123b may be PCI interfaces.
  • data communications links 123a, 123b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.).
  • Data communications links 123a, 123b may be based on non-volatile memory express ('NVMe') or NVMe over fabrics ('NVMf') specifications that allow external connection to the storage device controller 119A-D from other components in the storage system 117.
  • System 117 may also include an external power source (not shown), which may be provided over one or both data communications links 123a, 123b, or which may be provided separately.
  • An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM 121.
  • the storage device controller 119A-D may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device 118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM 121. On power failure, the storage device controller 119A-D may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory 120a-n) for long-term persistent storage.
  • the logical device may include some presentation of some or all of the content of the Flash memory devices 120a-n, where that presentation allows a storage system including a storage device 118 (e.g., storage system 117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus.
  • the presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc.
  • Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device 122 to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy.
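The capacity derating described above amounts to simple arithmetic, sketched below with hypothetical numbers: if the stored energy device can currently deliver a known amount of energy, and destaging one megabyte from RAM to Flash costs a known amount, the advertised fast-write capacity is capped so the destage always fits within the available energy, minus a safety margin.

```python
def safe_fast_write_capacity_mb(available_joules: float,
                                joules_per_mb: float,
                                safety_margin: float = 0.2) -> float:
    """Cap addressable fast-write capacity so the remaining stored energy can
    always destage it to Flash (all figures here are hypothetical)."""
    usable_energy = available_joules * (1.0 - safety_margin)
    return usable_energy / joules_per_mb

# An aged capacitor bank measured at 60 J, assuming 0.05 J per MB destaged:
print(safe_fast_write_capacity_mb(60.0, 0.05))    # -> 960.0 MB
```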
  • Figure 1D illustrates a fourth example storage system 124 for data storage in accordance with some implementations.
  • storage system 124 includes storage controllers 125a, 125b.
  • storage controllers 125a, 125b are operatively coupled to Dual PCI storage devices.
  • Storage controllers 125a, 125b may be operatively coupled (e.g., via a storage network 130) to some number of host computers 127a-n.
  • two storage controllers provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc.
  • the storage controllers 125a, 125b may provide services through some number of network interfaces (e.g., 126a-d) to host computers 127a-n outside of the storage system 124.
  • Storage controllers 125a, 125b may provide integrated services or an application entirely within the storage system 124, forming a converged storage and compute system.
  • the storage controllers 125a, 125b may utilize the fast write memory within or across storage devices 119a-d to journal in progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system 124.
  • storage controllers 125a, 125b operate as PCI masters to one or the other PCI buses 128a, 128b.
  • 128a and 128b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.).
  • Other storage system embodiments may operate storage controllers 125a, 125b as multi-masters for both PCI buses 128a, 128b.
  • a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers.
  • Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers.
  • a storage device controller 119a may be operable under direction from a storage controller 125a to synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM 121 of Figure 1C).
  • a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to ensure improved safety of the data or to release addressable fast-write capacity for reuse.
  • This mechanism may be used, for example, to avoid a second transfer over a bus (e.g., 128a, 128b) from the storage controllers 125a, 125b.
  • a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc.
  • a storage device controller 119a, 119b may be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM 121 of Figure 1C) without involvement of the storage controllers 125a, 125b.
  • This operation may be used to mirror data stored in one storage controller 125a to another storage controller 125b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface 129a, 129b to the PCI bus 128a, 128b.
  • a storage device controller 119A-D may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device 118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly.
  • a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself.
  • Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time).
  • the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices.
  • the storage controllers 125a, 125b may initiate the use of erase blocks within and across storage devices (e.g., 118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics.
  • the storage controllers 125a, 125b may initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance.
  • the storage system 124 may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination.
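As a hedged illustration of the simplest erasure code mentioned above, the sketch below builds a single XOR redundancy share over equally sized data shards and rebuilds one missing shard from the survivors; the schemes described in the text combine wider codes and mirroring within and across devices.

```python
from functools import reduce

def xor_parity(shards: list[bytes]) -> bytes:
    """Single redundancy share over equally sized data shards."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*shards))

def recover_missing(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild the one missing shard from the survivors plus the parity share."""
    return xor_parity(surviving + [parity])

data = [b"abcd", b"efgh", b"ijkl"]            # three data shards
parity = xor_parity(data)
print(recover_missing([data[0], data[2]], parity))   # -> b'efgh'
```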
  • FIG. 2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster.
  • the storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata.
  • Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations.
  • Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non- solid state memory.
  • Control of storage locations and workloads are distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture described in more detail below allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations.
  • a storage node may be referred to as a cluster node, a blade, or a server.
  • Each storage node 150 can have multiple components.
  • the storage node 150 includes a printed circuit board 159 populated by a CPU 156, i.e., processor, a memory 154 coupled to the CPU 156, and a non-volatile solid state storage 152 coupled to the CPU 156, although other mountings and/or components could be used in further embodiments.
  • the memory 154 has instructions which are executed by the CPU 156 and/or data operated on by the CPU 156.
  • the nonvolatile solid state storage 152 includes flash or, in further embodiments, other types of solid- state memory.
  • a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow.
  • Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components.
  • the cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained.
  • the storage resources 308 depicted in Figure 3B may also include racetrack memory (also referred to as domain-wall memory).
  • racetrack memory may be embodied as a form of non-volatile, solid-state memory that relies on the intrinsic strength and orientation of the magnetic field created by an electron as it spins in addition to its electronic charge, in solid-state devices.
  • By using spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire, the domains may pass by magnetic read/write heads positioned near the wire as current is passed through the wire, which alter the domains to record patterns of bits.
  • many such wires and read/write elements may be packaged together.
  • the example storage system 306 depicted in Figure 3B may leverage the storage resources described above in a variety of different ways. For example, some portion of the storage resources may be utilized to serve as a write cache, storage resources within the storage system may be utilized as a read cache, or tiering may be achieved within the storage systems by placing data within the storage system in accordance with one or more tiering policies.
  • the software resources 314 may also include software that is useful in implementing software-defined storage ('SDS').
  • the software resources 314 may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware.
  • Such software resources 314 may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware.
  • Figure 3C sets forth an example of a cloud-based storage system 318 in accordance with some embodiments of the present disclosure.
  • the cloud-based storage system 318 is created entirely in a cloud computing environment 316 such as, for example, Amazon Web Services ('AWS')™, Microsoft Azure™, Google Cloud Platform™, IBM Cloud™, Oracle Cloud™, and others.
  • the cloud-based storage system 318 may be used to provide services similar to the services that may be provided by the storage systems described above.
  • the cloud-based storage system 318 depicted in Figure 3C includes two cloud computing instances 320, 322 that each are used to support the execution of a storage controller application 324, 326.
  • the cloud computing instances 320, 322 may be embodied, for example, as instances of cloud computing resources (e.g., virtual machines) that may be provided by the cloud computing environment 316 to support the execution of software applications such as the storage controller application 324, 326.
  • each of the cloud computing instances 320, 322 may execute on an Azure VM, where each Azure VM may include high speed temporary storage that may be leveraged as a cache (e.g., as a read cache).
  • Of the cloud computing instances 320, 322 that each include the storage controller application 324, 326, one cloud computing instance 320 may operate as the primary controller as described above while the other cloud computing instance 322 may operate as the secondary controller as described above.
  • the storage controller application 324, 326 depicted in Figure 3C may include identical source code that is executed within different cloud computing instances 320, 322 such as distinct EC2 instances.
  • Readers will appreciate that other embodiments that do not include a primary and secondary controller are within the scope of the present disclosure.
  • the cloud-based storage system 318 depicted in Figure 3C includes cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338.
  • the cloud computing instances 340a, 340b, 340n may be embodied, for example, as instances of cloud computing resources that may be provided by the cloud computing environment 316 to support the execution of software applications.
  • the cloud computing instances 340a, 340b, 340n of Figure 3C may differ from the cloud computing instances 320, 322 described above as the cloud computing instances 340a, 340b, 340n of Figure 3C have local storage 330, 334, 338 resources whereas the cloud computing instances 320, 322 that support the execution of the storage controller application 324, 326 need not have local storage resources.
  • each of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 can include a software daemon 328, 332, 336 that, when executed by a cloud computing instance 340a, 340b, 340n can present itself to the storage controller applications 324, 326 as if the cloud computing instance 340a, 340b, 340n were a physical storage device (e.g., one or more SSDs).
  • the software daemon 328, 332, 336 may include computer program instructions similar to those that would normally be contained on a storage device such that the storage controller applications 324, 326 can send and receive the same commands that a storage controller would send to storage devices.
  • the storage controller applications 324, 326 may include code that is identical to (or substantially identical to) the code that would be executed by the controllers in the storage systems described above.
  • communications between the storage controller applications 324, 326 and the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 may utilize iSCSI, NVMe over TCP, messaging, a custom protocol, or in some other mechanism.
  • In some embodiments, block storage 342, 344, 346 that is offered by the cloud computing environment 316 may be used as NVRAM.
  • actual RAM on each of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 may be used as NVRAM, thereby decreasing network utilization costs that would be associated with using an EBS volume as the NVRAM.
  • high performance block storage resources such as one or more Azure Ultra Disks may be utilized as the NVRAM.
  • the local storage 330, 334, 338 resources and the block storage 342, 344, 346 resources that are utilized by the cloud computing instances 340a, 340b, 340n may support block-level access, whereas the cloud-based object storage 348 that is attached to the particular cloud computing instance 340a, 340b, 340n supports only object-based access.
  • the software daemon 328, 332, 336 may therefore be configured to take blocks of data, package those blocks into objects, and write the objects to the cloud-based object storage 348 that is attached to the particular cloud computing instance 340a, 340b, 340n.
  • writing the data to the local storage 330, 334, 338 resources and the block storage 342, 344, 346 resources that are utilized by the cloud computing instances 340a, 340b, 340n is relatively straightforward as 5 blocks that are 1 MB in size are written to the local storage 330, 334, 338 resources and the block storage 342, 344, 346 resources that are utilized by the cloud computing instances 340a, 340b, 340n.
  • the software daemon 328, 332, 336 may also be configured to create five objects containing distinct 1 MB chunks of the data.
  • each object that is written to the cloud-based object storage 348 may be identical (or nearly identical) in size.
  • metadata that is associated with the data itself may be included in each object (e.g., the first 1 MB of the object is data and the remaining portion is metadata associated with the data).
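The block-to-object packaging described above can be sketched as follows. The chunk size matches the 1 MB example in the text; the function name, the key format, and the in-memory stand-in for the object store are assumptions for illustration.

```python
CHUNK = 1 << 20    # 1 MB, matching the example above

def write_as_objects(data: bytes, put_object, metadata: bytes = b"") -> list[str]:
    """Split block data into 1 MB chunks and store each chunk as one object,
    appending any per-object metadata after the data."""
    keys = []
    for offset in range(0, len(data), CHUNK):
        key = f"chunk-{offset // CHUNK:08d}"
        put_object(key, data[offset:offset + CHUNK] + metadata)
        keys.append(key)
    return keys

# Example with a dictionary standing in for the cloud-based object store:
object_store = {}
write_as_objects(bytes(5 * CHUNK), object_store.__setitem__)
print(len(object_store))    # -> 5 roughly equal-sized objects
```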
  • the cloud-based object storage 348 may be incorporated into the cloud-based storage system 318 to increase the durability of the cloud-based storage system 318.
  • all data that is stored by the cloud-based storage system 318 may be stored in both: 1) the cloud-based object storage 348, and 2) at least one of the local storage 330, 334, 338 resources or block storage 342, 344, 346 resources that are utilized by the cloud computing instances 340a, 340b, 340n.
  • the local storage 330, 334, 338 resources and block storage 342, 344, 346 resources that are utilized by the cloud computing instances 340a, 340b, 340n may effectively operate as cache that generally includes all data that is also stored in S3, such that all reads of data may be serviced by the cloud computing instances 340a, 340b, 340n without requiring the cloud computing instances 340a, 340b, 340n to access the cloud-based object storage 348.
  • all data that is stored by the cloud-based storage system 318 may be stored in the cloud-based object storage 348, but less than all data that is stored by the cloud-based storage system 318 may be stored in at least one of the local storage 330, 334, 338 resources or block storage 342, 344, 346 resources that are utilized by the cloud computing instances 340a, 340b, 340n.
  • various policies may be utilized to determine which subset of the data that is stored by the cloud-based storage system 318 should reside in both: 1) the cloud-based object storage 348, and 2) at least one of the local storage 330, 334, 338 resources or block storage 342, 344, 346 resources that are utilized by the cloud computing instances 340a, 340b, 340n.
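  • One way such a policy could be expressed is as a simple predicate over per-segment attributes, sketched below; the Segment fields, the thresholds, and the notion of pinning are hypothetical illustrations rather than policies named in the text.

```python
from dataclasses import dataclass
import time

@dataclass
class Segment:
    key: str            # object key in the cloud-based object storage
    size: int           # bytes
    last_access: float  # epoch seconds
    pinned: bool = False

def should_keep_in_cache(seg: Segment, max_idle_s: float = 3600.0,
                         max_cached_size: int = 64 << 20) -> bool:
    """Return True if this segment should also reside in local/block storage.

    Everything always lives in object storage; this predicate only decides
    membership in the faster tier. Thresholds are illustrative defaults.
    """
    if seg.pinned:
        return True
    recently_used = (time.time() - seg.last_access) < max_idle_s
    return recently_used and seg.size <= max_cached_size
```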
  • One or more modules of computer program instructions that are executing within the cloud-based storage system 318 may be designed to handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338.
  • the monitoring module may handle the failure of one or more of the cloud computing instances 340a, 340b, 340n with local storage 330, 334, 338 by creating one or more new cloud computing instances with local storage, retrieving data that was stored on the failed cloud computing instances 340a, 340b, 340n from the cloud-based object storage 348, and storing the data retrieved from the cloud-based object storage 348 in local storage on the newly created cloud computing instances. Readers will appreciate that many variants of this process may be implemented.
  • such a monitoring module may be executing in an EC2 instance.
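  • The recovery flow described in the bullets above can be summarized in the sketch below; the compute and object-store clients and their call names are placeholders, not a specific provider API.

```python
def recover_failed_instance(failed_instance_id, compute, object_store):
    """Replace a failed instance-with-local-storage and rehydrate its data.

    `compute` and `object_store` are placeholders for the cloud provider's
    compute and object-storage clients; the call names are illustrative.
    """
    # 1. Launch a replacement instance of the same type, with local storage.
    new_instance = compute.launch_instance(instance_type="storage-optimized")

    # 2. Find the objects that held copies of the failed instance's data.
    keys = object_store.list_keys(prefix=f"instances/{failed_instance_id}/")

    # 3. Stream each object back down into the new instance's local storage.
    for key in keys:
        payload = object_store.get(key)
        new_instance.write_local(key.rsplit("/", 1)[-1], payload)

    return new_instance
```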
  • the cloud-based storage system 318 can be scaled-up or scaled-out as needed.
  • a monitoring module may create a new, more powerful cloud computing instance (e.g., a cloud computing instance of a type that includes more processing power, more memory, etc.).
  • the monitoring module may create a new, less powerful (and less expensive) cloud computing instance that includes the storage controller application such that the new, less powerful cloud computing instance can begin operating as the primary controller.
  • the storage systems described above may carry out intelligent data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe.
  • the storage systems described above may be configured to examine each backup to avoid restoring the storage system to an undesirable state.
  • the storage system may include software resources 314 that can scan each backup to identify backups that were captured before the malware infected the storage system and those backups that were captured after the malware infected the storage system.
  • the storage system may restore itself from a backup that does not include the malware - or at least not restore the portions of a backup that contained the malware.
  • the backups may also be utilized to perform rapid recovery of the storage system.
  • software resources 314 within the storage system may be configured to detect the presence of ransomware and may be further configured to restore the storage system to a point-in-time, using the retained backups, prior to the point-in-time at which the ransomware infected the storage system.
  • ransomware may be explicitly detected through the use of software tools utilized by the system, through the use of a key (e.g., a USB drive) that is inserted into the storage system, or in a similar way.
  • the presence of ransomware may be inferred in response to system activity meeting a predetermined fingerprint such as, for example, no reads or writes coming into the system for a predetermined period of time.
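  • A minimal sketch of the inference heuristic named above (no reads or writes arriving for a predetermined period) is shown below; the threshold value and the callback are illustrative assumptions.

```python
import time

class ActivityMonitor:
    """Infer possible ransomware activity from an I/O 'fingerprint'.

    The only heuristic shown is the one named in the text: no reads or
    writes arriving for a predetermined period of time.
    """
    def __init__(self, quiet_threshold_s=600, on_suspect=None):
        self.quiet_threshold_s = quiet_threshold_s
        self.on_suspect = on_suspect or (lambda: None)
        self.last_io = time.time()

    def record_io(self):
        # Called on every read or write serviced by the system.
        self.last_io = time.time()

    def check(self):
        # Called periodically, e.g., from a background timer.
        if time.time() - self.last_io > self.quiet_threshold_s:
            self.on_suspect()  # e.g., alert, snapshot, or pause replication
```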
  • converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes.
  • Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways.
  • the storage system 306 may be useful in supporting artificial intelligence ('AI') applications, database applications, XOps projects (e.g., DevOps projects, DataOps projects, MLOps projects, ModelOps projects, PlatformOps projects), electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems ('PACS') applications, software development applications, virtual reality applications, augmented reality applications, and many other types of applications by providing storage resources to such applications.
  • the storage systems may be well suited to support applications that are resource intensive such as, for example, AI applications.
  • AI applications may be deployed in a variety of fields, including: predictive maintenance in manufacturing and related fields, healthcare applications such as patient data & risk analytics, retail and marketing deployments (e.g., search advertising, social media advertising), supply chain solutions, fintech solutions such as business analytics & reporting tools, operational deployments such as real-time analytics tools, application performance management tools, IT infrastructure management tools, and many others.
  • Such AI applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal.
  • Examples of such AI applications can include IBM WatsonTM, Microsoft OxfordTM, Google DeepMindTM, Baidu MinwaTM, and others.
  • the storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications.
  • Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed.
  • One particular area of machine learning is referred to as reinforcement learning, which involves taking suitable actions to maximize reward in a particular situation.
  • the storage systems described above may also include graphics processing units ('GPUs'), occasionally referred to as visual processing units ('VPUs').
  • Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.
  • Such GPUs may be included within any of the computing devices that are part of the storage systems described above, including as one of many individually scalable components of a storage system, where other examples of individually scalable components of such storage system can include storage components, memory components, compute components (e.g., CPUs, FPGAs, ASICs), networking components, software components, and others.
  • the storage systems described above may also include neural network processors ('NNPs') for use in various aspects of neural network processing.
  • Such NNPs may be used in place of (or in addition to) GPUs and may also be independently scalable.
  • the storage systems described herein may be configured to support artificial intelligence applications, machine learning applications, big data analytics applications, and many other types of applications.
  • the rapid growth in these sorts of applications is being driven by three technologies: deep learning (DL), GPU processors, and Big Data.
  • Deep learning is a computing model that makes use of massively parallel neural networks inspired by the human brain. Instead of experts handcrafting software, a deep learning model writes its own software by learning from lots of examples.
  • Such GPUs may include thousands of cores that are well-suited to run algorithms that loosely represent the parallel nature of the human brain.
  • Applications of such techniques may include: machine and vehicular object detection, identification and avoidance; visual recognition, classification and tagging; algorithmic financial trading strategy performance management; simultaneous localization and mapping; predictive maintenance of high-value machinery; prevention against cyber security threats, expertise automation; image recognition and classification; question answering; robotics; text analytics (extraction, classification) and text generation and translation; and many others.
  • The use of AI techniques has materialized in a wide array of products including, for example, Amazon Echo's speech recognition technology that allows users to talk to their machines, Google TranslateTM which allows for machine-based language translation, Spotify's Discover Weekly that provides recommendations on new songs and artists that a user may like based on the user's usage and traffic analysis, Quill's text generation offering that takes structured data and turns it into narrative stories, Chatbots that provide real-time, contextually specific answers to questions in a dialog format, and many others.
  • Data samples may undergo a series of processing steps including, but not limited to: 1) ingesting the data from an external source into the training system and storing the data in raw form, 2) cleaning and transforming the data in a format convenient for training, including linking data samples to the appropriate label, 3) exploring parameters and models, quickly testing with a smaller dataset, and iterating to converge on the most promising models to push into the production cluster, 4) executing training phases to select random batches of input data, including both new and older samples, and feeding those into production GPU servers for computation to update model parameters, and 5) evaluating including using a holdback portion of the data not used in training in order to evaluate model accuracy on the holdout data.
  • This lifecycle may apply for any type of parallelized machine learning, not just neural networks or deep learning.
  • each stage in the AI data pipeline may have varying requirements from the data hub (e.g., the storage system or collection of storage systems).
  • Scale-out storage systems must deliver uncompromising performance for all manner of access types and patterns - from small, metadata-heavy to large files, from random to sequential access patterns, and from low to high concurrency.
  • the storage systems described above may serve as an ideal AI data hub as the systems may service unstructured workloads.
  • data is ideally ingested and stored on to the same data hub that following stages will use, in order to avoid excess data copying.
  • the next two steps can be done on a standard compute server that optionally includes a GPU, and then in the fourth and last stage, full training production jobs are run on powerful GPU-accelerated servers.
  • the GPU-accelerated servers can be used independently for different models or joined together to train on one larger model, even spanning multiple systems for distributed training. If the shared storage tier is slow, then data must be copied to local storage for each phase, resulting in wasted time staging data onto different servers.
  • the ideal data hub for the AI training pipeline delivers performance similar to data stored locally on the server node while also having the simplicity and performance to enable all pipeline stages to operate concurrently.
  • the storage systems may be configured to provide DMA between storage devices that are included in the storage systems and one or more GPUs that are used in an AI or big data analytics pipeline.
  • the one or more GPUs may be coupled to the storage system, for example, via NVMe-over-Fabrics ('NVMe-oF') such that bottlenecks such as the host CPU can be bypassed and the storage system (or one of the components contained therein) can directly access GPU memory.
  • the storage systems may leverage API hooks to the GPUs to transfer data directly to the GPUs.
  • the GPUs may be embodied as NvidiaTM GPUs and the storage systems may support GPUDirect Storage ('GDS') software, or have similar proprietary software, that enables the storage system to transfer data to the GPUs via RDMA or similar mechanism.
  • Neuromorphic computing is a form of computing that mimics brain cells.
  • an architecture of interconnected "neurons" replaces traditional computing models with low-powered signals that go directly between neurons for more efficient computation.
  • Neuromorphic computing may make use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system, as well as analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems for perception, motor control, or multisensory integration.
  • the storage systems described above may be configured to support the storage or use of (among other types of data) blockchains and derivative items such as, for example, open source blockchains and related tools that are part of the IBMTM Hyperledger project, permissioned blockchains in which a certain number of trusted parties are allowed to access the blockchain, blockchain products that enable developers to build their own distributed ledger projects, and others.
  • Blockchains and the storage systems described herein may be leveraged to support on-chain storage of data as well as off-chain storage of data.
  • Off-chain storage of data can be implemented in a variety of ways and can occur when the data itself is not stored within the blockchain.
  • a hash function may be utilized and the data itself may be fed into the hash function to generate a hash value.
  • the hashes of large pieces of data may be embedded within transactions, instead of the data itself.
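  • A minimal sketch of this off-chain pattern is shown below, assuming a generic object store and blockchain client (placeholder names); only the hashing step uses a concrete library call.

```python
import hashlib

def anchor_off_chain(data: bytes, off_chain_store, chain) -> str:
    """Store data off-chain and record only its hash on-chain.

    `off_chain_store` and `chain` are placeholders for an object store and
    a blockchain client; only the SHA-256 hashing step is concrete.
    """
    digest = hashlib.sha256(data).hexdigest()
    off_chain_store.put(digest, data)                 # bulk data lives off-chain
    chain.submit_transaction({"data_hash": digest})   # only the hash goes on-chain
    return digest

def verify_off_chain(digest: str, off_chain_store) -> bool:
    # Recompute the hash of the retrieved data and compare with the
    # on-chain record to detect tampering.
    return hashlib.sha256(off_chain_store.get(digest)).hexdigest() == digest
```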
  • alternatives to blockchains may be used to facilitate the decentralized storage of information.
  • one such alternative that may be used is a blockweave. While conventional blockchains store every transaction to achieve validation, a blockweave permits secure decentralization without the usage of the entire chain, thereby enabling low cost on-chain storage of data.
  • Such blockweaves may utilize a consensus mechanism that is based on proof of access (PoA) and proof of work (PoW).
  • the storage systems described above may, either alone or in combination with other computing devices, be used to support in-memory computing applications.
  • In-memory computing involves the storage of information in RAM that is distributed across a cluster of computers. Readers will appreciate that the storage systems described above, especially those that are configurable with customizable amounts of processing resources, storage resources, and memory resources (e.g., those systems in which blades contain configurable amounts of each type of resource), may be configured in a way so as to provide an infrastructure that can support in-memory computing.
  • the storage systems described above may include component parts (e.g., NVDIMMs, 3D crosspoint storage that provide fast random access memory that is persistent) that can actually provide for an improved in-memory computing environment as compared to in-memory computing environments that rely on RAM distributed across dedicated servers.
  • the storage systems described above may be configured to operate as a hybrid in-memory computing environment that includes a universal interface to all storage media (e.g., RAM, flash storage, 3D crosspoint storage).
  • users may have no knowledge regarding the details of where their data is stored but they can still use the same full, unified API to address data.
  • the storage system may (in the background) move data to the fastest layer available - including intelligently placing the data in dependence upon various characteristics of the data or in dependence upon some other heuristic.
  • the storage systems may even make use of existing products such as Apache Ignite and GridGain to move data between the various storage layers, or the storage systems may make use of custom software to move data between the various storage layers.
  • the storage systems described herein may implement various optimizations to improve the performance of in-memory computing such as, for example, having computations occur as close to the data as possible.
  • the storage systems described above may be paired with other resources to support the applications described above.
  • one infrastructure could include primary compute in the form of servers and workstations which specialize in using General-purpose computing on graphics processing units ('GPGPU') to accelerate deep learning applications that are interconnected into a computation engine to train parameters for deep neural networks.
  • Each system may have Ethernet external connectivity, InfiniBand external connectivity, some other form of external connectivity, or some combination thereof.
  • the GPUs can be grouped for a single large training or used independently to train multiple models.
  • the infrastructure could also include a storage system such as those described above to provide, for example, a scale-out all-flash file or object store through which data can be accessed via high-performance protocols such as NFS, S3, and so on.
  • the infrastructure can also include, for example, redundant top-of-rack Ethernet switches connected to storage and compute via ports in MLAG port channels for redundancy.
  • the infrastructure could also include additional compute in the form of whitebox servers, optionally with GPUs, for data ingestion, pre-processing, and model debugging. Readers will appreciate that additional infrastructures are also possible.
  • the storage systems described above may be configured to support other AI related tools.
  • the storage systems may make use of tools like ONNX or other open neural network exchange formats that make it easier to transfer models written in different AI frameworks.
  • the storage systems may be configured to support tools like Amazon's Gluon that allow developers to prototype, build, and train deep learning models.
  • the storage systems described above may be part of a larger platform, such as IBMTM Cloud Private for Data, that includes integrated data science, data engineering and application building services.
  • the storage systems described above may also be deployed as an edge solution.
  • Such an edge solution may be in place to optimize cloud computing systems by performing data processing at the edge of the network, near the source of the data.
  • Edge computing can push applications, data and computing power (i.e., services) away from centralized points to the logical extremes of a network.
  • computational tasks may be performed using the compute resources provided by such storage systems, data may be stored using the storage resources of the storage system, and cloud-based services may be accessed through the use of various resources of the storage system (including networking resources).
  • While many tasks may benefit from the utilization of an edge solution, some particular uses may be especially suited for deployment in such an environment. For example, devices like drones, autonomous cars, robots, and others may require extremely rapid processing - so fast, in fact, that sending data up to a cloud environment and back to receive data processing support may simply be too slow. As an additional example, some IoT devices such as connected video cameras may not be well-suited for the utilization of cloud-based resources as it may be impractical (not only from a privacy perspective, security perspective, or a financial perspective) to send the data to the cloud simply because of the pure volume of data that is involved. As such, many tasks that rely on data processing, storage, or communications may be better suited by platforms that include edge solutions such as the storage systems described above.
  • the storage systems described above may, alone or in combination with other computing resources, serve as a network edge platform that combines compute resources, storage resources, networking resources, cloud technologies and network virtualization technologies, and so on.
  • the edge may take on characteristics similar to other network facilities, from the customer premise and backhaul aggregation facilities to Points of Presence (PoPs) and regional data centers. Readers will appreciate that network workloads, such as Virtual Network Functions (VNFs) and others, will reside on the network edge platform. Enabled by a combination of containers and virtual machines, the network edge platform may rely on controllers and schedulers that are no longer geographically colocated with the data processing resources.
  • network functions may split into control planes, user and data planes, or even state machines, allowing for independent optimization and scaling techniques to be applied.
  • user and data planes may be enabled through increased accelerators, both those residing in server platforms, such as FPGAs and Smart NICs, and through SDN-enabled merchant silicon and programmable ASICs.
  • the storage systems described above may also support (including implementing as a system interface) applications that perform tasks in response to human speech.
  • the storage systems may support the execution of intelligent personal assistant applications such as, for example, Amazon's AlexaTM, Apple SiriTM, Google VoiceTM, Samsung BixbyTM, Microsoft CortanaTM, and others.
  • While the examples described in the previous sentence make use of voice as input, the storage systems described above may also support chatbots, talkbots, chatterbots, or artificial conversational entities or other applications that are configured to conduct a conversation via auditory or textual methods.
  • the storage system may actually execute such an application to enable a user such as a system administrator to interact with the storage system via speech.
  • Such applications are generally capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and providing weather, traffic, and other real time information, such as news, although in embodiments in accordance with the present disclosure, such applications may be utilized as interfaces to various system management operations.
  • the storage systems described above may support the serialized or simultaneous execution of artificial intelligence applications, machine learning applications, data analytics applications, data transformations, and other tasks that collectively may form an AI ladder.
  • Such an AI ladder may effectively be formed by combining such elements to form a complete data science pipeline, where dependencies exist between elements of the AI ladder.
  • AI may require that some form of machine learning has taken place
  • machine learning may require that some form of analytics has taken place
  • analytics may require that some form of data and information architecting has taken place
  • each element may be viewed as a rung in an AI ladder that collectively can form a complete and sophisticated AI solution.
  • the storage systems described above may also, either alone or in combination with other computing environments, be used to deliver an AI everywhere experience where AI permeates wide and expansive aspects of business and life.
  • AI may play an important role in the delivery of deep learning solutions, deep reinforcement learning solutions, artificial general intelligence solutions, autonomous vehicles, cognitive computing solutions, commercial UAVs or drones, conversational user interfaces, enterprise taxonomies, ontology management solutions, machine learning solutions, smart dust, smart robots, smart workplaces, and many others.
  • the storage systems described above may also, either alone or in combination with other computing environments, be used to deliver a wide range of transparently immersive experiences (including those that use digital twins of various "things” such as people, places, processes, systems, and so on) where technology can introduce transparency between people, businesses, and things.
  • transparently immersive experiences may be delivered as augmented reality technologies, connected homes, virtual reality technologies, brain-computer interfaces, human augmentation technologies, nanotube electronics, volumetric displays, 4D printing technologies, or others.
  • continuous development and continuous integration tools may be deployed to standardize processes around continuous integration and delivery, new feature rollout and provisioning cloud workloads. By standardizing these processes, a multi-cloud strategy may be implemented that enables the utilization of the best provider for each workload.
  • the storage systems described above may be used as a part of a platform to enable the use of crypto-anchors that may be used to authenticate a product's origins and contents to ensure that it matches a blockchain record associated with the product.
  • the storage systems described above may implement various encryption technologies and schemes, including lattice cryptography.
  • Lattice cryptography can involve constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Unlike public-key schemes such as the RSA, Diffie-Hellman or Elliptic-Curve cryptosystems, which are easily attacked by a quantum computer, some lattice-based constructions appear to be resistant to attack by both classical and quantum computers.
  • a quantum computer is a device that performs quantum computing. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. Quantum computers differ from traditional computers that are based on transistors, as such traditional computers require that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1).
  • quantum computers In contrast to traditional computers, quantum computers use quantum bits, which can be in superpositions of states.
  • a quantum computer maintains a sequence of qubits, where a single qubit can represent a one, a zero, or any quantum superposition of those two qubit states.
  • a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states.
  • a quantum computer with n qubits can generally be in an arbitrary superposition of up to 2^n different states simultaneously, whereas a traditional computer can only be in one of these states at any one time.
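  • In standard quantum computing notation (not drawn from the text), the 2^n figure corresponds to the general n-qubit state below:

```latex
% State of an n-qubit register as a superposition over all 2^n basis states
\[
  \lvert \psi \rangle \;=\; \sum_{x \in \{0,1\}^n} \alpha_x \,\lvert x \rangle,
  \qquad \sum_{x \in \{0,1\}^n} \lvert \alpha_x \rvert^2 = 1 .
\]
% For n = 2 this is the 4-state case mentioned above:
\[
  \lvert \psi \rangle = \alpha_{00}\lvert 00\rangle + \alpha_{01}\lvert 01\rangle
                      + \alpha_{10}\lvert 10\rangle + \alpha_{11}\lvert 11\rangle .
\]
```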
  • a quantum Turing machine is a theoretical model of such a computer.
  • the storage systems described above may also be paired with FPGA-accelerated servers as part of a larger AI or ML infrastructure.
  • FPGA-accelerated servers may reside near (e.g., in the same data center) the storage systems described above or even incorporated into an appliance that includes one or more storage systems, one or more FPGA-accelerated servers, networking infrastructure that supports communications between the one or more storage systems and the one or more FPGA-accelerated servers, as well as other hardware and software components.
  • FPGA-accelerated servers may reside within a cloud computing environment that may be used to perform compute-related tasks for AI and ML jobs. Any of the embodiments described above may be used to collectively serve as a FPGA-based AI or ML platform.
  • the FPGAs that are contained within the FPGA-accelerated servers may be reconfigured for different types of ML models (e.g., LSTMs, CNNs, GRUs).
  • the ability to reconfigure the FPGAs that are contained within the FPGA-accelerated servers may enable the acceleration of an ML or AI application based on the most optimal numerical precision and memory model being used.
  • Readers will appreciate that by treating the collection of FPGA-accelerated servers as a pool of FPGAs, any CPU in the data center may utilize the pool of FPGAs as a shared hardware microservice, rather than limiting a server to dedicated accelerators plugged into it.
  • the FPGA-accelerated servers and the GPU-accelerated servers described above may implement a model of computing where, rather than keeping a small amount of data in a CPU and running a long stream of instructions over it as occurred in more traditional computing models, the machine learning model and parameters are pinned into the high-bandwidth on-chip memory with lots of data streaming through the high-bandwidth on-chip memory.
  • FPGAs may even be more efficient than GPUs for this computing model, as the FPGAs can be programmed with only the instructions needed to run this kind of computing model.
  • the storage systems described above may be configured to provide parallel storage, for example, through the use of a parallel file system such as BeeGFS.
  • Such parallel file systems may include a distributed metadata architecture.
  • the parallel file system may include a plurality of metadata servers across which metadata is distributed, as well as components that include services for clients and storage servers.
  • Containerized applications can be managed using a variety of tools. For example, containerized applications may be managed using Docker Swarm, Kubernetes, and others. Containerized applications may be used to facilitate a serverless, cloud native computing deployment and management model for software applications. In support of a serverless, cloud native computing deployment and management model for software applications, containers may be used as part of an event handling mechanism (e.g., AWS Lambdas) such that various events cause a containerized application to be spun up to operate as an event handler.
  • the systems described above may be deployed in a variety of ways, including being deployed in ways that support fifth generation ('5G') networks.
  • 5G networks may support substantially faster data communications than previous generations of mobile communications networks and, as a consequence, may lead to the disaggregation of data and computing resources as modern massive data centers may become less prominent and may be replaced, for example, by more-local, micro data centers that are close to the mobile-network towers.
  • the systems described above may be included in such local, micro data centers and may be part of or paired to multi-access edge computing ('MEC') systems.
  • Such MEC systems may enable cloud computing capabilities and an IT service environment at the edge of the cellular network. By running applications and performing related processing tasks closer to the cellular customer, network congestion may be reduced and applications may perform better.
  • the storage systems described above may also be configured to implement NVMe Zoned Namespaces.
  • Through the use of NVMe Zoned Namespaces, the logical address space of a namespace is divided into zones. Each zone provides a logical block address range that must be written sequentially and explicitly reset before rewriting, thereby enabling the creation of namespaces that expose the natural boundaries of the device and offload management of internal mapping tables to the host.
  • ZNS SSDs or some other form of zoned block devices may be utilized that expose a namespace logical address space using zones. With the zones aligned to the internal physical properties of the device, several inefficiencies in the placement of data can be eliminated.
  • each zone may be mapped, for example, to a separate application such that functions like wear levelling and garbage collection could be performed on a per-zone or per-application basis rather than across the entire device.
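  • The sequential-write and explicit-reset rules described above can be modeled with a small write-pointer abstraction, sketched below; the class, its field names, and the error handling are illustrative assumptions rather than part of the NVMe specification or any particular driver.

```python
class Zone:
    """Minimal model of a zone in a zoned namespace.

    Writes must land at the current write pointer (sequential only) and the
    zone must be reset before it can be rewritten.
    """
    def __init__(self, start_lba: int, capacity_blocks: int):
        self.start_lba = start_lba
        self.capacity = capacity_blocks
        self.write_pointer = start_lba  # next LBA that may be written

    def write(self, lba: int, num_blocks: int):
        if lba != self.write_pointer:
            raise ValueError("zone writes must be sequential at the write pointer")
        if self.write_pointer + num_blocks > self.start_lba + self.capacity:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += num_blocks  # host tracks placement explicitly

    def reset(self):
        # An explicit reset makes the zone writable again from the start,
        # which is where per-zone garbage collection would reclaim space.
        self.write_pointer = self.start_lba
```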
  • the storage controllers described herein may be configured to interact with zoned block devices through the usage of, for example, the LinuxTM kernel zoned block device interface or other tools.
  • the storage systems described above may also be configured to implement zoned storage in other ways such as, for example, through the usage of shingled magnetic recording (SMR) storage devices.
  • device-managed embodiments may be deployed where the storage devices hide this complexity by managing it in the firmware, presenting an interface like any other storage device.
  • zoned storage may be implemented via a host-managed embodiment that depends on the operating system to know how to handle the drive, and only write sequentially to certain regions of the drive.
  • Zoned storage may similarly be implemented using a host-aware embodiment in which a combination of a drive managed and host managed implementation is deployed.
  • the storage systems described herein may be used to form a data lake.
  • a data lake may operate as the first place that an organization’s data flows to, where such data may be in a raw format. Metadata tagging may be implemented to facilitate searches of data elements in the data lake, especially in embodiments where the data lake contains multiple stores of data, in formats not easily accessible or readable (e.g., unstructured data, semi-structured data, structured data). From the data lake, data may go downstream to a data warehouse where data may be stored in a more processed, packaged, and consumable format. The storage systems described above may also be used to implement such a data warehouse. In addition, a data mart or data hub may allow for data that is even more easily consumed, where the storage systems described above may also be used to provide the underlying storage resources necessary for a data mart or data hub. In embodiments, queries against the data lake may require a schema-on-read approach, where data is applied to a plan or schema as it is pulled out of a stored location, rather than as it goes into the stored location.
  • the storage systems described herein may also be configured to implement a recovery point objective (‘RPO’), which may be established by a user, established by an administrator, established as a system default, established as part of a storage class or service that the storage system is participating in the delivery of, or in some other way.
  • a “recovery point objective” is a goal for the maximum time difference between the last update to a source dataset and the last recoverable replicated dataset update that would be correctly recoverable, given a reason to do so, from a continuously or frequently updated copy of the source dataset.
  • An update is correctly recoverable if it properly takes into account all updates that were processed on the source dataset prior to the last recoverable replicated dataset update.
  • replication can start to fall further behind, which can widen the gap between the expected recovery point objective and the actual recovery point represented by the last correctly replicated update.
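  • This definition can be checked with simple timestamp arithmetic, sketched below; the function, its field names, and the example values are illustrative rather than a mechanism described in the text.

```python
from datetime import datetime, timedelta

def rpo_status(last_source_update: datetime,
               last_recoverable_update: datetime,
               rpo: timedelta):
    """Compare the current replication lag against a configured RPO.

    Returns the lag and whether the objective is currently met.
    """
    lag = last_source_update - last_recoverable_update
    return {"lag": lag, "rpo_met": lag <= rpo}

# Example: a 5-minute RPO with replication 90 seconds behind is still met.
status = rpo_status(datetime(2022, 8, 18, 12, 0, 0),
                    datetime(2022, 8, 18, 11, 58, 30),
                    rpo=timedelta(minutes=5))
```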
  • RAFT-based databases may operate like shared-nothing storage clusters where all RAFT nodes store all data.
  • the amount of data stored in a RAFT cluster may be limited so that extra copies don’t consume too much storage.
  • a container server cluster might also be able to replicate all data to all cluster nodes, presuming the containers don’t tend to be too large and their bulk data (the data manipulated by the applications that run in the containers) is stored elsewhere such as in an S3 cluster or an external file server.
  • the container storage may be provided by the cluster directly through its shared-nothing storage model, with those containers providing the images that form the execution environment for parts of an application or service.
  • Figure 3D illustrates an exemplary computing device 350 that may be specifically configured to perform one or more of the processes described herein.
  • computing device 350 may include a communication interface 352, a processor 354, a storage device 356, and an input/output ("I/O") module 358 communicatively connected one to another via a communication infrastructure 360.
  • the components illustrated in Figure 3D are not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device 350 shown in Figure 3D will now be described in additional detail.
  • Processor 354 generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein. Processor 354 may perform operations by executing computer-executable instructions 362 (e.g., an application, software, code, and/or other executable data instance) stored in storage device 356.
  • Storage device 356 may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device.
  • storage device 356 may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein.
  • Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device 356.
  • data representative of computer-executable instructions 362 configured to direct processor 354 to perform any of the operations described herein may be stored within storage device 356.
  • data may be arranged in one or more databases residing within storage device 356.
  • I/O module 358 may include one or more I/O modules configured to receive user input and provide user output.
  • I/O module 358 may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities.
  • I/O module 358 may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons.
  • I/O module 358 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers.
  • I/O module 358 is configured to provide graphical data to a display for presentation to a user.
  • the graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.
  • any of the systems, computing devices, and/or other components described herein may be implemented by computing device 350.
  • Figure 3E illustrates an example of a fleet of storage systems 376 for providing storage services (also referred to herein as ‘data services’).
  • the fleet of storage systems 376 depicted in Figure 3E includes a plurality of storage systems 374a, 374b, 374c, 374d, 374n that may each be similar to the storage systems described herein.
  • the storage systems 374a, 374b, 374c, 374d, 374n in the fleet of storage systems 376 may be embodied as identical storage systems or as different types of storage systems.
  • two of the storage systems 374a, 374n depicted in Figure 3E are depicted as being cloud-based storage systems, as the resources that collectively form each of the storage systems 374a, 374n are provided by distinct cloud services providers 370, 372.
  • the first cloud services provider 370 may be Amazon AWSTM whereas the second cloud services provider 372 is Microsoft AzureTM, although in other embodiments one or more public clouds, private clouds, or combinations thereof may be used to provide the underlying resources that are used to form a particular storage system in the fleet of storage systems 376.
  • the example depicted in Figure 3E includes an edge management service 382 for delivering storage services in accordance with some embodiments of the present disclosure.
  • the storage services (also referred to herein as ‘data services’) that are delivered may include, for example, services to provide a certain amount of storage to a consumer, services to provide storage to a consumer in accordance with a predetermined service level agreement, services to provide storage to a consumer in accordance with predetermined regulatory requirements, and many others.
  • the edge management service 382 depicted in Figure 3E may be embodied, for example, as one or more modules of computer program instructions executing on computer hardware such as one or more computer processors.
  • the edge management service 382 may be embodied as one or more modules of computer program instructions executing on a virtualized execution environment such as one or more virtual machines, in one or more containers, or in some other way.
  • the edge management service 382 may be embodied as a combination of the embodiments described above, including embodiments where the one or more modules of computer program instructions that are included in the edge management service 382 are distributed across multiple physical or virtual execution environments.
  • the edge management service 382 may operate as a gateway for providing storage services to storage consumers, where the storage services leverage storage offered by one or more storage systems 374a, 374b, 374c, 374d, 374n.
  • the edge management service 382 may be configured to provide storage services to host devices 378a, 378b, 378c, 378d, 378n that are executing one or more applications that consume the storage services.
  • the edge management service 382 may operate as a gateway between the host devices 378a, 378b, 378c, 378d, 378n and the storage systems 374a, 374b, 374c, 374d, 374n, rather than requiring that the host devices 378a, 378b, 378c, 378d, 378n directly access the storage systems 374a, 374b, 374c, 374d, 374n.
  • the edge management service 382 of Figure 3E exposes a storage services module 380 to the host devices 378a, 378b, 378c, 378d, 378n of Figure 3E, although in other embodiments the edge management service 382 may expose the storage services module 380 to other consumers of the various storage services.
  • the various storage services may be presented to consumers via one or more user interfaces, via one or more APIs, or through some other mechanism provided by the storage services module 380.
  • the storage services module 380 depicted in Figure 3E may be embodied as one or more modules of computer program instructions executing on physical hardware, on a virtualized execution environment, or combinations thereof, where executing such modules enables a consumer of storage services to be offered, select, and access the various storage services.
  • the edge management service 382 of Figure 3E also includes a system management services module 384.
  • the system management services module 384 of Figure 3E includes one or more modules of computer program instructions that, when executed, perform various operations in coordination with the storage systems 374a, 374b, 374c, 374d, 374n to provide storage services to the host devices 378a, 378b, 378c, 378d, 378n.
  • the system management services module 384 may be configured, for example, to perform tasks such as provisioning storage resources from the storage systems 374a, 374b, 374c, 374d, 374n via one or more APIs exposed by the storage systems 374a, 374b, 374c, 374d, 374n, migrating datasets or workloads amongst the storage systems 374a, 374b, 374c, 374d, 374n via one or more APIs exposed by the storage systems 374a, 374b, 374c, 374d, 374n, setting one or more tunable parameters (i. e.
  • system management services module 384 may be responsible for using APIs (or some other mechanism) provided by the storage systems 374a, 374b, 374c, 374d, 374n to configure the storage systems 374a, 374b, 374c, 374d, 374n to operate in the ways described below.
  • the storage systems 374a, 374b, 374c, 374d, 374n may service reads by returning data that includes the PII, but the edge management service 382 itself may obfuscate the PII as the data is passed through the edge management service 382 on its way from the storage systems 374a, 374b, 374c, 374d, 374n to the host devices 378a, 378b, 378c, 378d, 378n.
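  • A minimal sketch of that gateway read path is shown below; the PII pattern, the request fields, and the storage_system.read call are hypothetical placeholders chosen for illustration, not the classification or API actually used by the edge management service 382.

```python
import re

# Illustrative pattern for one kind of PII (U.S. SSN-like strings); a real
# deployment would use whatever classification the service is configured with.
SSN_PATTERN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")

def gateway_read(storage_system, host_request):
    """Read from a backing storage system and obfuscate PII before returning.

    The backing storage system returns data that may contain PII; the edge
    management service obfuscates it before handing the data to the host.
    """
    raw = storage_system.read(host_request["volume"],
                              host_request["offset"],
                              host_request["length"])
    return SSN_PATTERN.sub(b"***-**-****", raw)
```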
  • the storage systems 374a, 374b, 374c, 374d, 374n depicted in Figure 3E may be embodied as one or more of the storage systems described above with reference to Figures 1A-3D, including variations thereof.
  • the storage systems 374a, 374b, 374c, 374d, 374n may serve as a pool of storage resources where the individual components in that pool have different performance characteristics, different storage characteristics, and so on.
  • one of the storage systems 374a may be a cloud-based storage system
  • another storage system 374b may be a storage system that provides block storage
  • another storage system 374c may be a storage system that provides file storage
  • another storage system 374d may be a relatively high-performance storage system while another storage system 374n may be a relatively low-performance storage system, and so on.
  • only a single storage system may be present.
  • the storage systems 374a, 374b, 374c, 374d, 374n depicted in Figure 3E may also be organized into different failure domains so that the failure of one storage system 374a should be totally unrelated to the failure of another storage system 374b.
  • each of the storage systems may receive power from independent power systems, each of the storage systems may be coupled for data communications over independent data communications networks, and so on.
  • the storage systems in a first failure domain may be accessed via a first gateway whereas storage systems in a second failure domain may be accessed via a second gateway.
  • the first gateway may be a first instance of the edge management service 382 and the second gateway may be a second instance of the edge management service 382, including embodiments where each instance is distinct, or each instance is part of a distributed edge management service 382.
  • storage services may be presented to a user that are associated with different levels of data protection.
  • storage services may be presented to the user that, when selected and enforced, guarantee the user that data associated with that user will be protected such that various recovery point objectives (‘RPO’) can be guaranteed.
  • a first available storage service may ensure, for example, that some dataset associated with the user will be protected such that any data that is more than 5 seconds old can be recovered in the event of a failure of the primary data store whereas a second available storage service may ensure that the dataset that is associated with the user will be protected such that any data that is more than 5 minutes old can be recovered in the event of a failure of the primary data store.
  • An additional example of storage services that may be presented to a user, selected by a user, and ultimately applied to a dataset associated with the user can include one or more data compliance services.
  • data compliance services may be embodied, for example, as services that are provided to consumers (i.e., users) of the data compliance services to ensure that the user’s datasets are managed in a way that adheres to various regulatory requirements.
  • a non-transitory computer-readable medium storing computer-readable instructions may be provided in accordance with the principles described herein.
  • the instructions when executed by a processor of a computing device, may direct the processor and/or computing device to perform one or more operations, including one or more of the operations described herein.
  • Such instructions may be stored and/or transmitted using any of a variety of known computer-readable media.
  • a non-transitory computer-readable medium as referred to herein may include any non-transitory storage medium that participates in providing data (e.g., instructions) that may be read and/or executed by a computing device (e.g., by a processor of a computing device).
  • a non-transitory computer-readable medium may include, but is not limited to, any combination of non-volatile storage media and/or volatile storage media.
  • Exemplary non-volatile storage media include, but are not limited to, read-only memory, flash memory, a solid-state drive, a magnetic storage device (e.g., a hard disk, a floppy disk, magnetic tape, etc.), ferroelectric random-access memory ("RAM"), and an optical disc (e.g., a compact disc, a digital video disc, a Blu-ray disc, etc.).
  • Exemplary volatile storage media include, but are not limited to, RAM (e.g., dynamic RAM).
  • Blades of a storage system are divided into groups, referred to as resiliency groups, and the storage system enforces writes to select target blades that belong to the same group.
  • these embodiments are grouped into four mechanisms that are presented below.
  • Quorum groups and how to make boot up scale reliably with storage system expansion: blades of a quorum participate in a consensus algorithm. Quorum groups are used by boot up processes.
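  • The write targeting described above (writes are restricted to blades that belong to the same resiliency group) could look roughly like the sketch below; the blade map, its field names, and the least-full placement heuristic are illustrative assumptions rather than the system's actual placement logic.

```python
def pick_write_targets(blades, resiliency_group, stripe_width):
    """Choose target blades for a write, restricted to one resiliency group.

    `blades` maps blade id -> {"group": ..., "free_bytes": ...}.
    """
    candidates = [b for b, info in blades.items()
                  if info["group"] == resiliency_group]
    if len(candidates) < stripe_width:
        raise ValueError("resiliency group too small for requested stripe width")
    # Prefer the blades with the most free space within the group.
    candidates.sort(key=lambda b: blades[b]["free_bytes"], reverse=True)
    return candidates[:stripe_width]
```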
  • Each of the chassis (412, 418, 424) in Figure 4 may be similar to the chassis described above, as each chassis (412, 418, 424) may be configured to support multiple types of blades (414, 416, 420, 422, 426, 428). Each chassis (412, 418, 424) may be configured, for example, to support storage blades, compute blades, hybrid blades, or any combination thereof.
  • the example method depicted in Figure 4 includes identifying (406), in dependence upon a failure domain formation policy (402), an available configuration (408) for a failure domain.
  • a failure domain may represent a group of components within the storage system (404) that can be negatively impacted by the failure of another component in the storage system (404).
  • Such a failure domain may be embodied, for example, as a group of blades that are all connected to the same power source. In such an example, a failure of the power source would negatively impact the group of blades as power would be lost to the group of blades.
  • a failure domain may also be embodied, for example, as a group of blades that carry out data communications by connecting to one or more data communications networks via a data communications bus provided by a single chassis. In such an example, a failure of the chassis or the data communications bus would negatively impact the group of blades as the data communications bus would become unavailable and the group of blades would have no way to access the one or more data communications networks.
  • a failure domain may also be embodied as a group of devices that are logically dependent upon each other.
  • a failure domain may consist of a group of blades that some piece of data (e.g., all data in a database) is striped across.
  • a failure of one of the blades could negatively impact the group of blades that are logically dependent upon each other, as the portion of the piece of data that is stored on the failed blade could be lost.
  • creating (410) the failure domain in accordance with the available configuration (408) may be carried out by configuring an authority that is associated with the failure domain to write data to memory that is contained within the first blade (414) mounted in the first chassis (412), the second blade (422) mounted in the second chassis (418), and the second blade (428) mounted in the third chassis (424).
  • the authority may write data to such blades, and may also create redundancy data (e.g., parity data) in each of the blades in accordance with a data redundancy policy that may be specified in the failure domain formation policy (402).
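  • One simple way to select such a cross-chassis configuration is sketched below; the chassis map, the one-blade-per-chassis limit, and the example values reuse the reference numbers above purely for illustration and are not a specific policy from the text.

```python
from itertools import islice

def form_failure_domain(chassis_map, blades_needed, max_per_chassis=1):
    """Pick blades for a failure domain so that it spans multiple chassis.

    `chassis_map` maps chassis id -> list of available blade ids; limiting
    how many blades come from any one chassis is one simple way to keep the
    domain able to survive the loss of a chassis.
    """
    selected = []
    for chassis_id, blades in chassis_map.items():
        selected.extend((chassis_id, b) for b in islice(blades, max_per_chassis))
    if len(selected) < blades_needed:
        raise ValueError("no available configuration satisfies the policy")
    return selected[:blades_needed]

# Example with three chassis, taking one blade from each:
domain = form_failure_domain({412: [414, 416], 418: [420, 422], 424: [426, 428]}, 3)
```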
  • the failure domain can include at least one blade mounted within a first chassis and another blade mounted within a second chassis.
  • the storage system (404) may consist of different sets of blades (414, 416, 420, 422, 426, 428) configured within one of a plurality of chassis (412, 418, 424).
  • the sets of blades (414, 416, 420, 422, 426, 428) may be different as the sets may include a different number of blades, blades of differing types, blades with non-uniform storage capacities, blades with non-uniform processing capacities, and so on.
  • two blades within the same set may also be different as the two blades may have non-uniform amounts and types of storage resources within each blade, the two blades may have non-uniform amounts and types of processing resources within each blade, and so on.
  • identifying (406) an available configuration (408) for a failure domain in dependence upon a failure domain formation policy (402) may be carried out in response to affirmatively (504) determining that the topology of the storage system (404) has changed. Readers will appreciate that when the topology of the storage system (404) has changed, new configurations for the failure domain may become available, previously existing configurations for the failure domain may cease to be available, and so on.
  • Figure 6 sets forth a flowchart illustrating an additional example method of dynamically forming a failure domain in a storage system (404) according to embodiments of the present disclosure.
  • the example method depicted in Figure 6 is similar to the example method depicted in Figure 4, as the example method depicted in Figure 6 also includes identifying (406) an available configuration (408) for a failure domain in dependence upon a failure domain formation policy (402) and creating (410) the failure domain in accordance with the available configuration (408).
  • identifying (406) an available configuration (408) for a failure domain in dependence upon a failure domain formation policy (402) may be carried out in response to affirmatively (604) determining that the failure domain formation policy (402) has changed.
  • configurations that previously did not satisfy the rules set forth in the failure domain formation policy (402) may satisfy the rules set forth in the modified failure domain formation policy (402)
  • configurations that previously satisfied the rules set forth in the failure domain formation policy (402) may not satisfy the rules set forth in the modified failure domain formation policy (402)
  • the storage system (404) may be configured to identify (406) an available configuration (408) for the failure domain in dependence upon the modified failure domain formation policy (402) by identifying all possible configurations available in the new topology of the storage system (404) and identifying the configurations that best satisfy the rules set forth in the modified failure domain formation policy (402).
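
The configuration search described above might look roughly like the following sketch, which enumerates candidate blade groupings and keeps the one that best satisfies a set of weighted policy rules; the Blade record, rule predicates, and scoring scheme are all assumptions made for illustration.

```python
# Hypothetical sketch of re-evaluating configurations after the formation
# policy (or topology) changes: enumerate candidate blade groupings and keep
# the one that best satisfies the policy's rules.
from collections import namedtuple
from itertools import combinations

Blade = namedtuple("Blade", ["chassis_id", "blade_id"])

def best_configuration(blades, policy_rules, group_size):
    """Return the candidate blade set satisfying the highest-weighted set of rules."""
    best, best_score = None, float("-inf")
    for candidate in combinations(blades, group_size):
        score = sum(weight for weight, rule in policy_rules if rule(candidate))
        if score > best_score:
            best, best_score = candidate, score
    return best

# Example rules: span at least two chassis, and place at most two blades per chassis.
rules = [
    (10, lambda c: len({b.chassis_id for b in c}) >= 2),
    (5,  lambda c: all(sum(1 for b in c if b.chassis_id == ch) <= 2
                       for ch in {b.chassis_id for b in c})),
]
blades = [Blade("c1", "b1"), Blade("c1", "b2"), Blade("c1", "b3"),
          Blade("c2", "b1"), Blade("c2", "b2"), Blade("c3", "b1")]
chosen = best_configuration(blades, rules, group_size=4)
```
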
  • Figure 7 sets forth a flowchart illustrating an additional example method of dynamically forming a failure domain in a storage system (404) according to embodiments of the present disclosure.
  • the example method depicted in Figure 7 is similar to the example method depicted in Figure 4, as the example method depicted in Figure 7 also includes identifying (406) an available configuration (408) for a failure domain in dependence upon a failure domain formation policy (402) and creating (410) the failure domain in accordance with the available configuration (408).
  • the failure domain formation policy (402) may specify one or more types (702) of data that are subject to the failure domain formation policy (402).
  • only the specified types (702) of data may be subject to a particular failure domain formation policy (402), such that different failure domain formation policies may be applied to different types of data.
  • a first failure domain formation policy may require that a first type of data be striped across a group of blades such that the loss of any two blades or any chassis will not result in data loss
  • a second failure domain formation policy may require that a second type of data be striped across a group of blades such that the loss of any four blades or any two chassis will not result in data loss.
  • a failure domain that was in compliance with the second failure domain formation policy would require higher levels of data redundancy than a failure domain that was in compliance with the first failure domain formation policy.
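
One hypothetical way to represent such type-specific policies is a small lookup table keyed by data type; the field names below are assumptions, not the patent's schema.

```python
# Illustrative per-data-type failure domain formation policies: each data type
# maps to how many blade and chassis failures must be survivable.
from dataclasses import dataclass

@dataclass(frozen=True)
class FormationPolicy:
    blade_losses_tolerated: int
    chassis_losses_tolerated: int

POLICIES = {
    "user_data":     FormationPolicy(blade_losses_tolerated=2, chassis_losses_tolerated=1),
    "critical_data": FormationPolicy(blade_losses_tolerated=4, chassis_losses_tolerated=2),
}

def policy_for(data_type: str) -> FormationPolicy:
    return POLICIES[data_type]
```
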
  • each type (702) of data may be embodied, for example, as data that may be characterized by any attribute that will allow for data of a particular type (702) to be distinguished from all other data in the storage system (404).
  • a particular type (702) of data may therefore be embodied, for example, as data that is owned by a particular user or a particular type of user, as data that is owned by a particular application or a particular type of application, as data that has been deduplicated, as data that has resided within the storage system (404) for at least a predetermined amount of time, as data that resides on a particular type of blade, as data stored at a particular physical location (e.g., with the same storage device), as data stored at a particular logical location (e.g., within a particular volume or directory), and so on.
  • an available configuration (408) for a failure domain is identified (406) in dependence upon a failure domain formation policy (402)
  • the inclusion of the one or more types (702) of data in the failure domain formation policy (402) may cause the available configuration (408) for the failure domain to be identified (406) and a failure domain to be created (410), such that only data that is of the one or more types (702) of data specified in the failure domain formation policy (402) is stored in the failure domain.
  • the failure domain formation policy (402) may also specify a number of blades (704) and a number of chassis (706) in the failure domain that may be lost without causing a loss of data stored in the failure domain.
  • the failure domain formation policy (402) specifies that failure domains should be created such that user data is striped across the blades in the failure domain in such a way that two blades may be lost without causing a loss of the user data stored in the failure domain, and that the user data is also striped across the blades in the failure domain in such a way that one chassis may be lost without causing a loss of the user data stored in the failure domain.
  • a configuration that includes three or more blades in a particular chassis (412, 418, 424) would not adhere to the failure domain formation policy (402), as the failure of such a chassis would cause three or more blades to be lost, which would result in the loss of user data.
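
A sketch of that adherence check, under the assumption that the stripe tolerates two blade losses, is shown below; the data structures are illustrative.

```python
# Sketch of the adherence check described above: with a stripe that tolerates
# two blade losses, any chassis holding three or more of the failure domain's
# blades would lose data if it failed, so such a placement is rejected.
from collections import Counter

def adheres_to_policy(blade_chassis_ids, blade_losses_tolerated=2):
    """True if no single chassis holds more blades than the stripe can lose."""
    per_chassis = Counter(blade_chassis_ids)
    return max(per_chassis.values()) <= blade_losses_tolerated

# A placement with three blades in chassis "A" violates a two-blade-loss policy.
assert adheres_to_policy(["A", "A", "B", "C"])
assert not adheres_to_policy(["A", "A", "A", "B"])
```
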
  • the failure domain formation policy (402) may also specify a redundancy overhead threshold (708).
  • the redundancy overhead threshold (708) may be embodied, for example, as a value that specifies the maximum amount of storage resources within a failure domain that may be dedicated to storing redundancy data.
  • the redundancy overhead of a particular failure domain may be calculated, for example, by dividing the amount of storage resources that are utilized to store redundancy data by the amount of storage resources that are utilized to store non-redundancy data (e.g., user data).
  • assume that a failure domain includes four blades, and that data is striped across the four blades using RAID level 6, such that redundancy data (e.g., parity data) must be contained within two of the blades for a particular data stripe.
  • the redundancy overhead is 100%, as two blades are used to store user data and two blades are used to store redundancy data.
  • if the failure domain instead includes ten blades, with data striped across the ten blades using RAID level 6, the redundancy overhead is only 25%, as eight blades are used to store user data and two blades are used to store redundancy data.
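
The overhead calculation for the two RAID level 6 examples above can be expressed as a one-line ratio; this is a minimal sketch using blade counts, matching the 100% and 25% figures in the text.

```python
# Redundancy overhead as parity capacity divided by user-data capacity,
# expressed as a percentage. Parameter names are illustrative.
def redundancy_overhead(total_blades: int, parity_blades: int) -> float:
    data_blades = total_blades - parity_blades
    return 100.0 * parity_blades / data_blades

assert redundancy_overhead(total_blades=4, parity_blades=2) == 100.0   # 2 data + 2 parity
assert redundancy_overhead(total_blades=10, parity_blades=2) == 25.0   # 8 data + 2 parity
```
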
  • the failure domain formation policy (402) may specify a redundancy overhead threshold (708) in terms of a maximum percentage of storage resources in a given failure domain that may be used to store redundancy data, in terms of a minimum percentage of storage resources in a given failure domain that must be used to store non-redundancy data, and in other ways as will occur to those of skill in the art in view of the teachings of the present disclosure.
  • the inclusion of a redundancy overhead threshold (708) in the failure domain formation policy (402) may be taken into account when identifying (406) an available configuration (408) for the failure domain.
  • Figure 8 sets forth a flowchart illustrating an additional example method of dynamically forming a failure domain in a storage system (404) according to embodiments of the present disclosure.
  • the example method depicted in Figure 8 is similar to the example method depicted in Figure 4, as the example method depicted in Figure 8 also includes identifying (406) an available configuration (408) for a failure domain in dependence upon a failure domain formation policy (402) and creating (410) the failure domain in accordance with the available configuration (408).
  • the example method depicted in Figure 8 also includes moving (802) data stored on a set of blades that were included in a previously created failure domain to a set of blades in the failure domain.
  • moving (802) data stored on a set of blades that were included in a previously created failure domain to a set of blades in the failure domain may be carried out, for example, by writing the data to the set of blades in the failure domain that was created (410) in accordance with the available configuration (408) and erasing the data from the set of blades that were included in a previously created failure domain.
  • moving (802) data stored on a set of blades that were included in a previously created failure domain to a set of blades in the failure domain may be carried out, for example, in response to creating (410) the failure domain in accordance with the available configuration (408) after detecting a change to the topology of the storage system (404), in response to creating (410) the failure domain in accordance with the available configuration (408) after determining that the failure domain formation policy (402) had changed, and so on.
  • the newly created failure domain may include one or more of the blades that were part of the previously created failure domain, such that only a portion of the data that is stored on a set of blades that were included in the previously created failure domain needs to be moved (802), as some portion of the data may continue to be stored on the blades that were included in both the previously created failure domain and the newly created failure domain.
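
As a rough sketch of that selective data movement, only blades that drop out of the re-formed failure domain need their data rewritten; the identifiers below are hypothetical.

```python
# When a failure domain is re-formed, only data on blades that are no longer
# members needs to move; blades in both the old and new domain keep their data.
def blades_to_migrate_from(old_domain: set, new_domain: set) -> set:
    """Blades whose data must be rewritten into the new failure domain."""
    return old_domain - new_domain

old = {"chassis1/blade2", "chassis2/blade1", "chassis3/blade4"}
new = {"chassis1/blade2", "chassis2/blade3", "chassis3/blade4"}
assert blades_to_migrate_from(old, new) == {"chassis2/blade1"}
```
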
  • Figure 9 sets forth a block diagram of automated computing machinery comprising an example computer (952) useful in dynamically forming a failure domain in a storage system that includes a plurality of blades according to embodiments of the present disclosure.
  • the computer (952) of Figure 9 includes at least one computer processor (956) or "CPU" as well as random access memory ("RAM") (968), which is connected through a high speed memory bus (966) and bus adapter (958) to processor (956) and to other components of the computer (952).
  • Stored in RAM (968) is a failure domain formation module (926), a module of computer program instructions for dynamically forming a failure domain in a storage system that includes a plurality of blades according to embodiments of the present disclosure.
  • the failure domain formation module (926) may be configured for dynamically forming a failure domain in a storage system that includes a plurality of blades by: identifying, in dependence upon a failure domain formation policy, an available configuration for a failure domain that includes a first blade mounted within a first chassis and a second blade mounted within a second chassis, wherein each chassis is configured to support multiple types of blades; creating the failure domain in accordance with the available configuration; determining whether a topology of the storage system has changed, wherein identifying the available configuration for the failure domain is carried out responsive to affirmatively determining that the topology of the storage system has changed; determining whether the failure domain formation policy has changed, wherein identifying the available configuration for the failure domain is carried out responsive to affirmatively determining that the failure domain formation policy has changed; moving data stored on a set of blades that were included in a previously created failure domain to a set of blades in the failure domain, as was described in greater detail above.
  • Also stored in RAM (968) is an operating system (954).
  • Operating systems useful in computers configured for dynamically forming a failure domain in a storage system that includes a plurality of blades according to embodiments described herein include UNIX, Linux™, Microsoft XP™, AIX™, IBM's i5/OS™, and others as will occur to those of skill in the art.
  • the operating system (954) and failure domain formation module (926) in the example of Figure 9 are shown in RAM (968), but many components of such software typically are stored in non-volatile memory also, such as, for example, on a disk drive (970).
  • the example computer (952) of Figure 9 also includes disk drive adapter (972) coupled through expansion bus (960) and bus adapter (958) to processor (956) and other components of the computer (952).
  • Disk drive adapter (972) connects non-volatile data storage to the computer (952) in the form of disk drive (970).
  • Disk drive adapters useful in computers configured for dynamically forming a failure domain in a storage system that includes a plurality of blades according to embodiments described herein include Integrated Drive Electronics (“IDE”) adapters, Small Computer System Interface (“SCSI”) adapters, and others as will occur to those of skill in the art.
  • Non-volatile computer memory also may be implemented as an optical disk drive, electrically erasable programmable read-only memory (so-called “EEPROM” or “Flash” memory), RAM drives, and so on, as will occur to those of skill in the art.
  • the example computer (952) of Figure 9 includes one or more input/output ("I/O") adapters (978).
  • I/O adapters implement user-oriented input/output through, for example, software drivers and computer hardware for controlling output to display devices such as computer display screens, as well as user input from user input devices (982) such as keyboards and mice.
  • the example computer (952) of Figure 9 includes a video adapter (909), which is an example of an I/O adapter specially designed for graphic output to a display device (980) such as a display screen or computer monitor.
  • Video adapter (909) is connected to processor (956) through a high speed video bus (964), bus adapter (958), and the front side bus (962), which is also a high speed bus.
  • the computer (952) may implement certain instructions stored on RAM (968) for execution by processor (956) for dynamically forming a failure domain in a storage system that includes a plurality of blades.
  • dynamically forming a failure domain in a storage system that includes a plurality of blades may be implemented as part of a larger set of executable instructions.
  • the failure domain formation module (926) may be part of an overall system management process.
  • Figure 10 sets forth a storage system embodiment that evaluates storage system resources 1016 and rules 1018 in terms of data survivability versus data capacity efficiency, and produces an explicit trade-off determination 1006 to bias a resiliency groups generator 1008 in the formation of resiliency groups 1010 of storage system resources.
  • a resiliency group 1010 is a group of storage system resources 1016 that supports RAID stripes 1014, and has data survivability (in the face of failure(s)) and data capacity efficiency (for data storage in storage memory) as characteristics.
  • the various embodiments described herein have as goals to achieve desirable levels of data survivability and data capacity efficiency through various mechanisms and changes over time in storage system resources 1016, and especially during storage system expansion and scaling.
  • present embodiments of storage systems have awareness of tradeoffs and system particulars such as components and configurations, and do not simply leave levels of data survivability and data capacity efficiency to fall where they may.
  • the present embodiments have an evaluator 1004 that performs an explicit trade-off determination 1006, which informs and in some embodiments sets a bias in a resiliency groups generator 1008.
  • a RAID stripe 1014 is written across a subset of storage drives, portions of storage drives, blades, or other storage system resources of a resiliency group 1010, in various embodiments.
  • a write group 1012 can include a subset of the storage system resources of a resiliency group 1010 to which the write group belongs.
  • Further resources such as processors 1002, communication resources (not shown explicitly but readily understood), other types of memory, etc., could be in resiliency groups that include storage memory, or in separate resiliency groups, in various combinations in various embodiments.
  • each RAID stripe 1014 gives rise to a data capacity efficiency 1032 of the RAID stripe 1014.
  • Data capacity efficiency 1032 can also be determined of the write group 1012, and/or of the resiliency group 1010 in which the RAID stripe 1014 or a group of RAID stripes 1014 reside.
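
A minimal sketch of the data capacity efficiency metric: the fraction of a stripe's shards that carry user data rather than parity or spares, a ratio that can equally be aggregated over a write group or resiliency group. The shard-count parameters are illustrative.

```python
# Data capacity efficiency of a RAID stripe as the fraction of its shards
# holding user data rather than parity or spare capacity.
def stripe_capacity_efficiency(data_shards: int, parity_shards: int, spare_shards: int = 0) -> float:
    total = data_shards + parity_shards + spare_shards
    return data_shards / total

# An 8+2 RAID 6 stripe: 80% of the raw capacity stores user data.
assert stripe_capacity_efficiency(data_shards=8, parity_shards=2) == 0.8
```
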
  • a resource failure 1036, i.e., a failure of one of the storage system resources 1016 in a resiliency group 1010, could happen in such a way that data is recoverable, or data is lost, depending on circumstances.
  • storage system resources 1016 could be represented to various levels of granularity of storage memory, storage devices, storage drives, blades, etc., including whole devices or portions of devices, homogeneous or heterogeneous resources, etc.
  • Examples of rules 1018 include minimum and maximum recommended or absolute numbers of resources in a group, indications of whether increasing or decreasing the number of a specific type of resource increases or decreases data survivability and decreases or increases data capacity efficiency, rules for single-chassis arrangements, rules for multi-chassis arrangements, rules for preferred amounts or ranges of redundancy, rules for preferred amounts or ranges of spares, rules for storage system expansion, rules for different types of data, rules for error correction coding, rules for differing types of storage memory, etc. There could be rules specific to upgrades, replacements, aging of system or components, etc. Rules could be independent or interdependent, weighted or unweighted, activated or deactivated, system-supplied or developed, user-installed, etc.
  • the resiliency groups generator 1008 takes on a bias 1038, according to the explicit trade-off determination 1006 from the evaluator 1004, and generates resiliency groups 1010 composed of groups of storage system resources 1016 according to this bias 1038.
  • when the bias 1038 is a hard bias, every resiliency group 1010 is precisely in conformance with the explicit trade-off determination 1006.
  • when the bias 1038 is a soft bias, trending is established and/or exceptions may be granted.
  • the processor(s) 1002, also termed a processing device, performs data I/O 1030 with the RAID stripes 1014 in the resiliency groups 1010, performing data recovery 1026 and/or data rebuild 1028 if the system experiences a resource failure 1036 that is within the data survivability capability of the resiliency group(s) 1010.
  • Figure 12 illustrates a trade-off between fewer and wider resiliency groups 1202, and more and narrower resiliency groups 1204, in terms of data capacity efficiency and mean time to data loss (MTTDL).
  • the fewer and wider resiliency groups 1202 have a greater data capacity efficiency, in comparison to the more and narrower resiliency groups 1204 which have a lower data capacity efficiency (see Figure 11).
  • a wider resiliency group has a statistically increased likelihood of experiencing multiple failures and thus has a lower mean time to data loss 1034 in comparison to a narrower resiliency group, which has a higher mean time to data loss 1034.
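
The trade-off in Figure 12 can be sketched with a deliberately crude model: for a fixed parity count, wider groups improve capacity efficiency but shorten mean time to data loss. The scoring below is an illustrative assumption, not the patent's reliability model.

```python
# Rough model of the trade-off: with a fixed parity count per group, wider
# groups raise data capacity efficiency but, having more members that can fail
# together, lower the mean time to data loss.
def capacity_efficiency(width: int, parity_per_group: int) -> float:
    return (width - parity_per_group) / width

def relative_mttdl(width: int) -> float:
    # Narrower groups expose fewer concurrent-failure combinations; model this
    # crudely as inversely proportional to group width.
    return 1.0 / width

wide, narrow = 20, 10
assert capacity_efficiency(wide, 2) > capacity_efficiency(narrow, 2)   # wider: more efficient
assert relative_mttdl(wide) < relative_mttdl(narrow)                   # wider: sooner data loss
```
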
  • the data survivability versus data capacity efficiency evaluator 1004 produces an explicit trade-off determination 1006 favoring a higher mean time to data loss (e.g., higher data survivability) at a cost of lower data capacity efficiency, biasing the resiliency groups generator 1008 to produce more and narrower resiliency groups 1204. For example, this could occur in a scenario where the storage system initially defines resiliency groups of a specified width, and these have a defined data capacity efficiency 1108 and mean time to data loss 1034. As the system ages, or as the storage system undergoes expansion through addition of further resources (e.g.
  • the re-formed resiliency groups 1308 include the new resiliency group 1310, and stable resiliency groups 1312 that are preserved. Further scenarios are readily devised for blades or other storage system resources, and appropriate resiliency groups and rules 1018.
  • Figure 15A illustrates a multi-chassis storage system, in which resiliency groups 1504, 1508 are formed with storage system resources inside of each chassis 1502, 1506, in various embodiments.
  • One chassis 1502 has one or more resiliency groups 1504 made up of storage system resources of that chassis 1502.
  • Another chassis 1506 has one or more resiliency groups 1508 made up of storage resources of that chassis 1506, etc.
  • Appropriate rules 1018 are readily developed for the evaluator 1004 and the bias 1038 in the resiliency groups generator 1008, to form resiliency groups 1504, 1508 in conformance with this scenario. For example, forming a resiliency group inside of a chassis may have advantages in terms of minimizing communication delays and data access latency.
  • Figure 15B illustrates a multi-chassis storage system, in which resiliency groups 1514 can span across multiple chassis 1510, 1512, in various embodiments.
  • Storage system resources of multiple chassis 1510, 1512 have membership in a resiliency group 1514.
  • Appropriate rules 1018 are readily developed for these scenarios. For example, forming a resiliency group that spans multiple chassis may have advantages in terms of width of RAID stripes and higher data capacity efficiency.
  • Figure 15C illustrates resiliency groups 1516, 1518 formed with blades 1520 as storage system resources having membership in resiliency groups.
  • One group of blades 1520 belongs to one resiliency group 1516, another group of blades 1520 belongs to another resiliency group 1518, etc.
  • These could be homogeneous blades 1520 or heterogeneous blades 1520, and this scenario could be combined with further scenarios, in various embodiments with appropriate rules 1018.
  • Figure 15E illustrates resiliency groups 1528, 1530 formed with storage drives 1526 as storage system resources having membership in resiliency groups.
  • One group of storage drives 1526 belongs to one resiliency group 1528, another group of storage drives belongs to another resiliency group 1530, etc.
  • Appropriate rules 1018 are readily developed for this and various further scenarios.
  • Figure 15F illustrates resiliency groups 1532, 1534 formed with portions of storage drives 1526 as storage system resources having membership in resiliency groups.
  • One group of portions of storage drives 1526 belongs to one resiliency group 1532, another group of portions of storage drives 1526 belongs to another resiliency group 1534, and these could be overlapping or non-overlapping, with various appropriate rules in various combinations in various embodiments.
  • different types of storage memory in storage drives could belong to different resiliency groups
  • heterogeneous storage drives (in amount or type of storage memory) 1526 could be supported, aging, upgrade or expansion storage memories could belong to different resiliency groups, etc.
  • Finer granularity of amounts of storage memory or storage system resources for assignment to resiliency groups may have advantages for data survivability, support for heterogeneous amounts or types of storage memory, distribution of computing resources, etc.
  • the widest possible write group 1608 in the resiliency group 1602 has the greatest data capacity efficiency 1108 among these write groups in the resiliency group 1602, but lowest data survivability (e.g., mean time to data loss 1034).
  • Various of the parameters N, R and S can be adjusted per the rules 1018 and explicit trade-off determination 1006 (see Figure 10) for various scenarios and embodiments. Further mechanisms for increasing data capacity efficiency and/or data survivability may be applicable in various embodiments.
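
As one hedged example of tuning N, R and S against rules, the sketch below searches for the narrowest write group that still meets an efficiency floor; the thresholds and search strategy are assumptions.

```python
# Tune write-group parameters N (data shards), R (redundancy shards) and
# S (spares): pick the narrowest group that still meets an efficiency floor.
def narrowest_acceptable_width(r: int, s: int, min_efficiency: float, max_width: int):
    for n in range(1, max_width - r - s + 1):
        width = n + r + s
        if n / width >= min_efficiency:
            return {"N": n, "R": r, "S": s, "width": width}
    return None                                   # no width satisfies the floor

# With R=2, S=1 and a 70% efficiency floor, at least 7 data shards are needed.
assert narrowest_acceptable_width(r=2, s=1, min_efficiency=0.7, max_width=30)["N"] == 7
```
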
  • Zigzag coding 1702 decreases the storage memory overhead for error correction coding data into RAID stripes 1704, 1706, and can be used to increase data capacity efficiency 1032 in these RAID stripes 1704, 1706 and corresponding resiliency group(s), also write group(s) if applicable.
  • Appropriate rules 1018 are readily developed for incorporating zigzag coding 1702 into the process(es) and mechanism(s) for the explicit trade-off determination 1006 and resultant resiliency group(s) 1010 for various embodiments.
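
A very rough sketch of why sharing parity across stripes raises data capacity efficiency (the essence of the zigzag coding discussed above): two 8+2 stripes kept separate need four parity shards, while sharing one parity shard across them needs only three. The shard counts are illustrative, not the patent's coding scheme.

```python
# Compare capacity efficiency with and without a parity shard shared between
# two RAID stripes. Purely illustrative shard counts.
def efficiency(data_shards: int, parity_shards: int) -> float:
    return data_shards / (data_shards + parity_shards)

separate = efficiency(data_shards=16, parity_shards=4)   # two independent 8+2 stripes
shared   = efficiency(data_shards=16, parity_shards=3)   # one parity shard shared across both
assert shared > separate
```
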
  • Other values of R, or ranges of R, are readily developed for various examples.
  • This may be a large amount of data to move into RAM or NVRAM and process, making data recovery or reconstruction slower and less efficient than would be the case for a smaller resiliency group and/or narrower data stripe.
  • the super-wide data stripe 1802 could have 20 of these locally repairable shard groups 1806, with each shard group having five shards, as four data shards 1812, 1814, 1816, 1818 and one parity shard 1820.
  • each repairable shard group 1806 has 4+1, or more generally N+1, error correction coding 1810.
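
The locally repairable shard groups described above can be sketched as follows, assuming a simple XOR local parity: a lost shard is rebuilt from the four surviving shards of its own 4+1 group rather than from the entire super-wide stripe. The helper names are hypothetical.

```python
# Locally repairable layout sketch: each shard group holds four data shards
# plus one XOR local parity shard, so one failed shard is rebuilt from its own
# group instead of the whole super-wide stripe.
from functools import reduce

def make_local_group(data_shards):
    """Return the group's shards plus one XOR parity shard over the group."""
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_shards)
    return list(data_shards) + [parity]

def repair_one(group, missing_index):
    """Rebuild one missing shard from the remaining shards of the same group."""
    survivors = [s for i, s in enumerate(group) if i != missing_index]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

group = make_local_group([b"\x01\x02", b"\x03\x04", b"\x05\x06", b"\x07\x08"])
assert repair_one(group, missing_index=2) == b"\x05\x06"
```
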
  • the method of statement 1 further comprising: increasing the data capacity efficiency in a resiliency group, through zigzag coding comprising multiple RAID stripes sharing at least one parity shard.
  • the storage system establishing the reformed plurality of resiliency groups comprises: responsive to a count of unassigned storage drives exceeding a defined split multiplier count, splitting the storage drives of the storage system to form a new resiliency group that does not decrease the data capacity efficiency of the storage system.
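
A minimal sketch of that split rule, under illustrative names: newly added drives accumulate as unassigned until their count exceeds the split multiplier count, at which point they form a new resiliency group instead of widening existing ones.

```python
# Split rule sketch: once the count of unassigned drives exceeds the split
# multiplier count, they become a new resiliency group, leaving the efficiency
# of existing groups unchanged.
def maybe_split(unassigned_drives, split_multiplier_count):
    if len(unassigned_drives) > split_multiplier_count:
        return list(unassigned_drives)        # members of a new resiliency group
    return None                               # wait for more drives

assert maybe_split(["d1", "d2", "d3"], split_multiplier_count=3) is None
assert maybe_split(["d1", "d2", "d3", "d4"], split_multiplier_count=3) == ["d1", "d2", "d3", "d4"]
```
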
  • each of the re-formed resiliency groups comprises a group of storage drives across which a locally repairable shard group can be defined for a RAID stripe having a plurality of locally repairable shard groups.
  • the computer-readable media of statement 10 wherein the explicit determined trade-off comprises biasing storage system expansion to have more and narrower resiliency groups acting to increase mean time to data loss and decrease data capacity efficiency.
  • the plurality of resiliency groups is established across differing subsets of the storage system resources, each resiliency group is supportive of a plurality of write groups across differing subsets of members of the resiliency group, and each write group is supportive of a plurality of RAID stripes across a subset of members of the resiliency group.
  • a storage system comprising: a plurality of storage drives; and a processing device, to: determine an explicit trade-off between data survivability over resource failures and data capacity efficiency, for resiliency groups of storage system resources, comprising the plurality of storage drives; establish a plurality of resiliency groups of storage system resources according to the explicit determined trade-off; and responsive to adding at least one storage drive to the storage system, establish a re-formed plurality of resiliency groups that is according to the explicit determined trade-off, without decreasing the data survivability over resource failures of the storage system.
  • each resiliency group is supportive of a plurality of write groups across differing subsets of members of the resiliency group
  • each write group is supportive of a plurality of RAID stripes across a subset of members of the resiliency group.
  • Example embodiments of the present disclosure are described largely in the context of a fully functional computer system useful in dynamically forming a failure domain in a storage system. Readers of skill in the art will recognize, however, that the present disclosure also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system.
  • Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media.
  • the present disclosure may be embodied as an apparatus, a method, a computer program product, and so on.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a LAN or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, FPGAs, or PLAs may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A storage system having storage drives and a processing device establishes resiliency groups of storage system resources. The storage system determines an explicit trade-off between data survivability over resource failures and data capacity efficiency for the resiliency groups. Responsive to the addition of at least one storage drive, the storage system establishes re-formed resiliency groups according to the explicit trade-off, without decreasing data survivability. The storage system can be biased to have a larger number of narrower resiliency groups, so as to increase the mean time to data loss.
PCT/US2022/040714 2021-08-20 2022-08-18 Partitionnement efficace pour groupes de résilience de système de stockage WO2023023223A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/407,806 US20210382800A1 (en) 2016-06-03 2021-08-20 Efficient partitioning for storage system resiliency groups
US17/407,806 2021-08-20

Publications (1)

Publication Number Publication Date
WO2023023223A1 true WO2023023223A1 (fr) 2023-02-23

Family

ID=83318744

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/040714 WO2023023223A1 (fr) 2021-08-20 2022-08-18 Partitionnement efficace pour groupes de résilience de système de stockage

Country Status (1)

Country Link
WO (1) WO2023023223A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0518603A2 (fr) * 1991-06-13 1992-12-16 International Business Machines Corporation Réserve distribuée dans les réseaux DASD
US7958304B1 (en) * 2008-04-30 2011-06-07 Network Appliance, Inc. Dynamically adapting the fault tolerance and performance characteristics of a raid-based storage system by merging and splitting raid groups
US10365836B1 (en) * 2015-01-27 2019-07-30 Western Digital Technologies, Inc. Electronic system with declustered data protection by parity based on reliability and method of operation thereof

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GRAWINKEL MATTHIAS GRAWINKEL@UNI-MAINZ DE ET AL: "LoneStar RAID", ACM TRANSACTIONS ON STORAGE, ASSOCIATION FOR COMPUTING MACHINERY, NEW YORK, NY, US, vol. 12, no. 1, 7 January 2016 (2016-01-07), pages 1 - 29, XP058691771, ISSN: 1553-3077, DOI: 10.1145/2840810 *
ILIADIS I ET AL: "Reliability Assurance of RAID Storage Systems for a Wide Range of Latent Sector Errors", NETWORKING, ARCHITECTURE, AND STORAGE, 2008. NAS '08. INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 12 June 2008 (2008-06-12), pages 10 - 19, XP031291925, ISBN: 978-0-7695-3187-8 *

Similar Documents

Publication Publication Date Title
US11842053B2 (en) Zone namespace
US11868622B2 (en) Application recovery across storage systems
US20220206696A1 (en) Storage system with selectable write modes
US20210382800A1 (en) Efficient partitioning for storage system resiliency groups
US11693604B2 (en) Administering storage access in a cloud-based storage system
US11630593B2 (en) Inline flash memory qualification in a storage system
US11816129B2 (en) Generating datasets using approximate baselines
US20230273865A1 (en) Restoring Lost Data
US20230236939A1 (en) Data Recovery Using Recovery Policies
US11914686B2 (en) Storage node security statement management in a distributed storage cluster
US20220382455A1 (en) Providing Storage Services And Managing A Pool Of Storage Resources
US20230353495A1 (en) Distributed Service Throttling in a Container System
US11922052B2 (en) Managing links between storage objects
US20230236755A1 (en) Data Resiliency Using Container Storage System Storage Pools
US11860780B2 (en) Storage cache management
US11995315B2 (en) Converting data formats in a storage system
US20220404997A1 (en) Intelligent Block Allocation In A Heterogeneous Storage System
US20240143207A1 (en) Handling semidurable writes in a storage system
US20230409396A1 (en) Volume Provisioning in a Distributed Storage System
US20230236764A1 (en) Edge accelerator card
US20230019628A1 (en) Build-time Scanning of Software Build Instances
US20230125030A1 (en) Context Driven User Interfaces For Storage Systems
US20230350570A1 (en) Intelligent I/O Throttling in a Container System
US20230359402A1 (en) Variable Redundancy For Metadata In Storage Systems
US20240004546A1 (en) IO Profiles in a Distributed Storage System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22769812

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022769812

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2022769812

Country of ref document: EP

Effective date: 20240320