US20220091977A1 - Modifying A Synchronously Replicated Dataset - Google Patents

Modifying A Synchronously Replicated Dataset

Info

Publication number
US20220091977A1
Authority
US
United States
Prior art keywords
storage
dataset
storage system
request
modify
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/537,976
Inventor
David Grunwald
Steven Hodgson
Ronald Karr
Tabriz Holtz
Deepak Chawla
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pure Storage Inc
Original Assignee
Pure Storage Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pure Storage Inc filed Critical Pure Storage Inc
Priority to US17/537,976
Assigned to PURE STORAGE, INC. reassignment PURE STORAGE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GRUNWALD, DAVID, HOLTZ, TABRIZ, CHAWLA, DEEPAK, HODGSON, STEVEN, KARR, RONALD
Publication of US20220091977A1
Pending legal-status Critical Current

Classifications

    • G06F 11/0727 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation, the processing taking place on a specific hardware platform or in a specific software environment in a storage system, e.g. in a DASD or network based storage system
    • G06F 11/0751 Error or fault detection not based on redundancy
    • G06F 11/1464 Management of the backup or restore process for networked environments
    • G06F 11/1471 Saving, restoring, recovering or retrying involving logging of persistent data for recovery
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements, where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2064 Redundant persistent mass storage by mirroring while ensuring consistency
    • G06F 11/2076 Mirroring using a plurality of controllers; synchronous techniques
    • G06F 11/2082 Mirroring; data synchronisation
    • G06F 12/0684 Configuration or reconfiguration with feedback, e.g. presence or absence of unit detected by addressing, overflow detection
    • G06F 12/1072 Decentralised address translation, e.g. in distributed shared memory systems
    • G06F 16/178 Techniques for file synchronisation in file systems
    • G06F 16/182 Distributed file systems
    • G06F 16/1844 Management specifically adapted to replicated file systems
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F 16/275 Synchronous replication
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/061 Improving I/O performance
    • G06F 3/0611 Improving I/O performance in relation to response time
    • G06F 3/0616 Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
    • G06F 3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F 3/0632 Configuration or reconfiguration of storage systems by initialisation or re-initialisation of storage systems
    • G06F 3/065 Replication mechanisms
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0688 Non-volatile semiconductor memory arrays
    • G06F 3/0689 Disk arrays, e.g. RAID, JBOD
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 2003/0697 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers; device management, e.g. handlers, drivers, I/O schedulers
    • G06F 9/44505 Configuring for program initiating, e.g. using registry, configuration files
    • H04L 45/12 Shortest path evaluation
    • H04L 45/38 Flow based routing
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L 67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Definitions

  • FIG. 1A illustrates a first example system for data storage in accordance with some implementations.
  • FIG. 1B illustrates a second example system for data storage in accordance with some implementations.
  • FIG. 1C illustrates a third example system for data storage in accordance with some implementations.
  • FIG. 1D illustrates a fourth example system for data storage in accordance with some implementations.
  • FIG. 2A is a perspective view of a storage cluster with multiple storage nodes and internal storage coupled to each storage node to provide network attached storage, in accordance with some embodiments.
  • FIG. 2B is a block diagram showing an interconnect switch coupling multiple storage nodes in accordance with some embodiments.
  • FIG. 2C is a multiple level block diagram, showing contents of a storage node and contents of one of the non-volatile solid state storage units in accordance with some embodiments.
  • FIG. 2D shows a storage server environment, which uses embodiments of the storage nodes and storage units of some previous figures in accordance with some embodiments.
  • FIG. 2E is a blade hardware block diagram, showing a control plane, compute and storage planes, and authorities interacting with underlying physical resources, in accordance with some embodiments.
  • FIG. 2F depicts elasticity software layers in blades of a storage cluster, in accordance with some embodiments.
  • FIG. 2G depicts authorities and storage resources in blades of a storage cluster, in accordance with some embodiments.
  • FIG. 3A sets forth a diagram of a storage system that is coupled for data communications with a cloud services provider in accordance with some embodiments of the present disclosure.
  • FIG. 3B sets forth a diagram of a storage system in accordance with some embodiments of the present disclosure.
  • FIG. 4A sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems according to some embodiments of the present disclosure.
  • FIG. 4B sets forth a flow chart illustrating an additional example method for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems according to some embodiments of the present disclosure.
  • FIG. 5A sets forth a flow chart illustrating an additional example method for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems according to some embodiments of the present disclosure.
  • FIG. 5B sets forth a flow chart illustrating an additional example method for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems according to some embodiments of the present disclosure.
  • FIG. 1A illustrates an example system for data storage, in accordance with some implementations.
  • System 100 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 100 may include the same, more, or fewer elements configured in the same or different manner in other implementations.
  • System 100 includes a number of computing devices 164 A-B.
  • Computing devices may be embodied as, for example, a server in a data center, a workstation, a personal computer, a notebook, or the like.
  • Computing devices 164 A-B may be coupled for data communications to one or more storage arrays 102 A-B through a storage area network (‘SAN’) 158 or a local area network (‘LAN’) 160 .
  • the SAN 158 may be implemented with a variety of data communications fabrics, devices, and protocols.
  • the fabrics for SAN 158 may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like.
  • Data communications protocols for use with SAN 158 may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like.
  • SAN 158 is provided for illustration, rather than limitation.
  • Other data communication couplings may be implemented between computing devices 164 A-B and storage arrays 102 A-B.
  • the LAN 160 may also be implemented with a variety of fabrics, devices, and protocols.
  • the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like.
  • Data communication protocols for use in LAN 160 may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like.
  • Storage arrays 102 A-B may provide persistent data storage for the computing devices 164 A-B.
  • Storage array 102 A may be contained in a chassis (not shown), and storage array 102 B may be contained in another chassis (not shown), in implementations.
  • Storage array 102 A and 102 B may include one or more storage array controllers 110 (also referred to as “controller” herein).
  • a storage array controller 110 may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers 110 may be configured to carry out various storage tasks.
  • Storage tasks may include writing data received from the computing devices 164 A-B to storage array 102 A-B, erasing data from storage array 102 A-B, retrieving data from storage array 102 A-B and providing data to computing devices 164 A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.
  • Storage array controller 110 may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters.
  • Storage array controller 110 may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160 . In some implementations, storage array controller 110 may be independently coupled to the LAN 160 .
  • storage array controller 110 may include an I/O controller or the like that couples the storage array controller 110 for data communications, through a midplane (not shown), to a persistent storage resource 170 A-B (also referred to as a “storage resource” herein).
  • the persistent storage resource 170 A-B may include any number of storage drives 171 A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown).
  • the NVRAM devices of a persistent storage resource 170 A-B may be configured to receive, from the storage array controller 110 , data to be stored in the storage drives 171 A-F.
  • the data may originate from computing devices 164 A-B.
  • writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive 171 A-F.
  • the storage array controller 110 may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives 171 A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller 110 writes data directly to the storage drives 171 A-F.
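To make the buffering behavior concrete, the following is a minimal Python sketch, not the patent's implementation, of staging host writes in an NVRAM buffer so they can be acknowledged at NVRAM latency and destaged to the storage drives in the background; the class and helper names are hypothetical.

```python
# Illustrative sketch only: acknowledge writes once they are durable in NVRAM,
# then flush ("destage") them to the slower storage drives later.
from collections import deque

class WriteBuffer:
    def __init__(self, nvram_capacity_bytes):
        self.capacity = nvram_capacity_bytes
        self.used = 0
        self.pending = deque()              # (address, data) entries staged in NVRAM

    def write(self, address, data):
        """Accept a host write; return once the data is durable in NVRAM."""
        if self.used + len(data) > self.capacity:
            self.destage_all()              # make room by flushing to drives
        self.pending.append((address, data))
        self.used += len(data)
        return "ack"                        # latency is NVRAM latency, not drive latency

    def destage_all(self):
        """Background flush of buffered data to the backing storage drives."""
        while self.pending:
            address, data = self.pending.popleft()
            write_to_drive(address, data)   # hypothetical drive I/O helper
            self.used -= len(data)

def write_to_drive(address, data):
    pass  # placeholder for the slower write to storage drives 171A-F
```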
  • the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM.
  • the NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device.
  • a power source may be a battery, one or more capacitors, or the like.
  • the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives 171 A-F.
  • storage drive 171 A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power.
  • storage drive 171 A-F may correspond to non-disk storage media.
  • the storage drive 171 A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device.
  • storage drive 171 A-F may include mechanical or spinning hard disks, such as hard-disk drives (‘HDD’).
  • the storage array controllers 110 may be configured for offloading device management responsibilities from storage drive 171 A-F in storage array 102 A-B.
  • storage array controllers 110 may manage control information that may describe the state of one or more memory blocks in the storage drives 171 A-F.
  • the control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller 110 , the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth.
  • the control information may be stored with an associated memory block as metadata.
  • control information for the storage drives 171 A-F may be stored in one or more particular memory blocks of the storage drives 171 A-F that are selected by the storage array controller 110 .
  • the selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information.
  • the identifier may be utilized by the storage array controllers 110 in conjunction with storage drives 171 A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers 110 may issue a command to locate memory blocks that contain control information.
  • control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171 A-F.
  • storage array controllers 110 may offload device management responsibilities from storage drives 171 A-F of storage array 102 A-B by retrieving, from the storage drives 171 A-F, control information describing the state of one or more memory blocks in the storage drives 171 A-F. Retrieving the control information from the storage drives 171 A-F may be carried out, for example, by the storage array controller 110 querying the storage drives 171 A-F for the location of control information for a particular storage drive 171 A-F.
  • the storage drives 171 A-F may be configured to execute instructions that enable the storage drive 171 A-F to identify the location of the control information.
  • the instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive 171 A-F and may cause the storage drive 171 A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives 171 A-F.
  • the storage drives 171 A-F may respond by sending a response message to the storage array controller 110 that includes the location of control information for the storage drive 171 A-F. Responsive to receiving the response message, storage array controllers 110 may issue a request to read data stored at the address associated with the location of control information for the storage drives 171 A-F.
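As an illustrative sketch of the query-and-read exchange described above, the following Python outlines how a controller might ask each drive for the location of its control information and then read it; the drive methods shown are assumptions, not an actual drive API.

```python
# Hypothetical offload sequence: the drive firmware locates its tagged
# control-information blocks, and the controller reads them back.
def collect_control_info(drives):
    control_info = {}
    for drive in drives:
        # Drive-side scan finds memory blocks tagged with the control-info identifier.
        location = drive.query_control_info_location()
        # Controller then issues an ordinary read at the reported address.
        control_info[drive.id] = drive.read(location)
    return control_info
```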
  • the storage array controllers 110 may further offload device management responsibilities from storage drives 171 A-F by performing, in response to receiving the control information, a storage drive management operation.
  • a storage drive management operation may include, for example, an operation that is typically performed by the storage drive 171 A-F (e.g., the controller (not shown) associated with a particular storage drive 171 A-F).
  • a storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive 171 A-F, ensuring that data is written to memory blocks within the storage drive 171 A-F in such a way that adequate wear leveling is achieved, and so forth.
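A minimal sketch, assuming control information with per-block "failed" and program/erase-cycle fields (an assumed format, not the patent's), of the kind of drive management operation described above: skip failed blocks and favor less-worn blocks when choosing where to write.

```python
# Illustrative only: pick a healthy, lightly worn memory block for the next write.
def pick_target_block(blocks):
    candidates = [b for b in blocks if not b["failed"]]
    if not candidates:
        raise RuntimeError("no healthy memory blocks available")
    # Simple wear leveling: choose the block with the fewest program/erase cycles.
    return min(candidates, key=lambda b: b["pe_cycles"])
```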
  • storage array 102 A-B may implement two or more storage array controllers 110 .
  • storage array 102 A may include storage array controllers 110 A and storage array controllers 110 B.
  • In some implementations, a single storage array controller 110 (e.g., storage array controller 110 A) of a storage system may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers 110 (e.g., storage array controller 110 B) may be designated with secondary status (also referred to as “secondary controller” herein).
  • the primary controller may have particular rights, such as permission to alter data in persistent storage resource 170 A-B (e.g., writing data to persistent storage resource 170 A-B).
  • At least some of the rights of the primary controller may supersede the rights of the secondary controller.
  • the secondary controller may not have permission to alter data in persistent storage resource 170 A-B when the primary controller has the right.
  • the status of storage array controllers 110 may change. For example, storage array controller 110 A may be designated with secondary status, and storage array controller 110 B may be designated with primary status.
  • a primary controller, such as storage array controller 110 A, may serve as the primary controller for one or more storage arrays 102 A-B, and a second controller, such as storage array controller 110 B, may serve as the secondary controller for those storage arrays. For example, storage array controller 110 A may be the primary controller for storage array 102 A and storage array 102 B, and storage array controller 110 B may be the secondary controller for storage arrays 102 A and 102 B. In some implementations, storage array controllers 110 C and 110 D may have neither primary nor secondary status.
  • Storage array controllers 110 C and 110 D may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers 110 A and 110 B, respectively) and storage array 102 B.
  • storage array controller 110 A of storage array 102 A may send a write request, via SAN 158 , to storage array 102 B.
  • the write request may be received by both storage array controllers 110 C and 110 D of storage array 102 B.
  • Storage array controllers 110 C and 110 D facilitate the communication, e.g., send the write request to the appropriate storage drive 171 A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers.
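The following Python sketch illustrates the division of roles described above, with only the primary controller permitted to alter data and interface controllers forwarding the write toward an appropriate drive; the class and helper names are hypothetical, not the patent's design.

```python
# Illustrative sketch: primary/secondary rights plus forwarding through
# communication-interface controllers (e.g., the 110C/110D role).
class Controller:
    def __init__(self, name, role):
        self.name = name
        self.role = role                     # "primary", "secondary", or "interface"

    def handle_write(self, request, interface_controllers):
        if self.role != "primary":
            raise PermissionError(f"{self.name} may not alter data in persistent storage")
        for interface in interface_controllers:
            interface.forward(request)       # forwarded to the remote array's interfaces

    def forward(self, request):
        drive = choose_storage_drive(request)   # hypothetical placement helper
        drive.write(request)

def choose_storage_drive(request):
    raise NotImplementedError("drive placement policy is outside this sketch")
```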
  • storage array controllers 110 are communicatively coupled, via a midplane (not shown), to one or more storage drives 171 A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array 102 A-B.
  • the storage array controllers 110 may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives 171 A-F and the NVRAM devices via one or more data communications links.
  • the data communications links described herein are collectively illustrated by data communications links 108 A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example.
  • FIG. 1B illustrates an example system for data storage, in accordance with some implementations.
  • Storage array controller 101 illustrated in FIG. 1B may be similar to the storage array controllers 110 described with respect to FIG. 1A .
  • storage array controller 101 may be similar to storage array controller 110 A or storage array controller 110 B.
  • Storage array controller 101 includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller 101 may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements of FIG. 1A may be included below to help illustrate features of storage array controller 101 .
  • Storage array controller 101 may include one or more processing devices 104 and random access memory (‘RAM’) 111 .
  • Processing device 104 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 104 (or controller 101 ) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets.
  • the processing device 104 may also be one or more special-purpose processing devices such as an application specific integrated circuit (‘ASIC’), a field programmable gate array (‘FPGA’), a digital signal processor (‘DSP’), network processor, or the like.
  • the processing device 104 may be connected to the RAM 111 via a data communications link 106 , which may be embodied as a high speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus.
  • Stored in RAM 111 is an operating system 112 .
  • instructions 113 are stored in RAM 111 .
  • Instructions 113 may include computer program instructions for performing operations in a direct-mapped flash storage system.
  • a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives.
  • storage array controller 101 includes one or more host bus adapters 103 A-C that are coupled to the processing device 104 via a data communications link 105 A-C.
  • host bus adapters 103 A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other networks and storage arrays.
  • host bus adapters 103 A-C may be a Fibre Channel adapter that enables the storage array controller 101 to connect to a SAN, an Ethernet adapter that enables the storage array controller 101 to connect to a LAN, or the like.
  • Host bus adapters 103 A-C may be coupled to the processing device 104 via a data communications link 105 A-C such as, for example, a PCIe bus.
  • storage array controller 101 may include a host bus adapter 114 that is coupled to an expander 115 .
  • the expander 115 may be used to attach a host system to a larger number of storage drives.
  • the expander 115 may, for example, be a SAS expander utilized to enable the host bus adapter 114 to attach to storage drives in an implementation where the host bus adapter 114 is embodied as a SAS controller.
  • storage array controller 101 may include a switch 116 coupled to the processing device 104 via a data communications link 109 .
  • the switch 116 may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint.
  • the switch 116 may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link 109 ) and presents multiple PCIe connection points to the midplane.
  • storage array controller 101 includes a data communications link 107 for coupling the storage array controller 101 to other storage array controllers.
  • data communications link 107 may be a QuickPath Interconnect (QPI) interconnect.
  • a traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed.
  • the flash storage system may include flash drives that do not include storage controllers that provide the process.
  • the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives.
  • the operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system.
  • the allocation units may be entire erase blocks or multiple erase blocks.
  • the operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system.
  • Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data.
  • the operations may be performed on one or more allocation units that include a first data and a second data where the first data is to be retained and the second data is no longer being used by the flash storage system.
  • the operating system may initiate a process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data.
  • the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives.
  • Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system as unnecessary or redundant write operations are not being performed during the process.
  • One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system.
  • the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive.
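As a rough illustration of the operating-system-driven process described above, the following Python sketch rewrites retained data into other allocation units, erases the source units, and marks them available; the object model (entries, erase, mark_available) and the map-update helper are assumptions made only for illustration.

```python
# Illustrative sketch: OS-level reclamation over directly mapped allocation units.
def update_direct_map(entry, target_unit):
    pass  # OS-maintained map from logical addresses to erase blocks

def reclaim(source_units, free_units):
    for unit in source_units:
        live_entries = [e for e in unit.entries if e.in_use]   # the retained "first data"
        for entry in live_entries:
            target = free_units[0]            # new location within another allocation unit
            target.append(entry)              # rewrite retained data
            update_direct_map(entry, target)
        unit.erase()                          # erases the no-longer-used "second data"
        unit.mark_available()                 # unit can now hold subsequent data
```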
  • a storage system can consist of two storage array controllers that share a set of drives for failover purposes, or it could consist of a single storage array controller that provides a storage service that utilizes multiple drives, or it could consist of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of a storage service including storage allocation and garbage collection.
  • FIG. 1C illustrates a third example system 117 for data storage in accordance with some implementations.
  • System 117 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 117 may include the same, more, or fewer elements configured in the same or different manner in other implementations.
  • system 117 includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device 118 with separately addressable fast write storage.
  • System 117 may include a storage controller 119 .
  • storage controller 119 may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure.
  • system 117 includes flash memory devices (e.g., including flash memory devices 120 a - n ), operatively coupled to various channels of the storage device controller 119 . Flash memory devices 120 a - n may be presented to the controller 119 as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller 119 to program and retrieve various aspects of the Flash.
  • storage device controller 119 may perform operations on flash memory devices 120 A-N including storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc.
  • system 117 may include RAM 121 to store separately addressable fast-write data.
  • RAM 121 may be one or more separate discrete devices.
  • RAM 121 may be integrated into storage device controller 119 or multiple storage device controllers.
  • the RAM 121 may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller 119 .
  • system 117 may include a stored energy device 122 , such as a rechargeable battery or a capacitor.
  • Stored energy device 122 may store energy sufficient to power the storage device controller 119 , some amount of the RAM (e.g., RAM 121 ), and some amount of Flash memory (e.g., Flash memory 120 a - 120 n ) for sufficient time to write the contents of RAM to Flash memory.
  • storage device controller 119 may write the contents of RAM to Flash Memory if the storage device controller detects loss of external power.
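A minimal sketch, with assumed energy and RAM accessors, of the power-loss behavior described above, where stored energy is used to destage the fast-write RAM contents to Flash; the energy-budget check and the per-byte cost figure are illustrative assumptions, not requirements of the patent.

```python
# Illustrative sketch: on loss of external power, flush dirty RAM pages to Flash
# while running on the stored energy device.
ENERGY_PER_BYTE = 1e-6  # joules per byte programmed; illustrative figure only

def estimated_flush_cost(ram):
    return ram.bytes_used() * ENERGY_PER_BYTE

def on_external_power_loss(ram, flash, stored_energy):
    if stored_energy.joules_remaining() < estimated_flush_cost(ram):
        raise RuntimeError("insufficient stored energy to complete the flush")
    for page in ram.dirty_pages():
        flash.program(page.address, page.data)   # long-term persistent copy
```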
  • system 117 includes two data communications links 123 a , 123 b .
  • data communications links 123 a , 123 b may be PCI interfaces.
  • data communications links 123 a , 123 b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.).
  • Data communications links 123 a , 123 b may be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller 119 from other components in the storage system 117 .
  • System 117 may also include an external power source (not shown), which may be provided over one or both data communications links 123 a , 123 b , or which may be provided separately.
  • An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM 121 .
  • the storage device controller 119 may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device 118 , which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM 121 . On power failure, the storage device controller 119 may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory 120 a - n ) for long-term persistent storage.
  • the logical device may include some presentation of some or all of the content of the Flash memory devices 120 a - n , where that presentation allows a storage system including a storage device 118 (e.g., storage system 117 ) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus.
  • the presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc.
  • the stored energy device 122 may be sufficient to ensure completion of in-progress operations to the Flash memory devices 120 a - n ; the stored energy device 122 may power the storage device controller 119 and associated Flash memory devices (e.g., 120 a - n ) for those operations, as well as for the storing of fast-write RAM to Flash memory.
  • Stored energy device 122 may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices 120 a - n and/or the storage device controller 119 .
  • Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein.
  • Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device 122 to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy.
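To illustrate the capacity adjustment just described, a small hedged example: as the measured stored energy declines, the advertised fast-write capacity shrinks so that whatever is buffered can always be written safely on power loss. The function name and the per-byte energy parameter are assumptions for illustration.

```python
# Illustrative sketch: derive a safe fast-write capacity from measured stored energy.
def advertised_fast_write_capacity(measured_joules, joules_per_byte, floor_bytes=0):
    safe_bytes = int(measured_joules / joules_per_byte)
    return max(safe_bytes, floor_bytes)
```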
  • FIG. 1D illustrates a fourth example system 124 for data storage in accordance with some implementations.
  • system 124 includes storage controllers 125 a , 125 b .
  • storage controllers 125 a , 125 b are operatively coupled to Dual PCI storage devices 119 a , 119 b and 119 c , 119 d , respectively.
  • Storage controllers 125 a , 125 b may be operatively coupled (e.g., via a storage network 130 ) to some number of host computers 127 a - n.
  • two storage controllers provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc.
  • the storage controllers 125 a , 125 b may provide services through some number of network interfaces (e.g., 126 a - d ) to host computers 127 a - n outside of the storage system 124 .
  • Storage controllers 125 a , 125 b may provide integrated services or an application entirely within the storage system 124 , forming a converged storage and compute system.
  • the storage controllers 125 a , 125 b may utilize the fast write memory within or across storage devices 119 a - d to journal in progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system 124 .
  • controllers 125 a , 125 b operate as PCI masters to one or the other PCI buses 128 a , 128 b .
  • 128 a and 128 b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.).
  • Other storage system embodiments may operate storage controllers 125 a , 125 b as multi-masters for both PCI buses 128 a , 128 b .
  • a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers.
  • Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers.
  • a storage device controller 119 a may be operable under direction from a storage controller 125 a to synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM 121 of FIG. 1C ).
  • a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse.
  • This mechanism may be used, for example, to avoid a second transfer over a bus (e.g., 128 a , 128 b ) from the storage controllers 125 a , 125 b .
  • a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc.
  • a storage device controller 119 a , 119 b may be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM 121 of FIG. 1C ) without involvement of the storage controllers 125 a , 125 b .
  • This operation may be used to mirror data stored in one controller 125 a to another controller 125 b , or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface 129 a , 129 b to the PCI bus 128 a , 128 b.
  • a storage device controller 119 may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device 118 .
  • reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly.
  • a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself.
  • Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time).
  • the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices.
  • the storage controllers 125 a , 125 b may initiate the use of erase blocks within and across storage devices (e.g., 118 ) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics.
  • the storage controllers 125 a , 125 b may initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed as well as to manage Flash page and erase block lifespans and to manage overall system performance.
  • the storage system 124 may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination.
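  • The following is a minimal sketch of the kind of intra-stripe redundancy described above, using single-parity XOR purely for illustration; a production system would typically use Reed-Solomon or similar erasure codes so that more than one simultaneous failure can be tolerated.

```python
# Minimal single-parity stripe: any one lost shard can be rebuilt by XOR.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_stripe(data_shards):
    """Append an XOR parity shard; all shards must be the same length."""
    return data_shards + [reduce(xor, data_shards)]

def rebuild(stripe):
    """Recover a single missing shard (marked None) from the survivors."""
    missing = stripe.index(None)
    stripe[missing] = reduce(xor, [s for s in stripe if s is not None])
    return stripe[:-1]   # return just the data shards

stripe = make_stripe([b"abcd", b"efgh", b"ijkl"])
stripe[1] = None                      # simulate losing one storage device
print(rebuild(stripe))                # [b'abcd', b'efgh', b'ijkl']
```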
  • FIGS. 2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster.
  • the storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata.
  • Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations.
  • Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory.
  • Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations.
  • a storage node may be referred to as a cluster node, a blade, or a server.
  • the storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes.
  • a mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis.
  • the storage cluster can run as an independent system in one location according to some embodiments.
  • a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently.
  • the internal communication bus may be an Ethernet bus; however, other technologies such as PCIe and InfiniBand are equally suitable.
  • the chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems.
  • the external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc.
  • the external communication bus uses different communication bus technologies for inter-chassis and client communication.
  • the switch may act as a translation between multiple protocols or technologies.
  • the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node.
  • multiple chassis may be coupled or connected to each other through an aggregator switch.
  • a portion and/or all of the coupled or connected chassis may be designated as a storage cluster.
  • each chassis can have multiple blades, and each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments.
  • Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices.
  • One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this example is not meant to be limiting.
  • the storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments.
  • the non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface.
  • the non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2-32 terabytes (‘TB’) in some embodiments.
  • An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit.
  • the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss.
  • the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus.
  • MRAM magnetoresistive random access memory
  • the storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage.
  • the storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster.
  • FIG. 2A is a perspective view of a storage cluster 161 , with multiple storage nodes 150 and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments.
  • a network attached storage, storage area network, or a storage cluster, or other storage memory could include one or more storage clusters 161 , each having one or more storage nodes 150 , in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby.
  • the storage cluster 161 is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory.
  • the storage cluster 161 has a chassis 138 having multiple slots 142 .
  • chassis 138 may be referred to as a housing, enclosure, or rack unit.
  • the chassis 138 has fourteen slots 142 , although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots.
  • Each slot 142 can accommodate one storage node 150 in some embodiments.
  • Chassis 138 includes flaps 148 that can be utilized to mount the chassis 138 on a rack.
  • Fans 144 provide air circulation for cooling of the storage nodes 150 and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components.
  • a switch fabric 146 couples storage nodes 150 within chassis 138 together and to a network for communication to the memory.
  • the slots 142 to the left of the switch fabric 146 and fans 144 are shown occupied by storage nodes 150 , while the slots 142 to the right of the switch fabric 146 and fans 144 are empty and available for insertion of storage node 150 for illustrative purposes.
  • This configuration is one example, and one or more storage nodes 150 could occupy the slots 142 in various further arrangements.
  • the storage node arrangements need not be sequential or adjacent in some embodiments.
  • Storage nodes 150 are hot pluggable, meaning that a storage node 150 can be inserted into a slot 142 in the chassis 138 , or removed from a slot 142 , without stopping or powering down the system.
  • Upon insertion or removal of a storage node 150 from a slot 142 , the system automatically reconfigures in order to recognize and adapt to the change.
  • Reconfiguration includes restoring redundancy and/or rebalancing data or load.
  • Each storage node 150 can have multiple components.
  • the storage node 150 includes a printed circuit board 159 populated by a CPU 156 , i.e., processor, a memory 154 coupled to the CPU 156 , and a non-volatile solid state storage 152 coupled to the CPU 156 , although other mountings and/or components could be used in further embodiments.
  • the memory 154 has instructions which are executed by the CPU 156 and/or data operated on by the CPU 156 .
  • the non-volatile solid state storage 152 includes flash or, in further embodiments, other types of solid-state memory.
  • storage cluster 161 is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above.
  • One or more storage nodes 150 can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments.
  • Plug-in storage nodes 150 whether installed in a chassis as delivered or later added, can have different sizes.
  • a storage node 150 can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc.
  • a storage node 150 could have any multiple of other storage amounts or capacities.
  • Storage capacity of each storage node 150 is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units 152 or storage nodes 150 within the chassis.
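  • As an illustration of this self-configuration, the sketch below (with assumed names and a hypothetical capacity map) chooses the widest stripe that still honors a requirement to survive the loss of up to two storage nodes.

```python
def stripe_geometry(advertised_capacities, tolerated_failures=2):
    """Return (data_shards, redundancy_shards) for the widest safe stripe."""
    nodes = [n for n, capacity in advertised_capacities.items() if capacity > 0]
    if len(nodes) <= tolerated_failures:
        raise ValueError("not enough storage nodes for the requested fault tolerance")
    return len(nodes) - tolerated_failures, tolerated_failures

# Seven nodes broadcasting capacity, configured to survive the loss of any two.
print(stripe_geometry({f"node-{i}": 8 * 2**40 for i in range(7)}))  # (5, 2)
```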
  • FIG. 2B is a block diagram showing a communications interconnect 171 A-F and power distribution bus 172 coupling multiple storage nodes 150 .
  • the communications interconnect 171 A-F can be included in or implemented with the switch fabric 146 in some embodiments. Where multiple storage clusters 161 occupy a rack, the communications interconnect 171 A-F can be included in or implemented with a top of rack switch, in some embodiments.
  • storage cluster 161 is enclosed within a single chassis 138 .
  • External port 176 is coupled to storage nodes 150 through communications interconnect 171 A-F, while external port 174 is coupled directly to a storage node.
  • External power port 178 is coupled to power distribution bus 172 .
  • Storage nodes 150 may include varying amounts and differing capacities of non-volatile solid state storage 152 as described with reference to FIG. 2A .
  • one or more storage nodes 150 may be a compute only storage node as illustrated in FIG. 2B .
  • Authorities 168 are implemented on the non-volatile solid state storages 152 , for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage 152 and supported by software executing on a controller or other processor of the non-volatile solid state storage 152 .
  • authorities 168 are implemented on the storage nodes 150 , for example as lists or other data structures stored in the memory 154 and supported by software executing on the CPU 156 of the storage node 150 .
  • authorities 168 control how and where data is stored in the non-volatile solid state storages 152 in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes 150 have which portions of the data.
  • Each authority 168 may be assigned to a non-volatile solid state storage 152 .
  • Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes 150 , or by the non-volatile solid state storage 152 , in various embodiments.
  • every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata.
  • In some embodiments, there are redundant copies of authorities 168 .
  • Authorities 168 have a relationship to storage nodes 150 and non-volatile solid state storage 152 in some embodiments. Each authority 168 , covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage 152 .
  • the authorities 168 for all of such ranges are distributed over the non-volatile solid state storages 152 of a storage cluster.
  • Each storage node 150 has a network port that provides access to the non-volatile solid state storage(s) 152 of that storage node 150 .
  • Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments.
  • the assignment and use of the authorities 168 thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority 168 , in accordance with some embodiments.
  • a segment identifies a set of non-volatile solid state storage 152 and a local identifier into the set of non-volatile solid state storage 152 that may contain data.
  • the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused.
  • the offsets in the non-volatile solid state storage 152 are applied to locating data for writing to or reading from the non-volatile solid state storage 152 (in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage 152 , which may include or be different from the non-volatile solid state storage 152 having the authority 168 for a particular data segment.
  • the authority 168 for that data segment should be consulted, at that non-volatile solid state storage 152 or storage node 150 having that authority 168 .
  • embodiments calculate a hash value for a data segment or apply an inode number or a data segment number.
  • the output of this operation points to a non-volatile solid state storage 152 having the authority 168 for that particular piece of data.
  • the first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier.
  • This mapping may include a calculation such as a hash or a bit mask.
  • the second stage is mapping the authority identifier to a particular non-volatile solid state storage 152 , which may be done through an explicit mapping.
  • the operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage 152 having that authority 168 .
  • the operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes.
  • the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards.
  • This calculation may be used to determine the optimal non-volatile solid state storage 152 for an authority in the presence of a set of non-volatile solid state storage 152 that are reachable and constitute the same cluster.
  • the calculation also determines an ordered set of peer non-volatile solid state storage 152 that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable.
  • a duplicate or substitute authority 168 may be consulted if a specific authority 168 is unavailable in some embodiments.
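  • The two-stage mapping described above can be sketched as follows; the hash functions, the fixed authority count, and the rendezvous-style ordering used for the second stage are illustrative assumptions rather than the claimed implementation.

```python
import hashlib

AUTHORITY_COUNT = 128   # the set of authorities is fixed; only placement changes

def authority_for(entity_id: str) -> int:
    """Stage one: hash an entity identifier into an authority identifier."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % AUTHORITY_COUNT

def placement_for(authority_id: int, reachable_units: list) -> list:
    """Stage two: repeatably order the reachable storage units for an authority.

    The first entry is the target owner; the remainder are the ordered peers
    that also record the authority-to-storage mapping.
    """
    def rank(unit: str) -> int:
        h = hashlib.sha256(f"{authority_id}:{unit}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    return sorted(reachable_units, key=rank, reverse=True)

units = ["nvss-0", "nvss-1", "nvss-2", "nvss-3"]
aid = authority_for("inode:4711")
print(aid, placement_for(aid, units))   # same inputs always give the same answer
```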
  • two of the many tasks of the CPU 156 on a storage node 150 are to break up write data, and reassemble read data.
  • the authority 168 for that data is located as above.
  • the request to write is forwarded to the non-volatile solid state storage 152 currently determined to be the host of the authority 168 determined from the segment.
  • the host CPU 156 of the storage node 150 on which the non-volatile solid state storage 152 and corresponding authority 168 reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage 152 .
  • the transmitted data is written as a data stripe in accordance with an erasure coding scheme.
  • data is requested to be pulled, and in other embodiments, data is pushed.
  • the authority 168 for the segment ID containing the data is located as described above.
  • the host CPU 156 of the storage node 150 on which the non-volatile solid state storage 152 and corresponding authority 168 reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority.
  • the data is read from flash storage as a data stripe.
  • the host CPU 156 of storage node 150 then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage 152 . In some embodiments, the segment host requests the data be sent to storage node 150 by requesting pages from storage and then sending the data to the storage node making the original request.
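  • A hedged control-flow sketch of that write path is shown below: locate the authority for the segment, forward the request to the node currently hosting that authority, and shard the data out from there. The modulo placement and all names are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    owners: dict       # authority id -> storage node currently hosting it
    placements: dict   # authority id -> storage units holding the stripe

    def write(self, segment_id: int, data: bytes) -> None:
        if not data:
            return
        authority = segment_id % 128                # stand-in for the real mapping
        host = self.owners[authority]
        self._shard_and_send(host, authority, data)

    def _shard_and_send(self, host: str, authority: int, data: bytes) -> None:
        # On the hosting node: break the data up and transmit one shard to each
        # storage unit named by the authority's placement.
        units = self.placements[authority]
        shard_size = -(-len(data) // len(units))    # ceiling division
        for unit, offset in zip(units, range(0, len(data), shard_size)):
            shard = data[offset:offset + shard_size]
            print(f"{host}: {len(shard)} bytes -> {unit}")

cluster = Cluster(owners={5: "node-2"}, placements={5: ["nvss-0", "nvss-1", "nvss-2"]})
cluster.write(segment_id=5, data=b"x" * 1000)
```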
  • data is handled with an index node or inode, which specifies a data structure that represents an object in a file system.
  • the object could be a file or a directory, for example.
  • Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes.
  • a segment number could be assigned to all or a portion of such an object in a file system.
  • data segments are handled with a segment number assigned elsewhere.
  • the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities.
  • a segment is a logical container of data in accordance with some embodiments.
  • a segment is an address space between medium address space and physical flash locations; data segment numbers are in this address space. Segments may also contain meta-data, which enables data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software.
  • an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable.
  • the data and parity shards are distributed, i.e., striped, across non-volatile solid state storage 152 coupled to the host CPUs 156 (See FIGS. 2E and 2G ) in accordance with an erasure coding scheme.
  • Usage of the term segments refers to the container and its place in the address space of segments in some embodiments.
  • Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments.
  • a series of address-space transformations takes place across an entire storage system.
  • the directory entries (file names) link to an inode.
  • Inodes point into medium address space, where data is logically stored.
  • Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots.
  • Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments.
  • Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit 152 may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage 152 is able to allocate addresses without synchronization with other non-volatile solid state storage 152 .
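  • The hierarchical allocation of logical addresses can be illustrated with the following sketch, in which each storage unit owns a disjoint sub-range of a large identifier space and allocates from it without coordinating with its peers; the bit widths and class names are assumptions.

```python
RANGE_BITS = 96    # each unit owns a 2**96-identifier sub-range of a 128-bit space

class LogicalAllocator:
    """Allocator for one storage unit's slice of the logical address space."""

    def __init__(self, unit_index: int) -> None:
        self.base = unit_index << RANGE_BITS
        self.next_offset = 0

    def allocate(self) -> int:
        # Hand out the next identifier from this unit's private range; with a
        # 96-bit range, identifiers are effectively never reused.
        address = self.base + self.next_offset
        self.next_offset += 1
        return address

# Two units allocate concurrently without synchronizing with each other.
a, b = LogicalAllocator(0), LogicalAllocator(1)
print(hex(a.allocate()), hex(b.allocate()))
```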
  • Data and metadata is stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data.
  • the redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures.
  • low density parity check (‘LDPC’) code is used within a single storage unit.
  • Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments.
  • Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout.
  • the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority.
  • the assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’).
  • pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change so any subjective function may be applied in these embodiments.
  • a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners.
  • a pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned.
  • Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority.
  • Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to conclude the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss.
  • rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines.
  • expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss.
  • a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss.
  • the placement of data for storage redundancy is independent of the placement of authorities for data consistency.
  • storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities.
  • the communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics.
  • non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster.
  • Storage clusters are connected to clients using Ethernet or fiber channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the internet.
  • Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data.
  • an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node.
  • Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location.
  • the ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine).
  • a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments.
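  • The self-preservation behavior can be sketched as a simple sliding-window governor, as below; the window length and event threshold are assumed values, not figures from the specification.

```python
import time
from collections import deque

WINDOW_SECONDS = 300        # assumed sliding-window length
MAX_EVENTS_IN_WINDOW = 8    # assumed threshold before self-preservation

class ReplicationGovernor:
    def __init__(self):
        self.events = deque()
        self.halted = False

    def record_event(self, now=None):
        """Note a failure or replication event and re-evaluate the window."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        while self.events and now - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        if len(self.events) > MAX_EVENTS_IN_WINDOW:
            self.halted = True          # halt replication and data movement

    def may_move_data(self):
        return not self.halted

    def administrator_clear(self):
        """Manual intervention resumes replication and data movement."""
        self.events.clear()
        self.halted = False
```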
  • the system transfers messages between the storage nodes and non-volatile solid state storage units.
  • with persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees.
  • as the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies.
  • messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND.
  • Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement.
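  • One way to picture the tiering of persistent messages is the following sketch, in which latency-sensitive client updates are replicated to NVRAM partitions while background work is persisted straight to NAND; the message kinds and data structures are illustrative assumptions.

```python
from enum import Enum, auto

class MessageKind(Enum):
    CLIENT_UPDATE = auto()   # latency-sensitive client request
    REBALANCE = auto()       # background rebalancing work

def persist(message: bytes, kind: MessageKind, nvram_replicas: list, nand: list) -> None:
    """Persist a message before it is transmitted, choosing media by its type."""
    if kind is MessageKind.CLIENT_UPDATE:
        for replica in nvram_replicas:   # replicated NVRAM first, NAND later
            replica.append(message)
    else:
        nand.append(message)             # background work goes straight to NAND

# Plain lists stand in for the NVRAM partitions and the NAND flash device.
nvram, flash = [[], [], []], []
persist(b"write inode 42", MessageKind.CLIENT_UPDATE, nvram, flash)
persist(b"rebalance segment 7", MessageKind.REBALANCE, nvram, flash)
print(len(nvram[0]), len(flash))   # 1 1
```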
  • while many hardware components contain unique identifiers that are visible to system administrators, the manufacturer, the hardware supply chain, and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades.
  • the virtualized addresses are stored with sufficient redundancy.
  • a continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments.
  • FIG. 2C is a multiple level block diagram, showing contents of a storage node 150 and contents of a non-volatile solid state storage 152 of the storage node 150 .
  • Data is communicated to and from the storage node 150 by a network interface controller (‘NIC’) 202 in some embodiments.
  • Each storage node 150 has a CPU 156 , and one or more non-volatile solid state storage 152 , as discussed above.
  • each non-volatile solid state storage 152 has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’) 204 , and flash memory 206 .
  • NVRAM 204 may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from.
  • the NVRAM 204 is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM) 216 , backed up by energy reserve 218 .
  • Energy reserve 218 provides sufficient electrical power to keep the DRAM 216 powered long enough for contents to be transferred to the flash memory 206 in the event of power failure.
  • energy reserve 218 is a capacitor, super-capacitor, battery, or other device, that supplies a suitable supply of energy sufficient to enable the transfer of the contents of DRAM 216 to a stable storage medium in the case of power loss.
  • the flash memory 206 is implemented as multiple flash dies 222 , which may be referred to as packages of flash dies 222 or an array of flash dies 222 . It should be appreciated that the flash dies 222 could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e. multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc.
  • the non-volatile solid state storage 152 has a controller 212 or other processor, and an input output (I/O) port 210 coupled to the controller 212 .
  • I/O port 210 is coupled to the CPU 156 and/or the network interface controller 202 of the flash storage node 150 .
  • Flash input output (I/O) port 220 is coupled to the flash dies 222 , and a direct memory access unit (DMA) 214 is coupled to the controller 212 , the DRAM 216 and the flash dies 222 .
  • the I/O port 210 , controller 212 , DMA unit 214 and flash I/O port 220 are implemented on a programmable logic device (‘PLD’) 208 , e.g., a field programmable gate array (FPGA).
  • each flash die 222 has pages, organized as sixteen kB (kilobyte) pages 224 , and a register 226 through which data can be written to or read from the flash die 222 .
  • other types of solid-state memory are used in place of, or in addition to flash memory illustrated within flash die 222 .
  • Storage clusters 161 in various embodiments as disclosed herein, can be contrasted with storage arrays in general.
  • the storage nodes 150 are part of a collection that creates the storage cluster 161 .
  • Each storage node 150 owns a slice of data and computing required to provide the data.
  • Multiple storage nodes 150 cooperate to store and retrieve the data.
  • Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data.
  • Storage memory or storage devices in a storage array receive commands to read, write, or erase data.
  • the storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means.
  • Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc.
  • the storage units 152 described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node 150 is shifted into a storage unit 152 , transforming the storage unit 152 into a combination of storage unit 152 and storage node 150 . Placing computing (relative to storage data) into the storage unit 152 places this computing closer to the data itself.
  • the various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices.
  • multiple controllers in multiple storage units 152 and/or storage nodes 150 cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on).
  • FIG. 2D shows a storage server environment, which uses embodiments of the storage nodes 150 and storage units 152 of FIGS. 2A-C .
  • each storage unit 152 has a processor such as controller 212 (see FIG. 2C ), an FPGA (field programmable gate array), flash memory 206 , and NVRAM 204 (which is super-capacitor backed DRAM 216 , see FIGS. 2B and 2C ) on a PCIe (peripheral component interconnect express) board in a chassis 138 (see FIG. 2A ).
  • the storage unit 152 may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units 152 may fail and the device will continue with no data loss.
  • the physical storage is divided into named regions based on application usage in some embodiments.
  • the NVRAM 204 is a contiguous block of reserved memory in the storage unit 152 DRAM 216 , and is backed by NAND flash.
  • NVRAM 204 is logically divided into multiple memory regions written as spools (e.g., spool_region). Space within the NVRAM 204 spools is managed by each authority 168 independently. Each device provides an amount of storage space to each authority 168 . That authority 168 further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions.
  • onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM 204 are flushed to flash memory 206 . On the next power-on, the contents of the NVRAM 204 are recovered from the flash memory 206 .
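  • The hold-up and recovery cycle can be sketched as a pair of routines, as below, where a JSON image standing in for the NVRAM contents is flushed to a file standing in for flash; the file path and serialization are assumptions made only for the example.

```python
import json

def flush_nvram_to_flash(nvram_region: dict, flash_path: str = "/tmp/nvram.img") -> None:
    """Run during the super-capacitor hold-up interval after power loss."""
    with open(flash_path, "w") as f:
        json.dump(nvram_region, f)

def recover_nvram_from_flash(flash_path: str = "/tmp/nvram.img") -> dict:
    """Run on the next power-on to rebuild the NVRAM contents."""
    with open(flash_path) as f:
        return json.load(f)

nvram = {"spool_region": ["txn-1", "txn-2"]}
flush_nvram_to_flash(nvram)
print(recover_nvram_from_flash())
```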
  • the responsibility of the logical “controller” is distributed across each of the blades containing authorities 168 .
  • This distribution of logical control is shown in FIG. 2D as a host controller 242 , mid-tier controller 244 and storage unit controller(s) 246 . Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade.
  • Each authority 168 effectively serves as an independent controller.
  • Each authority 168 provides its own data and metadata structures, its own background workers, and maintains its own lifecycle.
  • FIG. 2E is a blade 252 hardware block diagram, showing a control plane 254 , compute and storage planes 256 , 258 , and authorities 168 interacting with underlying physical resources, using embodiments of the storage nodes 150 and storage units 152 of FIGS. 2A-C in the storage server environment of FIG. 2D .
  • the control plane 254 is partitioned into a number of authorities 168 which can use the compute resources in the compute plane 256 to run on any of the blades 252 .
  • the storage plane 258 is partitioned into a set of devices, each of which provides access to flash 206 and NVRAM 204 resources.
  • the authorities 168 interact with the underlying physical resources (i.e., devices). From the point of view of an authority 168 , its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities 168 , irrespective of where the authorities happen to run.
  • Each authority 168 has allocated or has been allocated one or more partitions 260 of storage memory in the storage units 152 , e.g. partitions 260 in flash memory 206 and NVRAM 204 . Each authority 168 uses those allocated partitions 260 that belong to it, for writing or reading user data.
  • authorities can be associated with differing amounts of physical storage of the system. For example, one authority 168 could have a larger number of partitions 260 or larger sized partitions 260 in one or more storage units 152 than one or more other authorities 168 .
  • FIG. 2F depicts elasticity software layers in blades 252 of a storage cluster, in accordance with some embodiments.
  • elasticity software is symmetric, i.e., each blade's compute module 270 runs the three identical layers of processes depicted in FIG. 2F .
  • Storage managers 274 execute read and write requests from other blades 252 for data and metadata stored in local storage unit 152 NVRAM 204 and flash 206 .
  • Authorities 168 fulfill client requests by issuing the necessary reads and writes to the blades 252 on whose storage units 152 the corresponding data or metadata resides.
  • Endpoints 272 parse client connection requests received from switch fabric 146 supervisory software, relay the client connection requests to the authorities 168 responsible for fulfillment, and relay the authorities' 168 responses to clients.
  • the symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking.
  • authorities 168 running in the compute modules 270 of a blade 252 perform the internal operations required to fulfill client requests.
  • authorities 168 are stateless, i.e., they cache active data and metadata in their own blades' 252 DRAMs for fast access, but the authorities store every update in their NVRAM 204 partitions on three separate blades 252 until the update has been written to flash 206 . All the storage system writes to NVRAM 204 are in triplicate to partitions on three separate blades 252 in some embodiments. With triple-mirrored NVRAM 204 and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades 252 with no loss of data, metadata, or access to either.
  • because authorities 168 are stateless, they can migrate between blades 252 .
  • Each authority 168 has a unique identifier.
  • NVRAM 204 and flash 206 partitions are associated with authorities' 168 identifiers, not with the blades 252 on which they are running, in some embodiments.
  • the authority 168 continues to manage the same storage partitions from its new location.
  • the system automatically rebalances load by: partitioning the new blade's 252 storage for use by the system's authorities 168 , migrating selected authorities 168 to the new blade 252 , starting endpoints 272 on the new blade 252 and including them in the switch fabric's 146 client connection distribution algorithm.
  • migrated authorities 168 persist the contents of their NVRAM 204 partitions on flash 206 , process read and write requests from other authorities 168 , and fulfill the client requests that endpoints 272 direct to them. Similarly, if a blade 252 fails or is removed, the system redistributes its authorities 168 among the system's remaining blades 252 . The redistributed authorities 168 continue to perform their original functions from their new locations.
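  • A simplified sketch of such rebalancing follows: authorities keep their identifiers while being re-spread across whatever blades are currently present; the round-robin placement is an assumption used only to illustrate the migration.

```python
def rebalance(authorities, blades):
    """Assign authority identifiers round-robin across the current blades."""
    placement = {blade: [] for blade in blades}
    for i, authority in enumerate(sorted(authorities)):
        placement[blades[i % len(blades)]].append(authority)
    return placement

before = rebalance(range(12), ["blade-a", "blade-b", "blade-c"])
after = rebalance(range(12), ["blade-a", "blade-b", "blade-c", "blade-d"])
print(before)
print(after)   # some authorities migrate to the new blade; identifiers are unchanged
```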
  • FIG. 2G depicts authorities 168 and storage resources in blades 252 of a storage cluster, in accordance with some embodiments.
  • Each authority 168 is exclusively responsible for a partition of the flash 206 and NVRAM 204 on each blade 252 .
  • the authority 168 manages the content and integrity of its partitions independently of other authorities 168 .
  • Authorities 168 compress incoming data and preserve it temporarily in their NVRAM 204 partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash 206 partitions. As the authorities 168 write data to flash 206 , storage managers 274 perform the necessary flash translation to optimize write performance and maximize media longevity.
  • authorities 168 “garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities' 168 partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions.
  • the embodiments described herein may utilize various software, communication and/or networking protocols.
  • the configuration of the hardware and/or software may be adjusted to accommodate various protocols.
  • the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS™ environment.
  • the Lightweight Directory Access Protocol (‘LDAP’) may also be utilized as an application protocol for querying and modifying items in directory service providers such as Active Directory.
  • a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network.
  • a Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network.
  • SMB also provides an authenticated inter-process communication mechanism.
  • the embodiments may utilize AMAZON™ S3 (Simple Storage Service), a web-based object storage service, with requests handled through web services interfaces such as representational state transfer (‘REST’), simple object access protocol (‘SOAP’), and BitTorrent. A RESTful API (application programming interface) breaks a transaction down into a series of small modules.
  • Each module addresses a particular underlying part of the transaction.
  • the control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’).
  • An ACL is a list of permissions attached to an object, and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects.
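  • A small sketch of an ACL check in that spirit is shown below; the permission model and names are assumptions for illustration rather than a description of the claimed system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AclEntry:
    principal: str            # user or system process
    operations: frozenset     # e.g. {"read", "write", "delete"}

def is_allowed(acl, principal, operation):
    """Grant the operation only if some entry names both the principal and it."""
    return any(e.principal == principal and operation in e.operations for e in acl)

object_acl = [AclEntry("alice", frozenset({"read", "write"})),
              AclEntry("backup-svc", frozenset({"read"}))]
print(is_allowed(object_acl, "alice", "write"))        # True
print(is_allowed(object_acl, "backup-svc", "delete"))  # False
```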
  • the systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet.
  • the routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations.
  • Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router.
  • the software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments.
  • the embodiments may maintain audit logs.
  • An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations.
  • the embodiments may support various key management policies, such as encryption key rotation.
  • the system may support dynamic root passwords or some variation dynamically changing passwords.
  • FIG. 3A sets forth a diagram of a storage system 306 that is coupled for data communications with a cloud services provider 302 in accordance with some embodiments of the present disclosure.
  • the storage system 306 depicted in FIG. 3A may be similar to the storage systems described above with reference to FIGS. 1A-1D and FIGS. 2A-2G .
  • the storage system 306 depicted in FIG. 3A may be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments.
  • the storage system 306 is coupled to the cloud services provider 302 via a data communications link 304 .
  • the data communications link 304 may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or local area network (‘LAN’), or as some other mechanism capable of transporting digital information between the storage system 306 and the cloud services provider 302 .
  • Such a data communications link 304 may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways.
  • digital information may be exchanged between the storage system 306 and the cloud services provider 302 via the data communications link 304 using one or more data communications protocols.
  • digital information may be exchanged between the storage system 306 and the cloud services provider 302 via the data communications link 304 using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol.
  • the cloud services provider 302 depicted in FIG. 3A may be embodied, for example, as a system and computing environment that provides services to users of the cloud services provider 302 through the sharing of computing resources via the data communications link 304 .
  • the cloud services provider 302 may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on.
  • the shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider 302 with minimal management effort.
  • the user of the cloud services provider 302 is unaware of the exact computing resources utilized by the cloud services provider 302 to provide the services.
  • although a cloud services provider 302 may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider 302 .
  • the cloud services provider 302 may be configured to provide a variety of services to the storage system 306 and users of the storage system 306 through the implementation of various service models.
  • the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of an infrastructure as a service (‘IaaS’) service model where the cloud services provider 302 offers computing infrastructure such as virtual machines and other resources as a service to subscribers.
  • the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a platform as a service (‘PaaS’) service model where the cloud services provider 302 offers a development environment to application developers.
  • Such a development environment may include, for example, an operating system, programming-language execution environment, database, web server, or other components that may be utilized by application developers to develop and run software solutions on a cloud platform.
  • the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a software as a service (‘SaaS’) service model where the cloud services provider 302 offers application software, databases, as well as the platforms that are used to run the applications to the storage system 306 and users of the storage system 306 , providing the storage system 306 and users of the storage system 306 with on-demand software and eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application.
  • the cloud services provider 302 may be further configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of an authentication as a service (‘AaaS’) service model where the cloud services provider 302 offers authentication services that can be used to secure access to applications, data sources, or other resources.
  • the cloud services provider 302 may also be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a storage as a service model where the cloud services provider 302 offers access to its storage infrastructure for use by the storage system 306 and users of the storage system 306 .
  • the cloud services provider 302 may be configured to provide additional services to the storage system 306 and users of the storage system 306 through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider 302 or a limitation as to the service models that may be implemented by the cloud services provider 302 .
  • the cloud services provider 302 may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud.
  • the cloud services provider 302 may be dedicated to providing services to a single organization rather than providing services to multiple organizations.
  • the cloud services provider 302 may provide services to multiple organizations.
  • Public cloud and private cloud deployment models may differ and may come with various advantages and disadvantages.
  • the cloud services provider 302 may be embodied as a mix of a private and public cloud services with a hybrid cloud deployment.
  • the storage system 306 may be coupled to (or even include) a cloud storage gateway.
  • a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on premises with the storage system 306 .
  • Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage system 306 and remote, cloud-based storage that is utilized by the storage system 306 .
  • a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate commands into REST-space protocols that facilitate communications with the cloud services provider 302 .
  • a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider 302 .
  • middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's 302 environment and an organization's environment.
  • cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider 302 , as well as addressing security concerns associated with transferring sensitive data to the cloud services provider 302 over data communications networks.
  • a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow.
  • Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components.
  • the cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained.
  • As described above, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the usage of a SaaS service model, providing the storage system 306 and users of the storage system 306 with on-demand software and eliminating the need to install and run the application on local computers.
  • Such applications may take many forms in accordance with various embodiments of the present disclosure.
  • the cloud services provider 302 may be configured to provide access to data analytics applications to the storage system 306 and users of the storage system 306 .
  • data analytics applications may be configured, for example, to receive telemetry data phoned home by the storage system 306 .
  • Such telemetry data may describe various operating characteristics of the storage system 306 and may be analyzed, for example, to determine the health of the storage system 306 , to identify workloads that are executing on the storage system 306 , to predict when the storage system 306 will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system 306 .
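As a purely illustrative sketch of the kind of analysis such a data analytics application might perform on phoned-home telemetry, the following Python fragment projects when a storage system might exhaust its capacity; the function name and all numbers are assumptions introduced here for illustration only.

```python
# Illustrative sketch only: a trivial linear projection of when a storage
# system might run out of capacity, based on telemetry it has phoned home.
# The function name and all numbers are assumptions, not part of the disclosure.
def days_until_full(capacity_tb: float, used_tb: float, daily_growth_tb: float) -> float:
    if daily_growth_tb <= 0:
        return float("inf")          # capacity is not trending toward full
    return (capacity_tb - used_tb) / daily_growth_tb

print(days_until_full(capacity_tb=500.0, used_tb=410.0, daily_growth_tb=1.5))  # 60.0
```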
  • the cloud services provider 302 may also be configured to provide access to virtualized computing environments to the storage system 306 and users of the storage system 306 .
  • virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others.
  • FIG. 3B sets forth a diagram of a storage system 306 in accordance with some embodiments of the present disclosure.
  • the storage system 306 depicted in FIG. 3B may be similar to the storage systems described above with reference to FIGS. 1A-1D and FIGS. 2A-2G as the storage system may include many of the components described above.
  • the storage system 306 depicted in FIG. 3B may include storage resources 308 , which may be embodied in many forms.
  • the storage resources 308 can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate.
  • the storage resources 308 may include 3D crosspoint non-volatile memory in which bit storage is based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array.
  • the storage resources 308 may include flash memory, including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, and others.
  • the storage resources 308 may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM, in which data is stored through the use of magnetic storage elements.
  • the example storage resources 308 may include non-volatile phase-change memory (‘PCM’) that may have the ability to hold multiple bits in a single cell as cells can achieve a number of distinct intermediary states.
  • the storage resources 308 may include quantum memory that allows for the storage and retrieval of photonic quantum information.
  • the example storage resources 308 may include resistive random-access memory (‘ReRAM’) in which data is stored by changing the resistance across a dielectric solid-state material.
  • the storage resources 308 may include storage class memory (‘SCM’) in which solid-state nonvolatile memory may be manufactured at a high density using some combination of sub-lithographic patterning techniques, multiple bits per cell, multiple layers of devices, and so on. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others.
  • the storage resources 308 depicted in FIG. 3A may be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others.
  • the example storage system 306 depicted in FIG. 3B may implement a variety of storage architectures.
  • storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive.
  • Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level).
  • Storage systems in accordance with some embodiments of the present disclosure may utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format.
  • the example storage system 306 depicted in FIG. 3B may be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof.
  • additional storage may be added by adding additional storage devices.
  • additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on.
  • the storage system 306 depicted in FIG. 3B also includes communications resources 310 that may be useful in facilitating data communications between components within the storage system 306 , as well as data communications between the storage system 306 and computing devices that are outside of the storage system 306 .
  • the communications resources 310 may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system.
  • the communications resources 310 can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC networks.
  • the communications resources 310 can also include FC over Ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks.
  • the communications resources 310 can also include InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters.
  • the communications resources 310 can also include NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed.
  • the communications resources 310 can also include mechanisms for accessing storage resources 308 within the storage system 306 utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources 308 within the storage system 306 to host bus adapters within the storage system 306 , internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources 308 within the storage system 306 , and other communications resources that may be useful in facilitating data communications between components within the storage system 306 , as well as data communications between the storage system 306 and computing devices that are outside of the storage system 306 .
  • the storage system 306 depicted in FIG. 3B also includes processing resources 312 that may be useful in executing computer program instructions and performing other computational tasks within the storage system 306 .
  • the processing resources 312 may include one or more application-specific integrated circuits (‘ASICs’) that are customized for some particular purpose as well as one or more central processing units (‘CPUs’).
  • the processing resources 312 may also include one or more digital signal processors (‘DSPs’), one or more field-programmable gate arrays (‘FPGAs’), one or more systems on a chip (‘SoCs’), or other form of processing resources 312 .
  • the storage system 306 may utilize the processing resources 312 to perform a variety of tasks including, but not limited to, supporting the execution of software resources 314 that will be described in greater detail below.
  • the storage system 306 depicted in FIG. 3B also includes software resources 314 that, when executed by processing resources 312 within the storage system 306 , may perform various tasks.
  • the software resources 314 may include, for example, one or more modules of computer program instructions that when executed by processing resources 312 within the storage system 306 are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems. Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways.
  • Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques.
  • Through the use of such data protection techniques, business continuity and disaster recovery objectives may be met, as a failure of the storage system may not result in the loss of data stored in the storage system.
  • the software resources 314 may also include software that is useful in implementing software-defined storage (‘SDS’).
  • the software resources 314 may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware.
  • Such software resources 314 may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware.
  • the software resources 314 may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources 308 in the storage system 306 .
  • the software resources 314 may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication, and others.
  • the software resources 314 may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resource 308 , software modules that perform data migration operations to migrate data from within a storage system, as well as software modules that perform other functions.
  • Such software resources 314 may be embodied as one or more software containers or in many other ways.
  • Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306 .
  • Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach, or in other ways.
  • the storage system 306 depicted in FIG. 3B may be useful for supporting various types of software applications.
  • the storage system 306 may be useful in supporting artificial intelligence applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, and many other types of applications by providing storage resources to such applications.
  • the storage systems described above may operate to support a wide variety of applications.
  • the storage systems may be well suited to support applications that are resource intensive such as, for example, artificial intelligence applications.
  • artificial intelligence applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal.
  • the storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications.
  • Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed.
  • the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’), to support such applications.
  • Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.
  • Such GPUs may be included within any of the computing devices that are part of the storage systems described above.
  • FIG. 4A sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset ( 442 ) that is synchronized across a plurality of storage systems ( 438 , 440 ) according to some embodiments of the present disclosure.
  • the storage systems ( 438 , 440 ) depicted in FIG. 4A may be similar to the storage systems described above with reference to FIGS. 1A-1D , FIGS. 2A-2G , FIGS. 3A-3B , or any combination thereof.
  • the storage system depicted in FIG. 4A may include the same, fewer, or additional components as the storage systems described above.
  • the dataset ( 442 ) depicted in FIG. 4A may be embodied, for example, as the contents of a particular volume, as the contents of a particular shard of a volume, or as any other collection of one or more data elements.
  • the dataset ( 442 ) may be synchronized across a plurality of storage systems ( 438 , 440 ) such that each storage system ( 438 , 440 ) retains a local copy of the dataset ( 442 ).
  • such a dataset ( 442 ) is synchronously replicated across the storage systems ( 438 , 440 ) in such a way that the dataset ( 442 ) can be accessed through any of the storage systems ( 438 , 440 ) with performance characteristics such that any one storage system in the cluster doesn't operate substantially more optimally than any other storage system in the cluster, at least as long as the cluster and the particular storage system being accessed are running nominally.
  • modifications to the dataset ( 442 ) should be made to the copy of the dataset that resides on each storage system ( 438 , 440 ) in such a way that accessing the dataset ( 442 ) on any storage system ( 438 , 440 ) will yield consistent results.
  • a write request issued to the dataset must be serviced on all storage systems ( 438 , 440 ) or on none of the storage systems ( 438 , 440 ) that were running nominally at the beginning of the write and that remained running nominally through completion of the write.
  • some groups of operations (e.g., two write operations that are directed to the same location within the dataset) may need to be ordered consistently on each storage system ( 438 , 440 ) to yield equivalent results.
  • Modifications to the dataset ( 442 ) need not be made at the exact same time, but some actions (e.g., issuing an acknowledgement that a write request directed to the dataset has completed, enabling read access to a location within the dataset that is targeted by a write request that has not yet been completed on both storage systems) may be delayed until the copy of the dataset on each storage system ( 438 , 440 ) has been modified.
  • the designation of one storage system ( 440 ) as the ‘leader’ and another storage system ( 438 ) as the ‘follower’ may refer to the respective relationships of each storage system for the purposes of synchronously replicating a particular dataset across the storage systems.
  • the leader storage system ( 440 ) may be responsible for performing some processing of an incoming I/O operation and passing such information along to the follower storage system ( 438 ) or performing other tasks that are not required of the follower storage system ( 438 ).
  • the leader storage system ( 440 ) may be responsible for performing tasks that are not required of the follower storage system ( 438 ) for all incoming I/O operations or, alternatively, the leader-follower relationship may be specific to only a subset of the I/O operations that are received by either storage system. For example, the leader-follower relationship may be specific to I/O operations that are directed towards a first volume, a first group of volumes, a first group of logical addresses, a first group of physical addresses, or some other logical or physical delineator.
  • a first storage system may serve as the leader storage system for I/O operations directed to a first set of volumes (or other delineator) while a second storage system may serve as the leader storage system for I/O operations directed to a second set of volumes (or other delineator).
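As a minimal, hypothetical sketch of how a leader-follower relationship might be scoped to a subset of a dataset (here, per volume), the following Python fragment maps volume identifiers to the storage system that acts as leader for requests directed to that volume; the class and system names are assumptions made for illustration.

```python
class VolumeRouter:
    def __init__(self, default_leader: str):
        self.default_leader = default_leader
        self.leader_by_volume = {}

    def set_leader(self, volume_id: str, storage_system: str) -> None:
        self.leader_by_volume[volume_id] = storage_system

    def leader_for(self, volume_id: str) -> str:
        # A first storage system may lead for one set of volumes while a second
        # storage system leads for another set; anything unlisted uses the default.
        return self.leader_by_volume.get(volume_id, self.default_leader)

router = VolumeRouter(default_leader="storage-system-A")
router.set_leader("vol-finance", "storage-system-A")
router.set_leader("vol-analytics", "storage-system-B")
print(router.leader_for("vol-analytics"))   # storage-system-B
print(router.leader_for("vol-untracked"))   # storage-system-A (default)
```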
  • FIG. 4A depicts an embodiment where synchronizing a plurality of storage systems ( 438 , 440 ) occurs in response to the receipt of a request ( 404 ) to modify a dataset ( 442 ) by the leader storage system ( 440 ), although synchronizing a plurality of storage systems ( 438 , 440 ) may also be carried out in response to the receipt of a request ( 404 ) to modify a dataset ( 442 ) by the follower storage system ( 438 ), as will be described in greater detail below.
  • the example method depicted in FIG. 4A includes receiving ( 406 ), by a leader storage system ( 440 ), a request ( 404 ) to modify the dataset ( 442 ).
  • the request ( 404 ) to modify the dataset ( 442 ) may be embodied, for example, as a request to write data to a location within the storage system ( 440 ) that contains data that is included in the dataset ( 442 ), as a request to write data to a volume that contains data that is included in the dataset ( 442 ), as a request to take a snapshot of the dataset ( 442 ), as a virtual range copy, as an UNMAP operation that essentially represents a deletion of some portion of the data in the dataset ( 442 ), as a modifying transformation of the dataset ( 442 ) (rather than a change to a portion of the data within the dataset), or as some other operation that results in a change to some portion of the data that is included in the dataset ( 442 ).
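The following sketch shows one plausible in-memory representation of such a request to modify the dataset; the field names and operation kinds are assumptions introduced for illustration and do not correspond to any wire format disclosed here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModifyRequest:
    op: str                          # e.g. "write", "snapshot", "range_copy", "unmap"
    volume: str                      # volume containing data included in the dataset
    offset: Optional[int] = None     # starting offset for write/unmap/range-copy operations
    length: Optional[int] = None     # number of bytes affected
    payload: Optional[bytes] = None  # I/O payload for write-type operations

req = ModifyRequest(op="write", volume="vol-1", offset=4096, length=8, payload=b"new data")
print(req)
```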
  • the request ( 404 ) to modify the dataset ( 442 ) is issued by a host ( 402 ) that may be embodied, for example, as an application that is executing on a virtual machine, as an application that is executing on a computing device that is connected to the storage system ( 440 ), or as some other entity configured to access the storage system ( 440 ).
  • the example method depicted in FIG. 4A also includes generating ( 408 ), by the leader storage system ( 440 ), information ( 410 ) describing the modification to the dataset ( 442 ).
  • the leader storage system ( 440 ) may generate ( 408 ) the information ( 410 ) describing the modification to the dataset ( 442 ), for example, by determining ordering versus any other operations that are in progress, by determining the proper outcome of overlapping modifications (e.g., the appropriate outcome of two requests to modify the same storage location), calculating any distributed state changes such as to common elements of metadata across all members of the pod (e.g., all storage systems across which the dataset is synchronously replicated), and so on.
  • the information ( 410 ) describing the modification to the dataset ( 442 ) may be embodied, for example, as system-level information that is used to describe an I/O operation that is to be performed by a storage system.
  • the leader storage system ( 440 ) may generate ( 408 ) the information ( 410 ) describing the modification to the dataset ( 442 ) by processing the request ( 404 ) to modify the dataset ( 442 ) just enough to figure out what should happen in order to service the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may determine whether some ordering of the execution of the request ( 404 ) to modify the dataset ( 442 ) relative to other requests to modify the dataset ( 442 ) is required, or some other steps must be taken as described in greater detail below, to produce an equivalent result on each storage system ( 438 , 440 ).
  • the request ( 404 ) to modify the dataset ( 442 ) is embodied as a request to copy blocks from a first address range in the dataset ( 442 ) to a second address range in the dataset ( 442 ).
  • In this example, assume that three other write operations (write A, write B, and write C) are directed to the first address range in the dataset ( 442 ).
  • the leader storage system ( 440 ) services write A and write B (but does not service write C) prior to copying the blocks from the first address range in the dataset ( 442 ) to the second address range in the dataset ( 442 )
  • the follower storage system ( 438 ) must also service write A and write B (but does not service write C) prior to copying the blocks from the first address range in the dataset ( 442 ) to the second address range in the dataset ( 442 ) in order to yield consistent results.
  • the leader storage system ( 440 ) when the leader storage system ( 440 ) generates ( 408 ) the information ( 410 ) describing the modification to the dataset ( 442 ), in this example, the leader storage system ( 440 ) could generate information (e.g., sequence numbers for write A and write B) that identifies other operations that must be completed before the follower storage system ( 438 ) can process the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may generate ( 408 ) information ( 410 ) describing the modification to the dataset ( 442 ) that includes information that identifies the proper outcome of the two requests. For example, if write B logically follows write A (and overlaps with write A), the end result must be that the dataset ( 442 ) includes the parts of write B that overlap with write A, rather than including the parts of write A that overlap with write B.
  • Such an outcome could be facilitated by merging a result in memory and writing the result of such a merge to the dataset ( 442 ), rather than strictly requiring that a particular storage system ( 438 , 440 ) execute write A and then subsequently execute write B. Readers will appreciate that more subtle cases relate to snapshots and virtual address range copies.
  • Writes A, B, C, and D, coupled with a snapshot between A,B and C,D could commit and/or acknowledge some or all parts together as long as recovery cannot result in a snapshot inconsistency across arrays and as long as acknowledgement does not complete a later operation before an earlier operation has been persisted to the point that it is guaranteed to be recoverable.
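The following sketch illustrates the merging behavior described above for two overlapping writes, where the logically later write must win on the overlapping range; the byte-range representation is an assumption chosen for brevity.

```python
def apply_write(region: bytearray, offset: int, payload: bytes) -> None:
    region[offset:offset + len(payload)] = payload

def merge_writes(region: bytearray, writes) -> None:
    # Applying the writes in their logical order is equivalent to merging them
    # in memory first: the later payload wins wherever the ranges overlap.
    for offset, payload in writes:
        apply_write(region, offset, payload)

region = bytearray(16)
write_a = (0, b"AAAAAAAA")      # write A covers bytes 0-7
write_b = (4, b"BBBBBBBB")      # write B logically follows A and overlaps bytes 4-7
merge_writes(region, [write_a, write_b])
print(region)                   # bytearray(b'AAAABBBBBBBB\x00\x00\x00\x00')
```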
  • the example method depicted in FIG. 4A also includes sending ( 412 ), from the leader storage system ( 440 ) to a follower storage system ( 438 ), information ( 410 ) describing the modification to the dataset ( 442 ).
  • Sending ( 412 ) information ( 410 ) describing the modification to the dataset ( 442 ) from the leader storage system ( 440 ) to a follower storage system ( 438 ) may be carried out, for example, by the leader storage system ( 440 ) sending one or more messages to the follower storage system ( 438 ).
  • the leader storage system ( 440 ) may also send, in the same messages or in one or more different messages, I/O payload ( 414 ) for the request ( 404 ) to modify the dataset ( 442 ).
  • the I/O payload ( 414 ) may be embodied, for example, as data that is to be written to storage within the follower storage system ( 438 ) when the request ( 404 ) to modify the dataset ( 442 ) is embodied as a request to write data to the dataset ( 442 ).
  • Because the request ( 404 ) to modify the dataset ( 442 ) was received ( 406 ) by the leader storage system ( 440 ), the follower storage system ( 438 ) has not received the I/O payload ( 414 ) associated with the request ( 404 ) to modify the dataset ( 442 ).
  • the information ( 410 ) describing the modification to the dataset ( 442 ) and the I/O payload ( 414 ) that is associated with the request ( 404 ) to modify the dataset ( 442 ) may be sent ( 412 ) from the leader storage system ( 440 ) to the follower storage system ( 438 ) via one or more data communications networks that couple the leader storage system ( 440 ) to the follower storage system ( 438 ), via one or more dedicated data communications links (e.g., a first link for sending I/O payload and a second link for sending information describing modifications to datasets) that couples the leader storage system ( 440 ) to the follower storage system ( 438 ), or via some other mechanism.
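As a hedged sketch of how the information describing the modification and the I/O payload might be bundled into a single message (the actual encoding and transport are not specified here), consider the following Python fragment; the JSON/base64 encoding and field names are assumptions.

```python
import base64
import json

def build_replication_message(request_id: int, description: dict, payload: bytes) -> bytes:
    # The description carries ordering information, predicates, metadata, etc.;
    # the payload is the data to be written on the follower storage system.
    return json.dumps({
        "request_id": request_id,
        "description": description,
        "payload": base64.b64encode(payload).decode("ascii"),
    }).encode("utf-8")

message = build_replication_message(
    request_id=17,
    description={"type": "write", "volume": "vol-1", "offset": 4096, "precursors": [15, 16]},
    payload=b"new data",
)
# The message could now be sent over a shared network or a dedicated link.
print(len(message), "bytes")
```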
  • the example method depicted in FIG. 4A also includes receiving ( 416 ), by the follower storage system ( 438 ), the information ( 410 ) describing the modification to the dataset ( 442 ).
  • the follower storage system ( 438 ) may receive ( 416 ) the information ( 410 ) describing the modification to the dataset ( 442 ) and I/O payload ( 414 ) from the leader storage system ( 440 ), for example, via one or more messages that are sent from the leader storage system ( 440 ) to the follower storage system ( 438 ).
  • the one or more messages may be sent from the leader storage system ( 440 ) to the follower storage system ( 438 ) via one or more dedicated data communications links between the two storage systems ( 438 , 440 ), by the leader storage system ( 440 ) writing the message to a predetermined memory location (e.g., the location of a queue) on the follower storage system ( 438 ) using RDMA or a similar mechanism, or in other ways.
  • the follower storage system ( 438 ) may receive ( 416 ) the information ( 410 ) describing the modification to the dataset ( 442 ) and I/O payload ( 414 ) from the leader storage system ( 440 ) through the use of SCSI requests (writes from sender to receiver, or reads from receiver to sender) as a communication mechanism.
  • a SCSI Write request is used to encode information that is intended to be sent (which includes whatever data and metadata), and which may be delivered to a special pseudo-device or over a specially configured SCSI network, or through any other agreed upon addressing mechanism.
  • the model can issue a set of open SCSI read requests from a receiver to a sender, also using special devices, specially configured SCSI networks, or other agreed upon mechanisms. Encoded information including data and metadata will be delivered to the receiver as a response to one or more of these open SCSI requests.
  • Such a model can be implemented over Fibre Channel SCSI networks, which are often deployed as the “dark fibre” storage network infrastructure between data centers. Such a model also allows the use of the same network lines for host-to-remote-array multipathing and bulk array-to-array communications.
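The following in-process sketch mimics the pattern described above, in which the receiver keeps several read requests outstanding and the sender answers one of them whenever message data is available; Python queues stand in for the SCSI transport, and all names are illustrative assumptions.

```python
import queue
import threading

open_reads = queue.Queue()   # read requests posted by the receiver (the message sink)
responses = queue.Queue()    # message data returned by the sender (the message source)

def receiver() -> None:
    for _ in range(3):
        open_reads.put("READ")            # keep several reads outstanding at the sender
    for _ in range(3):
        print("receiver got:", responses.get())

def sender() -> None:
    for message in (b"description+payload-1", b"description+payload-2", b"keepalive"):
        open_reads.get()                  # consume one outstanding read request
        responses.put(message)            # answer it with message data

t = threading.Thread(target=receiver)
t.start()
sender()
t.join()
```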
  • the example method depicted in FIG. 4A also includes processing ( 418 ), by the follower storage system ( 438 ), the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may process ( 418 ) the request ( 404 ) to modify the dataset ( 442 ) by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the follower storage system ( 438 ) in dependence upon the information ( 410 ) describing the modification to the dataset ( 442 ) as well as the I/O payload ( 414 ) that was received from the leader storage system ( 440 ).
  • processing ( 418 ) the request ( 404 ) to modify the dataset ( 442 ) may be carried out by the follower storage system ( 438 ) first verifying that the previously issued write operation has been processed on the follower storage system ( 438 ) and subsequently writing I/O payload ( 414 ) associated with the write operation to one or more storage devices that are included in the follower storage system ( 438 ).
  • the request ( 404 ) to modify the dataset ( 442 ) may be considered to have been completed and successfully processed, for example, when the I/O payload ( 414 ) has been committed to persistent storage within the follower storage system ( 438 ).
  • the example method depicted in FIG. 4A also includes acknowledging ( 420 ), by the follower storage system ( 438 ) to the leader storage system ( 440 ), completion of the request ( 404 ) to modify the dataset ( 442 ).
  • acknowledging ( 420 ), by the follower storage system ( 438 ) to the leader storage system ( 440 ), completion of the request ( 404 ) to modify the dataset ( 442 ) may be carried out by the follower storage system ( 438 ) sending an acknowledgment ( 422 ) message to the leader storage system ( 440 ).
  • Such messages may include, for example, information identifying the particular request ( 404 ) to modify the dataset ( 442 ) that was completed as well as any additional information useful in acknowledging ( 420 ) the completion of the request ( 404 ) to modify the dataset ( 442 ) by the follower storage system ( 438 ).
  • acknowledging ( 420 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the leader storage system ( 440 ) is illustrated by the follower storage system ( 438 ) issuing an acknowledgment ( 422 ) message to the leader storage system ( 440 ).
  • the example method depicted in FIG. 4A also includes processing ( 424 ), by the leader storage system ( 440 ), the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may process ( 424 ) the request ( 404 ) to modify the dataset ( 442 ) by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the leader storage system ( 440 ) in dependence upon the information ( 410 ) describing the modification to the dataset ( 442 ) as well as the I/O payload ( 414 ) that was received as part of the request ( 404 ) to modify the dataset ( 442 ).
  • processing ( 424 ) the request ( 404 ) to modify the dataset ( 442 ) may be carried out by the leader storage system ( 440 ) first verifying that the previously issued write operation has been processed by the leader storage system ( 440 ) and subsequently writing I/O payload ( 414 ) associated with the write operation to one or more storage devices that are included in the leader storage system ( 440 ).
  • the request ( 404 ) to modify the dataset ( 442 ) may be considered to have been completed and successfully processed, for example, when the I/O payload ( 414 ) has been committed to persistent storage within the leader storage system ( 440 ).
  • the example method depicted in FIG. 4A also includes receiving ( 426 ), from the follower storage system ( 438 ), an indication that the follower storage system ( 438 ) has processed the request ( 404 ) to modify the dataset ( 442 ).
  • the indication that the follower storage system ( 438 ) has processed the request ( 404 ) to modify the dataset ( 442 ) is embodied as an acknowledgement ( 422 ) message sent from the follower storage system ( 438 ) to the leader storage system ( 440 ). Readers will appreciate that although many of the steps described above are depicted and described as occurring in a particular order, no particular order is actually required.
  • each storage system may be performing some of the steps described above in parallel.
  • the follower storage system ( 438 ) may receive ( 416 ) the information ( 410 ) describing the modification to the dataset ( 442 ), process ( 418 ) the request ( 404 ) to modify the dataset ( 442 ), or acknowledge ( 420 ) completion of the request ( 404 ) to modify the dataset ( 442 ) before the leader storage system ( 440 ) has processed ( 424 ) the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may have processed ( 424 ) the request ( 404 ) to modify the dataset ( 442 ) before the follower storage system ( 438 ) has received ( 416 ) the information ( 410 ) describing the modification to the dataset ( 442 ), processed ( 418 ) the request ( 404 ) to modify the dataset ( 442 ), or acknowledged ( 420 ) completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the example method depicted in FIG. 4A also includes acknowledging ( 434 ), by the leader storage system ( 440 ), completion of the request ( 404 ) to modify the dataset ( 442 ).
  • acknowledging ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ) may be carried out through the use of one or more acknowledgement ( 436 ) messages that are sent from the leader storage system ( 440 ) to the host ( 402 ) or via some other appropriate mechanism.
  • the leader storage system ( 440 ) may determine ( 428 ) whether the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 418 ) by the follower storage system ( 438 ) prior to acknowledging ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may determine ( 428 ) whether the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 418 ) by the follower storage system ( 438 ), for example, by determining whether the leader storage system ( 440 ) has received an acknowledgment message or other message from the follower storage system ( 438 ) indicating that the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 418 ) by the follower storage system ( 438 ).
  • the leader storage system ( 440 ) may proceed by acknowledging ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may not yet acknowledge ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ), as the leader storage system ( 440 ) may only acknowledge ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ) when the request ( 404 ) to modify the dataset ( 442 ) has been successfully processed on all storage systems ( 438 , 440 ) across which a dataset ( 442 ) is synchronously replicated.
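Pulling the steps of FIG. 4A together, the following simplified sketch shows a leader that generates the information describing the modification, forwards it (with the I/O payload) to a follower, applies the modification locally, and acknowledges the host only after the follower has acknowledged; the class names are assumptions, the two systems are plain in-process objects, and the real steps may of course proceed in parallel rather than strictly in sequence.

```python
# Simplified, sequential sketch of the FIG. 4A flow; in practice the leader and
# follower may process the request in parallel. All names are assumptions.
class FollowerStorageSystem:
    def __init__(self):
        self.store = {}

    def process(self, description: dict, payload: bytes) -> str:
        # Commit the payload to (simulated) persistent storage, then acknowledge.
        self.store[description["target"]] = payload
        return "ack"

class LeaderStorageSystem:
    def __init__(self, follower: FollowerStorageSystem):
        self.follower = follower
        self.store = {}

    def modify_dataset(self, request: dict) -> str:
        description = {"target": request["target"], "op": "write"}              # generate info (408)
        follower_ack = self.follower.process(description, request["payload"])   # send (412)
        self.store[request["target"]] = request["payload"]                      # process locally (424)
        if follower_ack != "ack":
            raise RuntimeError("request was not completed on all storage systems")
        return "completed"                                                       # acknowledge host (434)

leader = LeaderStorageSystem(FollowerStorageSystem())
print(leader.modify_dataset({"target": "vol-1/block-42", "payload": b"new data"}))
```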
  • sending ( 412 ), from the leader storage system ( 440 ) to a follower storage system ( 438 ), information ( 410 ) describing the modification to the dataset ( 442 ) and acknowledging ( 420 ), by the follower storage system ( 438 ) to the leader storage system ( 440 ), completion of the request ( 404 ) to modify the dataset ( 442 ) can be carried out using single roundtrip messaging.
  • Single roundtrip messaging may be used, for example, through the use of Fibre Channel as a data interconnect. Typically, SCSI protocols are used with Fibre Channel.
  • Such interconnects are commonly provisioned between data centers because some older replication technologies may be built to essentially replicate data as SCSI transactions over Fibre Channel networks. Also, historically Fibre Channel SCSI infrastructure had less overhead and lower latencies than networks based on Ethernet and TCP/IP. Further, when data centers are internally connected to block storage arrays using Fibre Channel, the Fibre Channel networks may be stretched to other data centers so that hosts in one data center can switch to accessing storage arrays in a remote data center when local storage arrays fail.
  • SCSI could be used as a general communication mechanism, even though it is normally designed for use with block storage protocols for storing and retrieving data in block-oriented volumes (or for tape).
  • SCSI READ or SCSI WRITE could be used to deliver or retrieve message data between storage controllers in paired storage systems.
  • a typical implementation of SCSI WRITE requires two message round trips: a SCSI initiator sends a SCSI CDB describing the SCSI WRITE operation, a SCSI target receives that CDB and the SCSI target sends a “Ready to Receive” message to the SCSI initiator. The SCSI initiator then sends data to the SCSI target and when SCSI WRITE is complete the SCSI target responds to the SCSI initiator with a Success completion.
  • a SCSI READ request requires only one round trip: the SCSI initiator sends a SCSI CDB describing the SCSI READ operation, a SCSI target receives that CDB and responds with data and then a Success completion.
  • a SCSI READ incurs half of the distance-related latency of a SCSI WRITE. Because of this, it may be faster for a data communications receiver to use SCSI READ requests to receive messages than for a sender of messages to use SCSI WRITE requests to send data.
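A back-of-the-envelope comparison makes the half-latency point concrete; the one-way latency figure below is an arbitrary assumption.

```python
one_way_latency_ms = 2.5                  # assumed one-way latency between data centers
round_trip_ms = 2 * one_way_latency_ms

scsi_write_ms = 2 * round_trip_ms         # CDB then Ready-to-Receive; data then Success
scsi_read_ms = 1 * round_trip_ms          # CDB; data and Success in the response

print(f"message via SCSI WRITE: {scsi_write_ms} ms")   # 10.0 ms
print(f"message via SCSI READ:  {scsi_read_ms} ms")    # 5.0 ms
```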
  • Using SCSI READ simply requires a message sender to operate as a SCSI target, and for a message receiver to operate as a SCSI initiator.
  • a message receiver may send some number of SCSI CDB READ requests to any message sender, and the message sender would respond to one of the outstanding CDB READ requests when message data is available. Since SCSI subsystems may timeout if a READ request is outstanding for too long (e.g., 10 seconds), READ requests should be responded to within a few seconds even if there is no message data to be sent.
  • SCSI tape requests, as described in the SCSI Stream Commands standard from the T10 Technical Committee of the InterNational Committee on Information Technology Standards, support variable response data, which can be more flexible for returning variable-sized message data.
  • the SCSI standard also supports an Immediate mode for SCSI WRITE requests, which could allow single-round-trip SCSI WRITE commands. Readers will appreciate that many of the embodiments described below also utilize single roundtrip messaging.
  • FIG. 4B sets forth a flow chart illustrating an additional example method for servicing I/O operations directed to a dataset ( 442 ) that is synchronized across a plurality of storage systems ( 438 , 440 , 450 ) according to some embodiments of the present disclosure.
  • the storage systems ( 438 , 440 , 450 ) depicted in FIG. 4B may be similar to the storage systems described above with reference to FIGS. 1A-1D , FIGS. 2A-2G , FIGS. 3A-3B , or any combination thereof.
  • the storage systems depicted in FIG. 4B may include the same, fewer, or additional components as the storage systems described above.
  • FIG. 4B is similar to the example method depicted in FIG. 4A , as the example method depicted in FIG. 4B also includes: receiving ( 406 ), by a leader storage system ( 440 ), a request ( 404 ) to modify the dataset ( 442 ); generating ( 408 ), by the leader storage system ( 440 ), information ( 410 ) describing the modification to the dataset ( 442 ); sending ( 412 ), from the leader storage system ( 440 ) to a follower storage system ( 438 ), information ( 410 ) describing the modification to the dataset ( 442 ); receiving ( 416 ), by the follower storage system ( 438 ), the information ( 410 ) describing the modification to the dataset ( 442 ); processing ( 418 ), by the follower storage system ( 438 ), the request ( 404 ) to modify the dataset ( 442 ); acknowledging ( 420 ), by the follower storage system ( 438 ) to the leader storage system ( 440 ), completion of the request ( 404 ) to modify the dataset ( 442 ); processing ( 424 ), by the leader storage system ( 440 ), the request ( 404 ) to modify the dataset ( 442 ); receiving ( 426 ), from the follower storage system ( 438 ), an indication that the follower storage system ( 438 ) has processed the request ( 404 ) to modify the dataset ( 442 ); and acknowledging ( 434 ), by the leader storage system ( 440 ), completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the example method depicted in FIG. 4B differs from the example method depicted in FIG. 4A , however, as the example method depicted in FIG. 4B depicts an embodiment in which the dataset ( 442 ) is synchronously replicated across three storage systems, where one of the storage systems is a leader storage system ( 440 ) and the remaining storage systems are follower storage systems ( 438 , 450 ).
  • the additional follower storage system ( 450 ) carries out many of the same steps as the follower storage system ( 438 ) that was depicted in FIG. 4A .
  • the additional follower storage system ( 450 ) can: receive ( 442 ), from the leader storage system ( 440 ), information ( 410 ) describing the modification to the data set ( 442 ); process ( 444 ) the request ( 404 ) to modify the data set ( 442 ) in dependence upon the information ( 410 ) describing the modification to the data set ( 442 ); acknowledge ( 446 ), to the leader storage system ( 440 ), completion of the request ( 404 ) to modify the dataset ( 442 ) through the use of an acknowledgement ( 448 ) message or other appropriate mechanism; and so on.
  • the information ( 410 ) describing the modification to the data set ( 442 ) can include ordering information ( 452 ) for the request ( 404 ) to modify the dataset ( 442 ).
  • the ordering information ( 452 ) for the request ( 404 ) to modify the dataset ( 442 ) can represent descriptions of relationships between operations (e.g., requests to modify the dataset) and common metadata updates that can be described by the leader storage system ( 440 ) as a set of interdependencies between separate requests to modify the dataset and possibly between requests to modify the dataset and various metadata changes. These interdependencies can be described as a set of precursors that one request to modify the dataset depends on in some way, as predicates that must be true for that request to modify the dataset to complete.
  • a queue predicate is one example of predicates that must be true for that request to modify the dataset to complete.
  • a queue predicate can stipulate that a particular request to modify the dataset cannot complete until a previous request to modify the dataset completes.
  • Queue predicates can be used, for example, for overlapping write-type operations.
  • the leader storage system ( 440 ) can declare that a second write-type operation logically follows a first such operation, so the second write-type operation can't complete until the first write-type operation completes.
  • the second write-type operation may not even be made durable until it is ensured that the first such write-type operation is durable (the two operations can be made durable together).
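A minimal sketch of a queue predicate, under the assumption that an operation simply records the single precursor operation it logically follows, might look as follows; the class and method names are illustrative.

```python
class Operation:
    def __init__(self, name: str, predicate: "Operation" = None):
        self.name = name
        self.predicate = predicate      # operation that must complete first, if any
        self.completed = False

    def can_complete(self) -> bool:
        return self.predicate is None or self.predicate.completed

    def complete(self) -> None:
        if not self.can_complete():
            raise RuntimeError(f"{self.name} is predicated on {self.predicate.name}")
        self.completed = True

write_a = Operation("write A")
write_b = Operation("write B", predicate=write_a)   # write B logically follows write A

assert not write_b.can_complete()   # B cannot complete (or be made durable alone) yet
write_a.complete()
write_b.complete()                  # now permitted; A and B could also be made durable together
```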
  • Queue predicates could also be used for snapshot operations and virtual block range copy operations, by declaring that a known set of incomplete precursor (e.g., a set of write-type) operations must each complete before a snapshot can complete, and as further operations are identified as following the snapshot (prior to the snapshot being complete) each of these operations can be predicated on the snapshot operation itself completing. This predicate could also indicate that those following operations apply to the post-snapshot image of a volume rather than included in the snapshot.
  • An alternative predicate that could be used for snapshots is to assign an identifier to every snapshot, and to associate all modifying operations that can be included in a particular snapshot with that identifier. Then, the snapshot can complete when all of the included modifying operations complete. This can be done with a counting predicate.
  • Each storage system across which a dataset is synchronously replicated can implement its own count of operations associated with time since the last snapshot or since some other relatively infrequent operation (or for embodiments that implement multiple leader storage systems, with those operations organized by a particular leader storage system, a count can be established by that leader storage system for the parts of a dataset it controls).
  • the snapshot operation itself can then include a counting predicate that depends on that number of operations being received and made durable before the snapshot can itself be made durable or be signaled as completed. Modifying operations that should follow the snapshot (prior to the snapshot completing) can either be delayed, given a queue predicate dependent on the snapshot, or the snapshot identity can be used as an indication that the modifying operation should be excluded from the snapshot.
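A counting predicate of the kind described above could be sketched as follows, assuming each modifying operation included in the snapshot simply increments a completion count associated with the snapshot identifier; all names are illustrative.

```python
class SnapshotPredicate:
    def __init__(self, snapshot_id: str, expected_count: int):
        self.snapshot_id = snapshot_id
        self.expected = expected_count   # modifying operations included in this snapshot
        self.completed = 0

    def operation_completed(self) -> None:
        self.completed += 1

    def snapshot_may_complete(self) -> bool:
        # The snapshot can be made durable / signaled complete only once every
        # included modifying operation has completed.
        return self.completed >= self.expected

pred = SnapshotPredicate(snapshot_id="snap-7", expected_count=3)
for _ in range(3):                       # the three in-flight writes included in snap-7
    pred.operation_completed()
print(pred.snapshot_may_complete())      # True
```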
  • virtual block range copies (e.g., SCSI EXTENDED COPY or similar operations) may be handled using similar predicates.
  • the request ( 404 ) to modify the dataset ( 442 ) can include a request to take a snapshot of the dataset ( 442 ) and the ordering information ( 452 ) for the request ( 404 ) to modify the dataset ( 442 ) can therefore include an identification of one or more other requests to modify the dataset that must be completed prior to taking the snapshot of the dataset ( 442 ).
  • the information ( 410 ) describing the modification to the data set ( 442 ) can include common metadata information ( 454 ) associated with the request ( 404 ) to modify the dataset ( 442 ).
  • the common metadata information ( 454 ) associated with the request ( 404 ) to modify the dataset ( 442 ) may be used to ensure that metadata associated with the dataset ( 442 ) is kept consistent on each storage system ( 438 , 440 , 450 ) across which the dataset ( 442 ) is synchronously replicated.
  • Common metadata in this context may be embodied, for example, as any data other than the content stored into the dataset ( 442 ) by one or more requests (e.g., one or more write requests issued by a host).
  • the common metadata may include data that a synchronous replication implementation keeps in some way consistent across storage systems ( 438 , 440 , 450 ) that a dataset ( 442 ) is synchronously replicated across, particularly if that common metadata relates to how the stored content is managed, recovered, resynchronized, snapshotted, or asynchronously replicated. Readers will appreciate that two or more modifying operations may depend on the same common metadata, where ordering of the modifying operations themselves is unnecessary, but consistent application of the common metadata once rather than twice is necessary.
  • One way to handle multiple dependence on common metadata is to define the metadata in a separate operation instantiated and described from a leader storage system. Then, two modifying operations that depend on that common metadata can be given a queue predicate that depends on that modifying operation.
  • Another way to handle multiple dependence on common metadata is to associate the common metadata with a first of two operations, and make the second operation depend on the first.
  • a variation makes the second operation dependent only on the common metadata aspects of the first, such that only that part of the first operation has to be made durable before the second operation can be processed.
  • Yet another way of handling multiple dependence on common metadata is to include the common metadata in all operation descriptions that depend on that common metadata.
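Taking the first of the approaches above, the common metadata update can itself be modeled as an operation that both dependent writes take a queue predicate on, so the metadata is applied exactly once on each storage system; the sketch below uses a stripped-down operation type with illustrative names.

```python
class Op:
    def __init__(self, name: str, predicate: "Op" = None):
        self.name, self.predicate, self.done = name, predicate, False

    def complete(self) -> None:
        assert self.predicate is None or self.predicate.done, \
            f"{self.name} must wait for {self.predicate.name}"
        self.done = True

metadata_update = Op("common metadata update")
write_x = Op("write X", predicate=metadata_update)
write_y = Op("write Y", predicate=metadata_update)

metadata_update.complete()   # the common metadata is applied once
write_x.complete()           # neither write depends on the other's ordering,
write_y.complete()           # only on the shared metadata operation
```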
  • receiving ( 426 ) an indication that the follower storage system has processed the request ( 404 ) to modify the dataset ( 442 ) can include receiving ( 456 ), from each of the follower storage systems ( 438 , 450 ), an indication that the follower storage system ( 438 , 450 ) has processed the request ( 404 ) to modify the dataset ( 442 ).
  • the indication that each follower storage system ( 438 , 450 ) has processed the request ( 404 ) to modify the dataset ( 442 ) is embodied as distinct acknowledgement ( 422 , 448 ) messages sent from each follower storage system ( 438 , 450 ) to the leader storage system ( 440 ).
  • one or more of the follower storage systems ( 438 , 450 ) may receive ( 416 , 442 ) the information ( 410 ) describing the modification to the dataset ( 442 ), process ( 418 , 444 ) the request ( 404 ) to modify the dataset ( 442 ), or acknowledge ( 420 , 446 ) completion of the request ( 404 ) to modify the dataset ( 442 ) before the leader storage system ( 440 ) has processed ( 424 ) the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may have processed ( 424 ) the request ( 404 ) to modify the dataset ( 442 ) before one or more of the follower storage systems ( 438 , 450 ) have received ( 416 , 442 ) the information ( 410 ) describing the modification to the dataset ( 442 ), processed ( 418 , 444 ) the request ( 404 ) to modify the dataset ( 442 ), or acknowledged ( 420 , 446 ) completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the example method depicted in FIG. 4B also includes determining ( 458 ), by the leader storage system ( 440 ), whether the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 418 , 444 ) by each of the follower storage systems ( 438 , 450 ) prior to acknowledging ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may determine ( 458 ) whether the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 418 , 444 ) by each of the follower storage systems ( 438 , 450 ), for example, by determining whether the leader storage system ( 440 ) has received an acknowledgment messages or other messages from each of the follower storage systems ( 438 , 450 ) indicating that the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 418 , 444 ) by each of the follower storage systems ( 438 , 450 ).
  • the leader storage system ( 440 ) may proceed by acknowledging ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may not yet acknowledge ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ), as the leader storage system ( 440 ) may only acknowledge ( 434 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ) when the request ( 404 ) to modify the dataset ( 442 ) has been successfully processed on all storage systems ( 438 , 440 , 450 ) across which the dataset ( 442 ) is synchronously replicated.
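The gating condition for the multi-follower case reduces to a simple set check, sketched below with illustrative follower names.

```python
def all_followers_acknowledged(acks_received: set, followers: set) -> bool:
    # The leader may acknowledge the host only once every follower has acknowledged.
    return followers.issubset(acks_received)

followers = {"follower-438", "follower-450"}
acks = {"follower-438"}
print(all_followers_acknowledged(acks, followers))   # False: still waiting on follower-450
acks.add("follower-450")
print(all_followers_acknowledged(acks, followers))   # True: safe to acknowledge the host
```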
  • Although FIG. 4B depicts an embodiment in which the dataset ( 442 ) is synchronously replicated across three storage systems, where one of the storage systems is a leader storage system ( 440 ) and the remaining storage systems are follower storage systems ( 438 , 450 ), other embodiments may include additional storage systems. In such other embodiments, additional follower storage systems may operate in the same way as the follower storage systems ( 438 , 450 ) depicted in FIG. 4B .
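  • As a rough illustration of the leader-coordinated flow described above for FIG. 4B , the sketch below shows a leader that forwards the description of a modification to every follower, applies it locally, and acknowledges the host only after every follower has acknowledged. The Leader, Follower, and handle_request names are hypothetical stand-ins, not the patent's implementation.

```python
# A minimal, hypothetical sketch of the leader-coordinated flow: the leader
# acknowledges the host only once it has processed the request locally and
# every follower has reported that it processed the request as well.
from concurrent.futures import ThreadPoolExecutor

class Follower:
    def __init__(self, name):
        self.name = name
    def apply(self, info, payload):
        # Persist the modification on the follower's copy of the dataset.
        return f"ack:{self.name}"

class Leader:
    def __init__(self, followers):
        self.followers = followers
    def handle_request(self, request):
        info, payload = self.describe(request), request["payload"]
        with ThreadPoolExecutor() as pool:
            # Send the description (and payload) to every follower in parallel.
            pending = [pool.submit(f.apply, info, payload) for f in self.followers]
            self.apply_locally(info, payload)
            acks = [p.result() for p in pending]     # wait for every follower
        assert len(acks) == len(self.followers)
        return "completed"                           # acknowledge to the host
    def describe(self, request):
        return {"op": "write", "volume": request["volume"], "offset": request["offset"]}
    def apply_locally(self, info, payload):
        pass  # commit to NVRAM/SSD on the leader

leader = Leader([Follower("B"), Follower("C")])
print(leader.handle_request({"volume": "vol-1", "offset": 0, "payload": b"data"}))
```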
  • FIG. 5A sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset ( 442 ) that is synchronized across a plurality of storage systems ( 438 , 440 ) according to some embodiments of the present disclosure.
  • the storage systems ( 438 , 440 ) depicted in FIG. 5A may be similar to the storage systems described above with reference to FIGS. 1A-1D , FIGS. 2A-2G , FIGS. 3A-3B , or any combination thereof.
  • the storage system depicted in FIG. 5A may include the same, fewer, or additional components as the storage systems described above.
  • the example method depicted in FIG. 5A includes receiving ( 502 ), by a follower storage system ( 438 ), a request ( 404 ) to modify the dataset ( 442 ).
  • the request ( 404 ) to modify the dataset ( 442 ) may be embodied, for example, as a request to write data to a location within the storage system ( 438 ) that contains data that is included in the dataset ( 442 ), as a request to write data to a volume that contains data that is included in the dataset ( 442 ), or as some other operation that results in a change to some portion of the data that is included in the dataset ( 442 ).
  • the request ( 404 ) to modify the dataset ( 442 ) is issued by a host ( 402 ) that may be embodied, for example, as an application that is executing on a virtual machine, as an application that is executing on a computing device that is connected to the storage system ( 438 ), or as some other entity configured to access the storage system ( 438 ).
  • the example method depicted in FIG. 5A also includes sending ( 504 ), from the follower storage system ( 438 ) to a leader storage system ( 440 ), a logical description ( 506 ) of the request ( 404 ) to modify the dataset ( 442 ).
  • the logical description ( 506 ) of the request ( 404 ) to modify the dataset ( 442 ) may be formatted in a way that is understood by the leader storage system ( 440 ) and may contain information describing the type of operation (e.g., a write operation) that is to be performed, as well as other information characterizing the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may simply forward some portion (or all) of the request ( 404 ) to modify the dataset ( 442 ) to the leader storage system ( 440 ).
  • the example method depicted in FIG. 5A also includes generating ( 508 ), by the leader storage system ( 440 ), information ( 510 ) describing the modification to the dataset ( 442 ).
  • the leader storage system ( 440 ) may generate ( 508 ) the information ( 510 ) describing the modification to the dataset ( 442 ), for example, by determining ordering versus any other operations that are in progress, calculating any distributed state changes such as to common elements of metadata across all members of the pod (e.g., all storage systems across which the dataset is synchronously replicated), and so on.
  • the information ( 510 ) describing the modification to the dataset ( 442 ) may be embodied, for example, as system-level information that is used to describe an I/O operation that is to be performed by a storage system.
  • the leader storage system ( 440 ) may generate ( 508 ) the information ( 510 ) describing the modification to the dataset ( 442 ) by processing the request ( 404 ) to modify the dataset ( 442 ) just enough to figure out what should happen in order to service the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may determine whether some ordering of the execution of the request ( 404 ) to modify the dataset ( 442 ) relative to other requests to modify the dataset ( 442 ) is required to produce an equivalent result on each storage system ( 438 , 440 ).
  • the request ( 404 ) to modify the dataset ( 442 ) is embodied as a request to copy blocks from a first address range in the dataset ( 442 ) to a second address range in the dataset ( 442 ).
  • three other write operations write A, write B, write C are directed to the first address range in the dataset ( 442 ).
  • the leader storage system ( 440 ) orders write A and write B (but does not order write C) prior to copying the blocks from the first address range in the dataset ( 442 ) to the second address range in the dataset ( 442 )
  • the follower storage system ( 438 ) must also order write A and write B (but not order write C) prior to copying the blocks from the first address range in the dataset ( 442 ) to the second address range in the dataset ( 442 ) in order to yield consistent results.
  • when the leader storage system ( 440 ) generates ( 508 ) the information ( 510 ) describing the modification to the dataset ( 442 ) in this example, the leader storage system ( 440 ) could generate information (e.g., sequence numbers for write A and write B) that identifies other operations that must be ordered before the follower storage system ( 438 ) can process the request ( 404 ) to modify the dataset ( 442 ).
  • Writes A, B, C, and D, coupled with a snapshot between A,B and C,D could commit and/or acknowledge some or all parts together as long as recovery cannot result in a snapshot inconsistency across arrays and as long as acknowledgement does not complete a later operation before an earlier operation has been persisted to the point that it is guaranteed to be recoverable.
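  • A small, hypothetical worked example of the ordering discussion above: the leader records that the copy operation must follow write A and write B (but not write C), and a follower applies operations only once their recorded predecessors have been applied. The dependency representation is an assumption for illustration.

```python
# Hypothetical sketch of applying ordering information on a follower: the
# copy operation carries the identifiers of the writes that must precede it,
# while write C carries no such dependency and may land in any order.
applied = set()

operations = [
    {"id": "writeA", "after": []},
    {"id": "writeB", "after": []},
    {"id": "copy",   "after": ["writeA", "writeB"]},   # ordering info from the leader
    {"id": "writeC", "after": []},                      # unordered relative to the copy
]

def can_apply(op):
    return all(dep in applied for dep in op["after"])

pending = list(operations)
while pending:
    for op in list(pending):
        if can_apply(op):
            applied.add(op["id"])       # apply to the follower's copy of the dataset
            pending.remove(op)
print(applied)  # write C may complete before or after the copy; A and B never after it
```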
  • the example method depicted in FIG. 5A also includes sending ( 512 ), from the leader storage system ( 440 ) to the follower storage system ( 438 ), the information ( 510 ) describing the modification to the dataset ( 442 ).
  • Sending ( 512 ) the information ( 510 ) describing the modification to the dataset ( 442 ) from the leader storage system ( 440 ) to a follower storage system ( 438 ) may be carried out, for example, by the leader storage system ( 440 ) sending one or more messages to the follower storage system ( 438 ).
  • the leader storage system ( 440 ) may not need to send I/O payload for the request ( 404 ) to modify the dataset ( 442 ), however, in view of the fact that the follower storage system ( 438 ) was the original recipient of the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may extract the I/O payload from the request ( 404 ) to modify the dataset ( 442 ), the follower storage system ( 438 ) may receive the I/O payload as part of one or more other messages associated with the request ( 404 ) to modify the dataset ( 442 ), the follower storage system ( 438 ) may have access to the I/O payload as the I/O payload may have been stored by the host ( 402 ) in a known location (e.g., a buffer in the follower storage system ( 438 ) that was accessed via an RDMA or RDMA-like access), or in some other way.
  • the example method depicted in FIG. 5A also includes processing ( 518 ), by the leader storage system ( 440 ), the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may process ( 518 ) the request ( 404 ) to modify the dataset ( 442 ), for example, by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the leader storage system ( 440 ) in dependence upon the information ( 410 ) describing the modification to the dataset ( 442 ) as well as the I/O payload that was received from the follower storage system ( 438 ).
  • processing ( 518 ) the request ( 404 ) to modify the dataset ( 442 ) may be carried out by the leader storage system ( 440 ) first verifying that the previously issued write operation has been processed on the leader storage system ( 440 ) and subsequently writing I/O payload associated with the write operation to one or more storage devices that are included in the leader storage system ( 440 ).
  • the request ( 404 ) to modify the dataset ( 442 ) may be considered to have been completed and successfully processed, for example, when the I/O payload has been committed to persistent storage within the leader storage system ( 440 ).
  • the example method depicted in FIG. 5A also includes acknowledging ( 520 ), by the leader storage system ( 440 ) to the follower storage system ( 438 ), completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the leader storage system ( 440 ) may acknowledge ( 520 ) completion of the request ( 404 ) to modify the dataset ( 442 ), for example, through the use of one or more acknowledgement ( 522 ) messages that are sent from the leader storage system ( 440 ) to the follower storage system ( 438 ), or via some other appropriate mechanism.
  • the example method depicted in FIG. 5A also includes receiving ( 514 ), from the leader storage system ( 440 ), the information ( 510 ) describing the modification to the dataset ( 442 ).
  • the follower storage system ( 438 ) may receive ( 514 ) the information ( 410 ) describing the modification to the dataset ( 442 ) from the leader storage system ( 440 ), for example, via one or more messages that are sent from the leader storage system ( 440 ) to the follower storage system ( 438 ).
  • the one or more messages may be sent from the leader storage system ( 440 ) to the follower storage system ( 438 ) via one or more dedicated data communications links between the two storage systems ( 438 , 440 ), by the leader storage system ( 440 ) writing the message to a predetermined memory location (e.g., the location of a queue) on the follower storage system ( 438 ) using RDMA or a similar mechanism, or in other ways.
  • the leader storage system ( 440 ) does not need to send I/O payload associated with the request ( 404 ) to modify the dataset ( 442 ) to the follower storage system ( 438 ), as the follower storage system ( 438 ) can extract such I/O payload from the request ( 404 ) to modify the dataset ( 442 ) that was received by the follower storage system ( 438 ), the follower storage system ( 438 ) can extract such I/O payload from one or more other messages that were received from the host ( 402 ), or the follower storage system ( 438 ) can obtain the I/O payload in some other way by virtue of the fact that the follower storage system ( 438 ) was the target of the request ( 404 ) to modify the dataset ( 442 ) that was issued by the host ( 402 ).
  • the follower storage system ( 438 ) may receive ( 514 ) the information ( 510 ) describing the modification to the dataset ( 442 ) from the leader storage system ( 440 ) through the use of SCSI requests (writes from sender to receiver, or reads from receiver to sender) as a communication mechanism.
  • a SCSI Write request is used to encode the information to be sent (including the relevant data and metadata), and may be delivered to a special pseudo-device, over a specially configured SCSI network, or through any other agreed upon addressing mechanism.
  • the model can issue a set of open SCSI read requests from a receiver to a sender, also using special devices, specially configured SCSI networks, or other agreed upon mechanisms.
  • Encoded information including data and metadata will be delivered to the receiver as a response to one or more of these open SCSI requests.
  • Such a model can be implemented over Fibre Channel SCSI networks, which are often deployed as the “dark fibre” storage network infrastructure between data centers.
  • Such a model also allows the use of the same network lines for host-to-remote-array multipathing and bulk array-to-array communications.
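  • The sketch below is a simplified, in-memory simulation (not real SCSI code) of the read-based model described above: the receiver keeps a pool of open read requests against an agreed-upon pseudo-device, and the sender delivers each encoded message as the completion of one outstanding read. The queues stand in for the SCSI network, and all names are illustrative.

```python
# Hypothetical simulation of the read-based transport: the receiver posts a
# pool of open reads, and each message from the sender arrives as the
# completion of one of those outstanding reads.
import json, queue, threading

open_reads = queue.Queue()      # outstanding read requests from the receiver
completions = queue.Queue()     # read completions carrying encoded messages

def receiver(num_open_reads=4, expected=3):
    for i in range(num_open_reads):
        open_reads.put(f"read-{i}")          # post a pool of open reads
    for _ in range(expected):
        tag, encoded = completions.get()     # a read completes with a message
        message = json.loads(encoded)
        print(f"{tag} delivered:", message)
        open_reads.put(tag)                  # re-post the read to keep the pool full

def sender(messages):
    for msg in messages:
        tag = open_reads.get()               # consume one outstanding read
        completions.put((tag, json.dumps(msg)))

msgs = [{"op": "write", "seq": n, "metadata": {"volume": "vol-1"}} for n in range(3)]
t = threading.Thread(target=receiver)
t.start()
sender(msgs)
t.join()
```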
  • the example method depicted in FIG. 5A also includes processing ( 516 ), by the follower storage system ( 438 ), the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may process ( 516 ) the request ( 404 ) to modify the dataset ( 442 ) by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the follower storage system ( 438 ) in dependence upon the information ( 410 ) describing the modification to the dataset ( 442 ).
  • processing ( 516 ) the request ( 404 ) to modify the dataset ( 442 ) may be carried out by the follower storage system ( 438 ) first verifying that the previously issued write operation has been processed on the follower storage system ( 438 ) and subsequently writing I/O payload associated with the write operation to one or more storage devices that are included in the follower storage system ( 438 ).
  • the request ( 404 ) to modify the dataset ( 442 ) may be considered to have been completed and successfully processed, for example, when the I/O payload associated with the request ( 404 ) to modify the dataset ( 442 ) has been committed to persistent storage within the follower storage system ( 438 ).
  • the example method depicted in FIG. 5A also includes receiving ( 524 ), from the leader storage system ( 440 ), an indication that the leader storage system ( 440 ) has processed the request ( 404 ) to modify the dataset ( 442 ).
  • the indication that the leader storage system ( 440 ) has processed the request ( 404 ) to modify the dataset ( 442 ) is embodied as an acknowledgement ( 522 ) message sent from the leader storage system ( 440 ) to the follower storage system ( 438 ). Readers will appreciate that although many of the steps described above are depicted and described as occurring in a particular order, no particular order is actually required.
  • each storage system may be performing some of the steps described above in parallel.
  • the follower storage system ( 438 ) may receive ( 524 ), from the leader storage system ( 440 ), an indication that the leader storage system ( 440 ) has processed the request ( 404 ) to modify the dataset ( 442 ) prior to processing ( 516 ) the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may receive ( 524 ), from the leader storage system ( 440 ), an indication that the leader storage system ( 440 ) has processed the request ( 404 ) to modify the dataset ( 442 ) prior to receiving ( 514 ) the information ( 410 ) describing the modification to the dataset ( 442 ) from the leader storage system ( 440 ).
  • the example method depicted in FIG. 5A also includes acknowledging ( 526 ), by the follower storage system ( 438 ), completion of the request ( 404 ) to modify the dataset ( 442 ). Acknowledging ( 526 ) completion of the request ( 404 ) to modify the dataset ( 442 ) may be carried out, for example, by the follower storage system ( 438 ) issuing an acknowledgement ( 528 ) message to the host ( 402 ) that issued the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may determine whether the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 518 ) by the leader storage system ( 440 ) prior to acknowledging ( 528 ) completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may determine whether the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 518 ) by the leader storage system ( 440 ), for example, by determining whether the follower storage system ( 438 ) has received an acknowledgment message or other message from the leader storage system ( 440 ) indicating that the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 518 ) by the leader storage system ( 440 ).
  • the follower storage system ( 438 ) may proceed by acknowledging ( 526 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ).
  • If the follower storage system ( 438 ) determines that the request ( 404 ) to modify the dataset ( 442 ) has not been processed ( 518 ) by the leader storage system ( 440 ), or the follower storage system ( 438 ) has not yet processed ( 516 ) the request ( 404 ) to modify the dataset ( 442 ), however, the follower storage system ( 438 ) may not yet acknowledge ( 526 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ), as the follower storage system ( 438 ) may only acknowledge ( 526 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ) when the request ( 404 ) to modify the dataset ( 442 ) has been successfully processed on all storage systems ( 438 , 440 ) across which the dataset ( 442 ) is synchronously replicated.
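  • To summarize the FIG. 5A flow in code form, the hypothetical sketch below shows a follower that receives the host request, forwards only a logical description to the leader, retains the I/O payload locally, and acknowledges the host only after both the leader and the follower have processed the request. Class and method names are assumptions for illustration.

```python
# Hypothetical end-to-end sketch of the follower-received-request flow: the
# follower keeps the I/O payload, sends only a logical description to the
# leader, and acknowledges the host once both copies have been modified.
class LeaderSystem:
    def handle_description(self, description):
        info = {"ordering": [], "common_metadata": {}, **description}
        self.process(info)                        # commit on the leader's copy
        return info, "leader-ack"
    def process(self, info):
        pass

class FollowerSystem:
    def __init__(self, leader):
        self.leader = leader
    def handle_host_request(self, request):
        payload = request.pop("payload")          # payload stays on the follower
        description = {"op": "write", **request}  # logical description only
        info, leader_ack = self.leader.handle_description(description)
        self.process(info, payload)               # commit on the follower's copy
        if leader_ack == "leader-ack":
            return "ack-to-host"                  # both copies are modified
        raise RuntimeError("leader did not process the request")
    def process(self, info, payload):
        pass

follower = FollowerSystem(LeaderSystem())
print(follower.handle_host_request({"volume": "vol-1", "offset": 4096, "payload": b"data"}))
```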
  • FIG. 5B sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset ( 442 ) that is synchronized across a plurality of storage systems ( 438 , 440 , 534 ) according to some embodiments of the present disclosure.
  • the storage systems ( 438 , 440 , 534 ) depicted in FIG. 5B may be similar to the storage systems described above with reference to FIGS. 1A-1D , FIGS. 2A-2G , FIGS. 3A-3B , or any combination thereof.
  • the storage system depicted in FIG. 5B may include the same, fewer, or additional components as the storage systems described above.
  • the example method depicted in FIG. 5B may be similar to the example method depicted in FIG. 5A , as the example method depicted in FIG. 5B also includes: receiving ( 502 ), by a follower storage system ( 438 ), a request ( 404 ) to modify the dataset ( 442 ); sending ( 504 ), from the follower storage system ( 438 ) to a leader storage system ( 440 ), a logical description ( 506 ) of the request ( 404 ) to modify the dataset ( 442 ); generating ( 508 ), by the leader storage system ( 440 ), information ( 510 ) describing the modification to the dataset ( 442 ); processing ( 518 ), by the leader storage system ( 440 ), the request ( 404 ) to modify the dataset ( 442 ); acknowledging ( 520 ), by the leader storage system ( 440 ) to the follower storage system ( 438 ), completion of the request ( 404 ) to modify the dataset ( 442 ); receiving ( 514 ), from the leader storage system ( 440 ), the information ( 510 ) describing the modification to the dataset ( 442 ); processing ( 516 ), by the follower storage system ( 438 ), the request ( 404 ) to modify the dataset ( 442 ); receiving ( 524 ), from the leader storage system ( 440 ), an indication that the leader storage system ( 440 ) has processed the request ( 404 ) to modify the dataset ( 442 ); and acknowledging ( 526 ), by the follower storage system ( 438 ), completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the example method depicted in FIG. 5B differs from the example method depicted in FIG. 5A , however, as the example method depicted in FIG. 5B depicts an embodiment in which the dataset ( 442 ) is synchronously replicated across three storage systems, where one of the storage systems is a leader storage system ( 440 ) and the remaining storage systems are follower storage systems ( 438 , 534 ).
  • the additional follower storage system ( 534 ) carries out many of the same steps as the follower storage system ( 438 ) that was depicted in FIG. 5A .
  • the additional follower storage system ( 534 ) can: receive ( 442 ), from the leader storage system ( 440 ), information ( 510 ) describing the modification to the dataset ( 442 ) and also process ( 442 ) the request ( 404 ) to modify the dataset ( 442 ) in dependence upon the information ( 510 ) describing the modification to the dataset ( 442 ).
  • the leader storage system ( 440 ) can send ( 538 ) the information ( 510 ) describing the modification to the dataset ( 442 ) to all of the follower storage systems ( 438 , 534 ).
  • the additional follower storage system ( 534 ) can also acknowledge ( 530 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the follower storage system ( 438 ) that received ( 502 ) the request ( 404 ) to modify the dataset ( 442 ).
  • the additional follower storage system ( 534 ) can acknowledge ( 530 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the follower storage system ( 438 ) that received ( 502 ) the request ( 404 ) to modify the dataset ( 442 ), for example, through the use of one or more acknowledgement ( 532 ) messages that are sent from the additional follower storage system ( 534 ) to the follower storage system ( 438 ) that received ( 502 ) the request ( 404 ) to modify the dataset ( 442 ), or via some other appropriate mechanism.
  • the follower storage system ( 438 ) that received ( 502 ) the request ( 404 ) to modify the dataset ( 442 ) may also receive ( 536 ) an indication that all other follower storage systems ( 534 ) have processed the request ( 404 ) to modify the dataset ( 442 ).
  • the indication that all other follower storage systems ( 534 ) have processed the request ( 404 ) to modify the dataset ( 442 ) is embodied as an acknowledgement ( 532 ) message sent from the other follower storage system ( 534 ) to the follower storage system ( 438 ) that received ( 502 ) the request ( 404 ) to modify the dataset ( 442 ).
  • each storage system may be performing some of the steps described above in parallel.
  • the follower storage system ( 438 ) may receive ( 524 ), from the leader storage system ( 440 ), an indication that the leader storage system ( 440 ) has processed the request ( 404 ) to modify the dataset ( 442 ) prior to processing ( 516 ) the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may receive ( 536 ) an indication that all other follower storage systems ( 534 ) have processed the request ( 404 ) to modify the dataset ( 442 ) prior to receiving ( 524 ) an indication that the leader storage system ( 440 ) has processed the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may receive ( 536 ) an indication that all other follower storage systems ( 534 ) have processed the request ( 404 ) to modify the dataset ( 442 ) prior to processing ( 516 ) the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may receive ( 524 ), from the leader storage system ( 440 ), an indication that the leader storage system ( 440 ) has processed the request ( 404 ) to modify the dataset ( 442 ) prior to receiving ( 514 ) the information ( 410 ) describing the modification to the dataset ( 442 ) from the leader storage system ( 440 ).
  • the follower storage system ( 438 ) may receive ( 536 ) an indication that all other follower storage systems ( 534 ) have processed the request ( 404 ) to modify the dataset ( 442 ) prior to receiving ( 514 ) the information ( 410 ) describing the modification to the dataset ( 442 ) from the leader storage system ( 440 ).
  • the follower storage system ( 438 ) may determine whether the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 518 ) by the leader storage system ( 440 ) and also processed ( 444 ) by all other follower storage systems ( 534 ) prior to acknowledging ( 528 ) completion of the request ( 404 ) to modify the dataset ( 442 ).
  • the follower storage system ( 438 ) may determine whether the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 518 ) by the leader storage system ( 440 ) and also processed ( 444 ) by all other follower storage systems ( 534 ), for example, by determining whether the follower storage system ( 438 ) has received acknowledgment messages from the leader storage system ( 440 ) and all other follower storage systems ( 534 ) indicating that the request ( 404 ) to modify the dataset ( 442 ) has been processed ( 518 , 444 ) by each storage system ( 440 , 534 ).
  • the follower storage system ( 438 ) may proceed by acknowledging ( 526 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ).
  • If the follower storage system ( 438 ) determines that the request ( 404 ) to modify the dataset ( 442 ) has not been processed by at least one of the leader storage system ( 440 ), all other follower storage systems ( 534 ), or the follower storage system ( 438 ) itself, however, the follower storage system ( 438 ) may not yet acknowledge ( 526 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ), as the follower storage system ( 438 ) may only acknowledge ( 526 ) completion of the request ( 404 ) to modify the dataset ( 442 ) to the host ( 402 ) that initiated the request ( 404 ) to modify the dataset ( 442 ) when the request ( 404 ) to modify the dataset ( 442 ) has been successfully processed on all storage systems ( 438 , 440 , 534 ) across which the dataset ( 442 ) is synchronously replicated.
  • the follower storage system ( 438 ) that received ( 502 ) the request ( 404 ) to modify the dataset ( 442 ) can send a message back to the leader storage system ( 440 ) and to other follower storage systems ( 534 ) to signal that the modifying operation has completed everywhere.
  • the follower storage system ( 438 ) that received ( 502 ) the request ( 404 ) to modify the dataset ( 442 ) could send that message to the leader storage system ( 440 ) and the leader storage system ( 440 ) could send a message to propagate the completion and unblock reads elsewhere.
  • Although FIG. 5B depicts an embodiment in which the dataset ( 442 ) is synchronously replicated across three storage systems, where one of the storage systems is a leader storage system ( 440 ) and the remaining storage systems are follower storage systems ( 438 , 534 ), other embodiments may include additional storage systems. In such other embodiments, additional follower storage systems may operate in the same way as the other follower storage system ( 534 ) depicted in FIG. 5B .
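  • A minimal, hypothetical sketch of the acknowledgement gathering described above for FIG. 5B : the follower that received the host request acknowledges the host only after its own processing is complete and it has received acknowledgements from the leader and from every other follower. The identifiers used below are illustrative only.

```python
# Hypothetical sketch: the receiving follower tracks which systems have
# acknowledged and only acknowledges the host once the request has been
# processed everywhere.
def ready_to_acknowledge(local_done, acks_received, required_acks):
    """Return True once the request is processed on every storage system."""
    return local_done and required_acks.issubset(acks_received)

required = {"leader-440", "follower-534"}
received = set()

received.add("follower-534")                       # the other follower processed first
assert not ready_to_acknowledge(True, received, required)

received.add("leader-440")                         # the leader's acknowledgement arrives
assert ready_to_acknowledge(True, received, required)
print("acknowledge completion to host")            # and notify the other systems
```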
  • Although in some of the examples described above the information ( 510 ) describing the modification to the dataset ( 442 ) includes ordering information ( 452 ) for the request ( 404 ) to modify the dataset ( 442 ), common metadata information ( 454 ) associated with the request ( 404 ) to modify the dataset ( 442 ), and I/O payload ( 414 ) associated with the request ( 404 ) to modify the dataset ( 442 ), the information ( 510 ) describing the modification to the dataset ( 442 ) can include all (or a subset) of such information in the examples depicted in the remaining figures.
  • In some examples, the request ( 404 ) to modify the dataset ( 442 ) includes a request to take a snapshot of the dataset ( 442 ).
  • In such examples, and in each of the figures described above, the information ( 510 ) describing the modification to the dataset ( 442 ) can also include an identification of one or more other requests to modify the dataset ( 442 ) that are to be included in the content of the snapshot of the dataset ( 442 ).
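  • As a hypothetical illustration of the snapshot case described above, the sketch below shows a snapshot description that names the in-flight requests to be included in (and excluded from) the snapshot's content, so that every storage system cuts the snapshot at the same logical point. The field names are assumptions, not the patent's format.

```python
# Hypothetical sketch: a snapshot description identifies which other requests
# belong in the snapshot's content, so every storage system produces an
# identical snapshot regardless of local completion order.
snapshot_description = {
    "op": "snapshot",
    "dataset": "pod-1",
    "include_requests": [101, 102],   # modifications that precede the snapshot
    "exclude_requests": [103, 104],   # modifications that logically follow it
}

def snapshot_contents(applied_requests, description):
    """Select the applied requests that belong in the snapshot's content."""
    return [r for r in applied_requests if r in description["include_requests"]]

print(snapshot_contents([101, 102, 103], snapshot_description))   # -> [101, 102]
```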
  • Example embodiments are described largely in the context of a fully functional system. Readers of skill in the art will recognize, however, that the present disclosure also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system.
  • Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art.
  • Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure.
  • Embodiments can include a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Abstract

Modifying a synchronously replicated dataset, including: receiving, by a leader storage system, a request to modify a dataset that is synchronized across a plurality of storage systems; sending, from the leader storage system to a follower storage system, information describing the request to modify the dataset, wherein the leader storage system and the follower storage system each store a copy of the dataset; processing, by the leader storage system on the copy of the dataset that is stored on the leader storage system, the request to modify the dataset; receiving, from the follower storage system, an indication that the follower storage system has processed the request to modify the dataset on the copy of the dataset that is stored on the follower storage system; and acknowledging, by the leader storage system, completion of the request to modify the dataset.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a continuation application for patent entitled to a filing date and claiming the benefit of earlier-filed U.S. patent application Ser. No. 16/680,746, filed Nov. 12, 2019, herein incorporated by reference in its entirety, which is a continuation of U.S. Pat. No. 10,521,344, issued Dec. 31, 2019, which claims priority from: U.S. Provisional Patent Application No. 62/470,172, filed Mar. 10, 2017, and U.S. Provisional Patent Application No. 62/518,071, filed Jun. 12, 2017.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1A illustrates a first example system for data storage in accordance with some implementations.
  • FIG. 1B illustrates a second example system for data storage in accordance with some implementations.
  • FIG. 1C illustrates a third example system for data storage in accordance with some implementations.
  • FIG. 1D illustrates a fourth example system for data storage in accordance with some implementations.
  • FIG. 2A is a perspective view of a storage cluster with multiple storage nodes and internal storage coupled to each storage node to provide network attached storage, in accordance with some embodiments.
  • FIG. 2B is a block diagram showing an interconnect switch coupling multiple storage nodes in accordance with some embodiments.
  • FIG. 2C is a multiple level block diagram, showing contents of a storage node and contents of one of the non-volatile solid state storage units in accordance with some embodiments.
  • FIG. 2D shows a storage server environment, which uses embodiments of the storage nodes and storage units of some previous figures in accordance with some embodiments.
  • FIG. 2E is a blade hardware block diagram, showing a control plane, compute and storage planes, and authorities interacting with underlying physical resources, in accordance with some embodiments.
  • FIG. 2F depicts elasticity software layers in blades of a storage cluster, in accordance with some embodiments.
  • FIG. 2G depicts authorities and storage resources in blades of a storage cluster, in accordance with some embodiments.
  • FIG. 3A sets forth a diagram of a storage system that is coupled for data communications with a cloud services provider in accordance with some embodiments of the present disclosure.
  • FIG. 3B sets forth a diagram of a storage system in accordance with some embodiments of the present disclosure.
  • FIG. 4A sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems according to some embodiments of the present disclosure.
  • FIG. 4B sets forth a flow chart illustrating an additional example method for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems according to some embodiments of the present disclosure.
  • FIG. 5A sets forth a flow chart illustrating an additional example method for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems according to some embodiments of the present disclosure.
  • FIG. 5B sets forth a flow chart illustrating an additional example method for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems according to some embodiments of the present disclosure.
  • DESCRIPTION OF EMBODIMENTS
  • Example methods, apparatus, and products for servicing I/O operations directed to a dataset that is synchronized across a plurality of storage systems in accordance with embodiments of the present disclosure are described with reference to the accompanying drawings, beginning with FIG. 1A. FIG. 1A illustrates an example system for data storage, in accordance with some implementations. System 100 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 100 may include the same, more, or fewer elements configured in the same or different manner in other implementations.
  • System 100 includes a number of computing devices 164A-B. Computing devices (also referred to as “client devices” herein) may be embodied, for example, as a server in a data center, a workstation, a personal computer, a notebook, or the like. Computing devices 164A-B may be coupled for data communications to one or more storage arrays 102A-B through a storage area network (‘SAN’) 158 or a local area network (‘LAN’) 160.
  • The SAN 158 may be implemented with a variety of data communications fabrics, devices, and protocols. For example, the fabrics for SAN 158 may include Fibre Channel, Ethernet, Infiniband, Serial Attached Small Computer System Interface (‘SAS’), or the like. Data communications protocols for use with SAN 158 may include Advanced Technology Attachment (‘ATA’), Fibre Channel Protocol, Small Computer System Interface (‘SCSI’), Internet Small Computer System Interface (‘iSCSI’), HyperSCSI, Non-Volatile Memory Express (‘NVMe’) over Fabrics, or the like. It may be noted that SAN 158 is provided for illustration, rather than limitation. Other data communication couplings may be implemented between computing devices 164A-B and storage arrays 102A-B.
  • The LAN 160 may also be implemented with a variety of fabrics, devices, and protocols. For example, the fabrics for LAN 160 may include Ethernet (802.3), wireless (802.11), or the like. Data communication protocols for use in LAN 160 may include Transmission Control Protocol (‘TCP’), User Datagram Protocol (‘UDP’), Internet Protocol (‘IP’), HyperText Transfer Protocol (‘HTTP’), Wireless Access Protocol (‘WAP’), Handheld Device Transport Protocol (‘HDTP’), Session Initiation Protocol (‘SIP’), Real Time Protocol (‘RTP’), or the like.
  • Storage arrays 102A-B may provide persistent data storage for the computing devices 164A-B. Storage array 102A may be contained in a chassis (not shown), and storage array 102B may be contained in another chassis (not shown), in implementations. Storage array 102A and 102B may include one or more storage array controllers 110 (also referred to as “controller” herein). A storage array controller 110 may be embodied as a module of automated computing machinery comprising computer hardware, computer software, or a combination of computer hardware and software. In some implementations, the storage array controllers 110 may be configured to carry out various storage tasks. Storage tasks may include writing data received from the computing devices 164A-B to storage array 102A-B, erasing data from storage array 102A-B, retrieving data from storage array 102A-B and providing data to computing devices 164A-B, monitoring and reporting of disk utilization and performance, performing redundancy operations, such as Redundant Array of Independent Drives (‘RAID’) or RAID-like data redundancy operations, compressing data, encrypting data, and so forth.
  • Storage array controller 110 may be implemented in a variety of ways, including as a Field Programmable Gate Array (‘FPGA’), a Programmable Logic Chip (‘PLC’), an Application Specific Integrated Circuit (‘ASIC’), System-on-Chip (‘SOC’), or any computing device that includes discrete components such as a processing device, central processing unit, computer memory, or various adapters. Storage array controller 110 may include, for example, a data communications adapter configured to support communications via the SAN 158 or LAN 160. In some implementations, storage array controller 110 may be independently coupled to the LAN 160. In implementations, storage array controller 110 may include an I/O controller or the like that couples the storage array controller 110 for data communications, through a midplane (not shown), to a persistent storage resource 170A-B (also referred to as a “storage resource” herein). The persistent storage resource 170A-B may include any number of storage drives 171A-F (also referred to as “storage devices” herein) and any number of non-volatile Random Access Memory (‘NVRAM’) devices (not shown).
  • In some implementations, the NVRAM devices of a persistent storage resource 170A-B may be configured to receive, from the storage array controller 110, data to be stored in the storage drives 171A-F. In some examples, the data may originate from computing devices 164A-B. In some examples, writing data to the NVRAM device may be carried out more quickly than directly writing data to the storage drive 171A-F. In implementations, the storage array controller 110 may be configured to utilize the NVRAM devices as a quickly accessible buffer for data destined to be written to the storage drives 171A-F. Latency for write requests using NVRAM devices as a buffer may be improved relative to a system in which a storage array controller 110 writes data directly to the storage drives 171A-F. In some implementations, the NVRAM devices may be implemented with computer memory in the form of high bandwidth, low latency RAM. The NVRAM device is referred to as “non-volatile” because the NVRAM device may receive or include a unique power source that maintains the state of the RAM after main power loss to the NVRAM device. Such a power source may be a battery, one or more capacitors, or the like. In response to a power loss, the NVRAM device may be configured to write the contents of the RAM to a persistent storage, such as the storage drives 171A-F.
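  • A brief, hypothetical sketch of the NVRAM buffering described above: writes are acknowledged once staged in NVRAM, destaged to the storage drives in the background, and flushed to persistent storage on power loss. The class and method names are illustrative assumptions.

```python
# Hypothetical sketch of NVRAM used as a low-latency write buffer in front of
# the storage drives.
class NVRAMBuffer:
    def __init__(self):
        self.staged = []            # battery/capacitor-backed RAM contents
    def write(self, block):
        self.staged.append(block)   # fast path: acknowledge after staging
        return "ack"
    def destage(self, drive):
        while self.staged:
            drive.append(self.staged.pop(0))   # background write to SSD/HDD
    def on_power_loss(self, drive):
        self.destage(drive)         # dump RAM contents to persistent storage

drive = []
nvram = NVRAMBuffer()
print(nvram.write(b"block-0"))      # low-latency acknowledgement to the writer
nvram.destage(drive)
print(drive)
```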
  • In implementations, storage drive 171A-F may refer to any device configured to record data persistently, where “persistently” or “persistent” refers to a device's ability to maintain recorded data after loss of power. In some implementations, storage drive 171A-F may correspond to non-disk storage media. For example, the storage drive 171A-F may be one or more solid-state drives (‘SSDs’), flash memory based storage, any type of solid-state non-volatile memory, or any other type of non-mechanical storage device. In other implementations, storage drive 171A-F may include mechanical or spinning hard disks, such as hard-disk drives (‘HDD’).
  • In some implementations, the storage array controllers 110 may be configured for offloading device management responsibilities from storage drive 171A-F in storage array 102A-B. For example, storage array controllers 110 may manage control information that may describe the state of one or more memory blocks in the storage drives 171A-F. The control information may indicate, for example, that a particular memory block has failed and should no longer be written to, that a particular memory block contains boot code for a storage array controller 110, the number of program-erase (‘P/E’) cycles that have been performed on a particular memory block, the age of data stored in a particular memory block, the type of data that is stored in a particular memory block, and so forth. In some implementations, the control information may be stored with an associated memory block as metadata. In other implementations, the control information for the storage drives 171A-F may be stored in one or more particular memory blocks of the storage drives 171A-F that are selected by the storage array controller 110. The selected memory blocks may be tagged with an identifier indicating that the selected memory block contains control information. The identifier may be utilized by the storage array controllers 110 in conjunction with storage drives 171A-F to quickly identify the memory blocks that contain control information. For example, the storage controllers 110 may issue a command to locate memory blocks that contain control information. It may be noted that control information may be so large that parts of the control information may be stored in multiple locations, that the control information may be stored in multiple locations for purposes of redundancy, for example, or that the control information may otherwise be distributed across multiple memory blocks in the storage drive 171A-F.
  • In implementations, storage array controllers 110 may offload device management responsibilities from storage drives 171A-F of storage array 102A-B by retrieving, from the storage drives 171A-F, control information describing the state of one or more memory blocks in the storage drives 171A-F. Retrieving the control information from the storage drives 171A-F may be carried out, for example, by the storage array controller 110 querying the storage drives 171A-F for the location of control information for a particular storage drive 171A-F. The storage drives 171A-F may be configured to execute instructions that enable the storage drive 171A-F to identify the location of the control information. The instructions may be executed by a controller (not shown) associated with or otherwise located on the storage drive 171A-F and may cause the storage drive 171A-F to scan a portion of each memory block to identify the memory blocks that store control information for the storage drives 171A-F. The storage drives 171A-F may respond by sending a response message to the storage array controller 110 that includes the location of control information for the storage drive 171A-F. Responsive to receiving the response message, storage array controllers 110 may issue a request to read data stored at the address associated with the location of control information for the storage drives 171A-F.
  • In other implementations, the storage array controllers 110 may further offload device management responsibilities from storage drives 171A-F by performing, in response to receiving the control information, a storage drive management operation. A storage drive management operation may include, for example, an operation that is typically performed by the storage drive 171A-F (e.g., the controller (not shown) associated with a particular storage drive 171A-F). A storage drive management operation may include, for example, ensuring that data is not written to failed memory blocks within the storage drive 171A-F, ensuring that data is written to memory blocks within the storage drive 171A-F in such a way that adequate wear leveling is achieved, and so forth.
  • In implementations, storage array 102A-B may implement two or more storage array controllers 110. For example, storage array 102A may include storage array controller 110A and storage array controller 110B. At a given instance, a single storage array controller 110 (e.g., storage array controller 110A) of a storage system 100 may be designated with primary status (also referred to as “primary controller” herein), and other storage array controllers 110 (e.g., storage array controller 110B) may be designated with secondary status (also referred to as “secondary controller” herein). The primary controller may have particular rights, such as permission to alter data in persistent storage resource 170A-B (e.g., writing data to persistent storage resource 170A-B). At least some of the rights of the primary controller may supersede the rights of the secondary controller. For instance, the secondary controller may not have permission to alter data in persistent storage resource 170A-B when the primary controller has that right. The status of storage array controllers 110 may change. For example, storage array controller 110A may be designated with secondary status, and storage array controller 110B may be designated with primary status.
  • In some implementations, a primary controller, such as storage array controller 110A, may serve as the primary controller for one or more storage arrays 102A-B, and a second controller, such as storage array controller 110B, may serve as the secondary controller for the one or more storage arrays 102A-B. For example, storage array controller 110A may be the primary controller for storage array 102A and storage array 102B, and storage array controller 110B may be the secondary controller for storage array 102A and 102B. In some implementations, storage array controllers 110C and 110D (also referred to as “storage processing modules”) may have neither primary nor secondary status. Storage array controllers 110C and 110D, implemented as storage processing modules, may act as a communication interface between the primary and secondary controllers (e.g., storage array controllers 110A and 110B, respectively) and storage array 102B. For example, storage array controller 110A of storage array 102A may send a write request, via SAN 158, to storage array 102B. The write request may be received by both storage array controllers 110C and 110D of storage array 102B. Storage array controllers 110C and 110D facilitate the communication, e.g., send the write request to the appropriate storage drive 171A-F. It may be noted that in some implementations storage processing modules may be used to increase the number of storage drives controlled by the primary and secondary controllers.
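  • The hypothetical sketch below illustrates the primary/secondary rights model described above: only the controller currently holding primary status may alter data in persistent storage, and the statuses may be swapped, for example during a failover. Names are illustrative only.

```python
# Hypothetical sketch: only the primary controller has the right to alter data
# in persistent storage; controller statuses can be exchanged.
class ArrayController:
    def __init__(self, name, status):
        self.name, self.status = name, status
    def write(self, storage, data):
        if self.status != "primary":
            raise PermissionError(f"{self.name} lacks the right to alter data")
        storage.append(data)

storage = []
a, b = ArrayController("110A", "primary"), ArrayController("110B", "secondary")
a.write(storage, "block")
a.status, b.status = "secondary", "primary"    # statuses may change, e.g. on failover
b.write(storage, "block-2")
print(storage)
```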
  • In implementations, storage array controllers 110 are communicatively coupled, via a midplane (not shown), to one or more storage drives 171A-F and to one or more NVRAM devices (not shown) that are included as part of a storage array 102A-B. The storage array controllers 110 may be coupled to the midplane via one or more data communication links and the midplane may be coupled to the storage drives 171A-F and the NVRAM devices via one or more data communications links. The data communications links described herein are collectively illustrated by data communications links 108A-D and may include a Peripheral Component Interconnect Express (‘PCIe’) bus, for example.
  • FIG. 1B illustrates an example system for data storage, in accordance with some implementations. Storage array controller 101 illustrated in FIG. 1B may be similar to the storage array controllers 110 described with respect to FIG. 1A. In one example, storage array controller 101 may be similar to storage array controller 110A or storage array controller 110B. Storage array controller 101 includes numerous elements for purposes of illustration rather than limitation. It may be noted that storage array controller 101 may include the same, more, or fewer elements configured in the same or different manner in other implementations. It may be noted that elements of FIG. 1A may be included below to help illustrate features of storage array controller 101.
  • Storage array controller 101 may include one or more processing devices 104 and random access memory (‘RAM’) 111. Processing device 104 (or controller 101) represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 104 (or controller 101) may be a complex instruction set computing (‘CISC’) microprocessor, reduced instruction set computing (‘RISC’) microprocessor, very long instruction word (‘VLIW’) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 104 (or controller 101) may also be one or more special-purpose processing devices such as an application specific integrated circuit (‘ASIC’), a field programmable gate array (‘FPGA’), a digital signal processor (‘DSP’), network processor, or the like.
  • The processing device 104 may be connected to the RAM 111 via a data communications link 106, which may be embodied as a high speed memory bus such as a Double-Data Rate 4 (‘DDR4’) bus. Stored in RAM 111 is an operating system 112. In some implementations, instructions 113 are stored in RAM 111. Instructions 113 may include computer program instructions for performing operations in a direct-mapped flash storage system. In one embodiment, a direct-mapped flash storage system is one that addresses data blocks within flash drives directly and without an address translation performed by the storage controllers of the flash drives.
  • In implementations, storage array controller 101 includes one or more host bus adapters 103A-C that are coupled to the processing device 104 via a data communications link 105A-C. In implementations, host bus adapters 103A-C may be computer hardware that connects a host system (e.g., the storage array controller) to other network and storage arrays. In some examples, host bus adapters 103A-C may be a Fibre Channel adapter that enables the storage array controller 101 to connect to a SAN, an Ethernet adapter that enables the storage array controller 101 to connect to a LAN, or the like. Host bus adapters 103A-C may be coupled to the processing device 104 via a data communications link 105A-C such as, for example, a PCIe bus.
  • In implementations, storage array controller 101 may include a host bus adapter 114 that is coupled to an expander 115. The expander 115 may be used to attach a host system to a larger number of storage drives. The expander 115 may, for example, be a SAS expander utilized to enable the host bus adapter 114 to attach to storage drives in an implementation where the host bus adapter 114 is embodied as a SAS controller.
  • In implementations, storage array controller 101 may include a switch 116 coupled to the processing device 104 via a data communications link 109. The switch 116 may be a computer hardware device that can create multiple endpoints out of a single endpoint, thereby enabling multiple devices to share a single endpoint. The switch 116 may, for example, be a PCIe switch that is coupled to a PCIe bus (e.g., data communications link 109) and presents multiple PCIe connection points to the midplane.
  • In implementations, storage array controller 101 includes a data communications link 107 for coupling the storage array controller 101 to other storage array controllers. In some examples, data communications link 107 may be a QuickPath Interconnect (‘QPI’).
  • A traditional storage system that uses traditional flash drives may implement a process across the flash drives that are part of the traditional storage system. For example, a higher level process of the storage system may initiate and control a process across the flash drives. However, a flash drive of the traditional storage system may include its own storage controller that also performs the process. Thus, for the traditional storage system, a higher level process (e.g., initiated by the storage system) and a lower level process (e.g., initiated by a storage controller of the storage system) may both be performed.
  • To resolve various deficiencies of a traditional storage system, operations may be performed by higher level processes and not by the lower level processes. For example, the flash storage system may include flash drives that do not include storage controllers that provide the process. Thus, the operating system of the flash storage system itself may initiate and control the process. This may be accomplished by a direct-mapped flash storage system that addresses data blocks within the flash drives directly and without an address translation performed by the storage controllers of the flash drives.
  • The operating system of the flash storage system may identify and maintain a list of allocation units across multiple flash drives of the flash storage system. The allocation units may be entire erase blocks or multiple erase blocks. The operating system may maintain a map or address range that directly maps addresses to erase blocks of the flash drives of the flash storage system.
  • Direct mapping to the erase blocks of the flash drives may be used to rewrite data and erase data. For example, the operations may be performed on one or more allocation units that include a first data and a second data, where the first data is to be retained and the second data is no longer being used by the flash storage system. The operating system may initiate the process to write the first data to new locations within other allocation units, erase the second data, and mark the allocation units as being available for use for subsequent data. Thus, the process may only be performed by the higher level operating system of the flash storage system without an additional lower level process being performed by controllers of the flash drives.
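  • As a minimal illustration of the higher-level process described above, the following Python sketch relocates retained data out of allocation units and marks those units reusable. The AllocationUnit structure, the reclaim function, and the use of a single spare unit are assumptions made for this example and are not drawn from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class AllocationUnit:
    erase_block_id: int
    live: dict = field(default_factory=dict)   # logical address -> data still in use
    stale: set = field(default_factory=set)    # logical addresses no longer referenced

def reclaim(units, spare):
    """Rewrite retained data into 'spare', then mark source units reusable."""
    reclaimed = []
    for unit in units:
        if not unit.stale:
            continue                        # nothing to reclaim in this unit
        spare.live.update(unit.live)        # the first data is written to a new location
        unit.live.clear()
        unit.stale.clear()                  # the second data is dropped with the erase
        reclaimed.append(unit)              # unit is now available for subsequent data
    return reclaimed
```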
  • Advantages of the process being performed only by the operating system of the flash storage system include increased reliability of the flash drives of the flash storage system, as unnecessary or redundant write operations are not performed during the process. One possible point of novelty here is the concept of initiating and controlling the process at the operating system of the flash storage system. In addition, the process can be controlled by the operating system across multiple flash drives. This is in contrast to the process being performed by a storage controller of a flash drive.
  • A storage system can consist of two storage array controllers that share a set of drives for failover purposes, of a single storage array controller that provides a storage service utilizing multiple drives, or of a distributed network of storage array controllers each with some number of drives or some amount of Flash storage, where the storage array controllers in the network collaborate to provide a complete storage service and collaborate on various aspects of the storage service, including storage allocation and garbage collection.
  • FIG. 1C illustrates a third example system 117 for data storage in accordance with some implementations. System 117 (also referred to as “storage system” herein) includes numerous elements for purposes of illustration rather than limitation. It may be noted that system 117 may include the same, more, or fewer elements configured in the same or different manner in other implementations.
  • In one embodiment, system 117 includes a dual Peripheral Component Interconnect (‘PCI’) flash storage device 118 with separately addressable fast write storage. System 117 may include a storage device controller 119. In one embodiment, storage device controller 119 may be a CPU, ASIC, FPGA, or any other circuitry that may implement control structures necessary according to the present disclosure. In one embodiment, system 117 includes flash memory devices (e.g., including flash memory devices 120 a-n), operatively coupled to various channels of the storage device controller 119. Flash memory devices 120 a-n may be presented to the controller 119 as an addressable collection of Flash pages, erase blocks, and/or control elements sufficient to allow the storage device controller 119 to program and retrieve various aspects of the Flash. In one embodiment, storage device controller 119 may perform operations on flash memory devices 120 a-n including storing and retrieving data content of pages, arranging and erasing any blocks, tracking statistics related to the use and reuse of Flash memory pages, erase blocks, and cells, tracking and predicting error codes and faults within the Flash memory, controlling voltage levels associated with programming and retrieving contents of Flash cells, etc.
  • In one embodiment, system 117 may include RAM 121 to store separately addressable fast-write data. In one embodiment, RAM 121 may be one or more separate discrete devices. In another embodiment, RAM 121 may be integrated into storage device controller 119 or multiple storage device controllers. The RAM 121 may be utilized for other purposes as well, such as temporary program memory for a processing device (e.g., a CPU) in the storage device controller 119.
  • In one embodiment, system 117 may include a stored energy device 122, such as a rechargeable battery or a capacitor. Stored energy device 122 may store energy sufficient to power the storage device controller 119, some amount of the RAM (e.g., RAM 121), and some amount of Flash memory (e.g., Flash memory 120 a-120 n) for sufficient time to write the contents of RAM to Flash memory. In one embodiment, storage device controller 119 may write the contents of RAM to Flash memory if the storage device controller detects loss of external power.
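  • A minimal sketch of this flush-on-power-loss behavior follows, assuming a dictionary stands in for the fast-write RAM and a list stands in for a flash-backed log; the function name and interfaces are illustrative and not part of the disclosed controller.

```python
def flush_on_power_loss(fast_write_ram: dict, flash_log: list, external_power_ok: bool) -> None:
    """Persist fast-write RAM contents to flash when external power is lost."""
    if external_power_ok:
        return
    # The stored energy device keeps the controller, RAM, and enough flash
    # powered just long enough to complete these writes.
    for logical_address, payload in fast_write_ram.items():
        flash_log.append((logical_address, payload))  # stand-in for a flash program operation
    fast_write_ram.clear()
```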
  • In one embodiment, system 117 includes two data communications links 123 a, 123 b. In one embodiment, data communications links 123 a, 123 b may be PCI interfaces. In another embodiment, data communications links 123 a, 123 b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Data communications links 123 a, 123 b may be based on non-volatile memory express (‘NVMe’) or NVMe over fabrics (‘NVMf’) specifications that allow external connection to the storage device controller 119 from other components in the storage system 117. It should be noted that data communications links may be interchangeably referred to herein as PCI buses for convenience.
  • System 117 may also include an external power source (not shown), which may be provided over one or both data communications links 123 a, 123 b, or which may be provided separately. An alternative embodiment includes a separate Flash memory (not shown) dedicated for use in storing the content of RAM 121. The storage device controller 119 may present a logical device over a PCI bus which may include an addressable fast-write logical device, or a distinct part of the logical address space of the storage device 118, which may be presented as PCI memory or as persistent storage. In one embodiment, operations to store into the device are directed into the RAM 121. On power failure, the storage device controller 119 may write stored content associated with the addressable fast-write logical storage to Flash memory (e.g., Flash memory 120 a-n) for long-term persistent storage.
  • In one embodiment, the logical device may include some presentation of some or all of the content of the Flash memory devices 120 a-n, where that presentation allows a storage system including a storage device 118 (e.g., storage system 117) to directly address Flash memory pages and directly reprogram erase blocks from storage system components that are external to the storage device through the PCI bus. The presentation may also allow one or more of the external components to control and retrieve other aspects of the Flash memory including some or all of: tracking statistics related to use and reuse of Flash memory pages, erase blocks, and cells across all the Flash memory devices; tracking and predicting error codes and faults within and across the Flash memory devices; controlling voltage levels associated with programming and retrieving contents of Flash cells; etc.
  • In one embodiment, the stored energy device 122 may be sufficient to ensure completion of in-progress operations to the Flash memory devices 120 a-120 n; the stored energy device 122 may power the storage device controller 119 and associated Flash memory devices (e.g., 120 a-n) for those operations, as well as for the storing of fast-write RAM to Flash memory. Stored energy device 122 may be used to store accumulated statistics and other parameters kept and tracked by the Flash memory devices 120 a-n and/or the storage device controller 119. Separate capacitors or stored energy devices (such as smaller capacitors near or embedded within the Flash memory devices themselves) may be used for some or all of the operations described herein.
  • Various schemes may be used to track and optimize the life span of the stored energy component, such as adjusting voltage levels over time, partially discharging the stored energy device 122 to measure corresponding discharge characteristics, etc. If the available energy decreases over time, the effective available capacity of the addressable fast-write storage may be decreased to ensure that it can be written safely based on the currently available stored energy.
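  • One way such a derating could be computed is sketched below; the reserve fraction and the energy cost per byte written are invented calibration constants, not values from the disclosure.

```python
def safe_fast_write_capacity(measured_energy_joules: float,
                             reserve_fraction: float = 0.2,
                             joules_per_byte: float = 2e-7) -> int:
    """Derate advertised fast-write capacity so accepted data can still be destaged."""
    usable_energy = measured_energy_joules * (1.0 - reserve_fraction)
    return max(0, int(usable_energy / joules_per_byte))
```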
  • FIG. 1D illustrates a fourth example system 124 for data storage in accordance with some implementations. In one embodiment, system 124 includes storage controllers 125 a, 125 b. In one embodiment, storage controllers 125 a, 125 b are operatively coupled to Dual PCI storage devices 119 a, 119 b and 119 c, 119 d, respectively. Storage controllers 125 a, 125 b may be operatively coupled (e.g., via a storage network 130) to some number of host computers 127 a-n.
  • In one embodiment, two storage controllers (e.g., 125 a and 125 b) provide storage services, such as a SCSI block storage array, a file server, an object server, a database or data analytics service, etc. The storage controllers 125 a, 125 b may provide services through some number of network interfaces (e.g., 126 a-d) to host computers 127 a-n outside of the storage system 124. Storage controllers 125 a, 125 b may provide integrated services or an application entirely within the storage system 124, forming a converged storage and compute system. The storage controllers 125 a, 125 b may utilize the fast write memory within or across storage devices 119 a-d to journal in-progress operations to ensure the operations are not lost on a power failure, storage controller removal, storage controller or storage system shutdown, or some fault of one or more software or hardware components within the storage system 124.
  • In one embodiment, controllers 125 a, 125 b operate as PCI masters to one or the other PCI buses 128 a, 128 b. In another embodiment, 128 a and 128 b may be based on other communications standards (e.g., HyperTransport, InfiniBand, etc.). Other storage system embodiments may operate storage controllers 125 a, 125 b as multi-masters for both PCI buses 128 a, 128 b. Alternately, a PCI/NVMe/NVMf switching infrastructure or fabric may connect multiple storage controllers. Some storage system embodiments may allow storage devices to communicate with each other directly rather than communicating only with storage controllers. In one embodiment, a storage device controller 119 a may be operable under direction from a storage controller 125 a to synthesize and transfer data to be stored into Flash memory devices from data that has been stored in RAM (e.g., RAM 121 of FIG. 1C). For example, a recalculated version of RAM content may be transferred after a storage controller has determined that an operation has fully committed across the storage system, or when fast-write memory on the device has reached a certain used capacity, or after a certain amount of time, to improve safety of the data or to release addressable fast-write capacity for reuse. This mechanism may be used, for example, to avoid a second transfer over a bus (e.g., 128 a, 128 b) from the storage controllers 125 a, 125 b. In one embodiment, a recalculation may include compressing data, attaching indexing or other metadata, combining multiple data segments together, performing erasure code calculations, etc.
  • In one embodiment, under direction from a storage controller 125 a, 125 b, a storage device controller 119 a, 119 b may be operable to calculate and transfer data to other storage devices from data stored in RAM (e.g., RAM 121 of FIG. 1C) without involvement of the storage controllers 125 a, 125 b. This operation may be used to mirror data stored in one controller 125 a to another controller 125 b, or it could be used to offload compression, data aggregation, and/or erasure coding calculations and transfers to storage devices to reduce load on storage controllers or the storage controller interface 129 a, 129 b to the PCI bus 128 a, 128 b.
  • A storage device controller 119 may include mechanisms for implementing high availability primitives for use by other parts of a storage system external to the Dual PCI storage device 118. For example, reservation or exclusion primitives may be provided so that, in a storage system with two storage controllers providing a highly available storage service, one storage controller may prevent the other storage controller from accessing or continuing to access the storage device. This could be used, for example, in cases where one controller detects that the other controller is not functioning properly or where the interconnect between the two storage controllers may itself not be functioning properly.
  • In one embodiment, a storage system for use with Dual PCI direct mapped storage devices with separately addressable fast write storage includes systems that manage erase blocks or groups of erase blocks as allocation units for storing data on behalf of the storage service, or for storing metadata (e.g., indexes, logs, etc.) associated with the storage service, or for proper management of the storage system itself. Flash pages, which may be a few kilobytes in size, may be written as data arrives or as the storage system is to persist data for long intervals of time (e.g., above a defined threshold of time). To commit data more quickly, or to reduce the number of writes to the Flash memory devices, the storage controllers may first write data into the separately addressable fast write storage on one or more storage devices.
  • In one embodiment, the storage controllers 125 a, 125 b may initiate the use of erase blocks within and across storage devices (e.g., 118) in accordance with an age and expected remaining lifespan of the storage devices, or based on other statistics. The storage controllers 125 a, 125 b may initiate garbage collection and data migration between storage devices in accordance with pages that are no longer needed, as well as to manage Flash page and erase block lifespans and to manage overall system performance.
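  • A lifespan-aware selection of this kind might, for example, prefer erase blocks on the least-worn devices; the tuple layout and wear metric below are assumptions for illustration only.

```python
def pick_erase_block(candidates):
    """candidates: iterable of (device_id, block_id, erase_count, rated_cycles)."""
    # Choose the candidate with the lowest wear fraction, i.e., the most
    # expected remaining lifespan.
    return min(candidates, key=lambda c: c[2] / c[3])
```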
  • In one embodiment, the storage system 124 may utilize mirroring and/or erasure coding schemes as part of storing data into addressable fast write storage and/or as part of writing data into allocation units associated with erase blocks. Erasure codes may be used across storage devices, as well as within erase blocks or allocation units, or within and across Flash memory devices on a single storage device, to provide redundancy against single or multiple storage device failures or to protect against internal corruptions of Flash memory pages resulting from Flash memory operations or from degradation of Flash memory cells. Mirroring and erasure coding at various levels may be used to recover from multiple types of failures that occur separately or in combination.
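  • To make the layering concrete, the toy single-parity stripe below tolerates the loss of any one shard; the schemes contemplated here may instead be Reed-Solomon or other erasure codes applied within and across devices, so this is only an illustrative stand-in.

```python
def make_stripe(shards):
    """Return data shards plus one XOR parity shard (tolerates one lost shard)."""
    length = max(len(s) for s in shards)
    padded = [s.ljust(length, b"\x00") for s in shards]
    parity = bytearray(length)
    for shard in padded:
        for i, byte in enumerate(shard):
            parity[i] ^= byte
    return padded + [bytes(parity)]

def rebuild_missing(stripe, missing_index):
    """Reconstruct one lost shard by XOR-ing the surviving shards."""
    length = len(stripe[0]) if missing_index != 0 else len(stripe[1])
    out = bytearray(length)
    for i, shard in enumerate(stripe):
        if i != missing_index:
            for j, byte in enumerate(shard):
                out[j] ^= byte
    return bytes(out)
```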
  • The embodiments depicted with reference to FIGS. 2A-G illustrate a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, or across multiple chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage medium, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server.
  • The storage cluster may be contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus, which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus; however, other technologies such as PCIe, InfiniBand, and others are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translator between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (‘NFS’), common internet file system (‘CIFS’), small computer system interface (‘SCSI’) or hypertext transfer protocol (‘HTTP’). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node. In some embodiments, multiple chassis may be coupled or connected to each other through an aggregator switch. A portion and/or all of the coupled or connected chassis may be designated as a storage cluster. As discussed above, each chassis can have multiple blades, and each blade has a media access control (‘MAC’) address, but the storage cluster is presented to an external network as having a single cluster IP address and a single MAC address in some embodiments.
  • Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, DRAM and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded CPU, solid state storage controller, and a quantity of solid state mass storage, e.g., between 2-32 terabytes (‘TB’) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (‘MRAM’) that substitutes for DRAM and enables a reduced power hold-up apparatus.
  • One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below.
  • FIG. 2A is a perspective view of a storage cluster 161, with multiple storage nodes 150 and internal solid-state memory coupled to each storage node to provide network attached storage or storage area network, in accordance with some embodiments. A network attached storage, storage area network, or a storage cluster, or other storage memory, could include one or more storage clusters 161, each having one or more storage nodes 150, in a flexible and reconfigurable arrangement of both the physical components and the amount of storage memory provided thereby. The storage cluster 161 is designed to fit in a rack, and one or more racks can be set up and populated as desired for the storage memory. The storage cluster 161 has a chassis 138 having multiple slots 142. It should be appreciated that chassis 138 may be referred to as a housing, enclosure, or rack unit. In one embodiment, the chassis 138 has fourteen slots 142, although other numbers of slots are readily devised. For example, some embodiments have four slots, eight slots, sixteen slots, thirty-two slots, or other suitable number of slots. Each slot 142 can accommodate one storage node 150 in some embodiments. Chassis 138 includes flaps 148 that can be utilized to mount the chassis 138 on a rack. Fans 144 provide air circulation for cooling of the storage nodes 150 and components thereof, although other cooling components could be used, or an embodiment could be devised without cooling components. A switch fabric 146 couples storage nodes 150 within chassis 138 together and to a network for communication to the memory. In the embodiment depicted herein, the slots 142 to the left of the switch fabric 146 and fans 144 are shown occupied by storage nodes 150, while the slots 142 to the right of the switch fabric 146 and fans 144 are empty and available for insertion of a storage node 150, for illustrative purposes. This configuration is one example, and one or more storage nodes 150 could occupy the slots 142 in various further arrangements. The storage node arrangements need not be sequential or adjacent in some embodiments. Storage nodes 150 are hot pluggable, meaning that a storage node 150 can be inserted into a slot 142 in the chassis 138, or removed from a slot 142, without stopping or powering down the system. Upon insertion or removal of storage node 150 from slot 142, the system automatically reconfigures in order to recognize and adapt to the change. Reconfiguration, in some embodiments, includes restoring redundancy and/or rebalancing data or load.
  • Each storage node 150 can have multiple components. In the embodiment shown here, the storage node 150 includes a printed circuit board 159 populated by a CPU 156, i.e., processor, a memory 154 coupled to the CPU 156, and a non-volatile solid state storage 152 coupled to the CPU 156, although other mountings and/or components could be used in further embodiments. The memory 154 has instructions which are executed by the CPU 156 and/or data operated on by the CPU 156. As further explained below, the non-volatile solid state storage 152 includes flash or, in further embodiments, other types of solid-state memory.
  • Referring to FIG. 2A, storage cluster 161 is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes 150 can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments. Plug-in storage nodes 150, whether installed in a chassis as delivered or later added, can have different sizes. For example, in one embodiment a storage node 150 can have any multiple of 4 TB, e.g., 8 TB, 12 TB, 16 TB, 32 TB, etc. In further embodiments, a storage node 150 could have any multiple of other storage amounts or capacities. Storage capacity of each storage node 150 is broadcast, and influences decisions of how to stripe the data. For maximum storage efficiency, an embodiment can self-configure as wide as possible in the stripe, subject to a predetermined requirement of continued operation with loss of up to one, or up to two, non-volatile solid state storage units 152 or storage nodes 150 within the chassis.
  • FIG. 2B is a block diagram showing a communications interconnect 171A-F and power distribution bus 172 coupling multiple storage nodes 150. Referring back to FIG. 2A, the communications interconnect 171A-F can be included in or implemented with the switch fabric 146 in some embodiments. Where multiple storage clusters 161 occupy a rack, the communications interconnect 171A-F can be included in or implemented with a top of rack switch, in some embodiments. As illustrated in FIG. 2B, storage cluster 161 is enclosed within a single chassis 138. External port 176 is coupled to storage nodes 150 through communications interconnect 171A-F, while external port 174 is coupled directly to a storage node. External power port 178 is coupled to power distribution bus 172. Storage nodes 150 may include varying amounts and differing capacities of non-volatile solid state storage 152 as described with reference to FIG. 2A. In addition, one or more storage nodes 150 may be a compute only storage node as illustrated in FIG. 2B. Authorities 168 are implemented on the non-volatile solid state storages 152, for example as lists or other data structures stored in memory. In some embodiments the authorities are stored within the non-volatile solid state storage 152 and supported by software executing on a controller or other processor of the non-volatile solid state storage 152. In a further embodiment, authorities 168 are implemented on the storage nodes 150, for example as lists or other data structures stored in the memory 154 and supported by software executing on the CPU 156 of the storage node 150. Authorities 168 control how and where data is stored in the non-volatile solid state storages 152 in some embodiments. This control assists in determining which type of erasure coding scheme is applied to the data, and which storage nodes 150 have which portions of the data. Each authority 168 may be assigned to a non-volatile solid state storage 152. Each authority may control a range of inode numbers, segment numbers, or other data identifiers which are assigned to data by a file system, by the storage nodes 150, or by the non-volatile solid state storage 152, in various embodiments.
  • Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities 168. Authorities 168 have a relationship to storage nodes 150 and non-volatile solid state storage 152 in some embodiments. Each authority 168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage 152. In some embodiments the authorities 168 for all of such ranges are distributed over the non-volatile solid state storages 152 of a storage cluster. Each storage node 150 has a network port that provides access to the non-volatile solid state storage(s) 152 of that storage node 150. Data can be stored in a segment, which is associated with a segment number and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities 168 thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority 168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage 152 and a local identifier into the set of non-volatile solid state storage 152 that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage 152 are applied to locating data for writing to or reading from the non-volatile solid state storage 152 (in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage 152, which may include or be different from the non-volatile solid state storage 152 having the authority 168 for a particular data segment.
  • If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority 168 for that data segment should be consulted, at that non-volatile solid state storage 152 or storage node 150 having that authority 168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage 152 having the authority 168 for that particular piece of data. In some embodiments there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage 152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage 152 having that authority 168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage 152 for an authority in the presence of a set of non-volatile solid state storage 152 that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage 152 that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority 168 may be consulted if a specific authority 168 is unavailable in some embodiments.
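  • A minimal sketch of the two-stage operation described above follows, assuming a power-of-two authority count so a bit mask can follow the hash, and an explicit dictionary for the second stage; none of these names or parameters come from the disclosure.

```python
import hashlib

NUM_AUTHORITIES = 128  # assumed to be a power of two so a bit mask applies

def authority_for(entity_id: str) -> int:
    """Stage one: map an entity identifier to an authority identifier."""
    digest = hashlib.sha256(entity_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") & (NUM_AUTHORITIES - 1)

def storage_unit_for(authority_id: int, authority_map: dict) -> str:
    """Stage two: an explicit mapping maintained by the cluster, recomputed
    when the set of reachable storage units changes."""
    return authority_map[authority_id]
```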
  • With reference to FIGS. 2A and 2B, two of the many tasks of the CPU 156 on a storage node 150 are to break up write data, and reassemble read data. When the system has determined that data is to be written, the authority 168 for that data is located as above. When the segment ID for data is already determined the request to write is forwarded to the non-volatile solid state storage 152 currently determined to be the host of the authority 168 determined from the segment. The host CPU 156 of the storage node 150, on which the non-volatile solid state storage 152 and corresponding authority 168 reside, then breaks up or shards the data and transmits the data out to various non-volatile solid state storage 152. The transmitted data is written as a data stripe in accordance with an erasure coding scheme. In some embodiments, data is requested to be pulled, and in other embodiments, data is pushed. In reverse, when data is read, the authority 168 for the segment ID containing the data is located as described above. The host CPU 156 of the storage node 150 on which the non-volatile solid state storage 152 and corresponding authority 168 reside requests the data from the non-volatile solid state storage and corresponding storage nodes pointed to by the authority. In some embodiments the data is read from flash storage as a data stripe. The host CPU 156 of storage node 150 then reassembles the read data, correcting any errors (if present) according to the appropriate erasure coding scheme, and forwards the reassembled data to the network. In further embodiments, some or all of these tasks can be handled in the non-volatile solid state storage 152. In some embodiments, the segment host requests the data be sent to storage node 150 by requesting pages from storage and then sending the data to the storage node making the original request.
  • In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities.
  • A segment is a logical container of data in accordance with some embodiments. A segment is an address space between medium address space and physical flash locations, i.e., data segment numbers are in this address space. Segments may also contain meta-data, which enable data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage 152 coupled to the host CPUs 156 (See FIGS. 2E and 2G) in accordance with an erasure coding scheme. Usage of the term segments refers to the container and its place in the address space of segments in some embodiments. Usage of the term stripe refers to the same set of shards as a segment and includes how the shards are distributed along with redundancy or parity information in accordance with some embodiments.
  • A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit 152 may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage 152 is able to allocate addresses without synchronization with other non-volatile solid state storage 152.
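  • The chain of translations can be pictured as a series of table lookups; every table in the sketch below is a hypothetical stand-in for the system's metadata structures.

```python
def resolve(file_name, directory, inodes, medium_map, segment_map):
    """file name -> inode -> medium address -> segment address -> flash location."""
    inode = directory[file_name]                   # directory entry links to an inode
    medium_address = inodes[inode]                 # inode points into medium address space
    segment_address = medium_map[medium_address]   # possibly via indirect mediums
    return segment_map[segment_address]            # segment translated to a physical flash location
```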
  • Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (‘LDPC’) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout.
  • In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudo randomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (‘RUSH’) family of hashes, including Controlled Replication Under Scalable Hashing (‘CRUSH’). In some embodiments, pseudo-random assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to conclude the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines.
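  • As one concrete example of a placement function every node can evaluate identically from the same reachable-node set, the sketch below uses rendezvous (highest-random-weight) hashing; the disclosure names the RUSH/CRUSH family of functions, and this simpler scheme is shown only to illustrate the shared-calculation idea.

```python
import hashlib

def owner_for_authority(authority_id: int, reachable_nodes: list) -> str:
    """Every node computes the same owner given the same reachable set."""
    def weight(node: str) -> int:
        key = f"{authority_id}:{node}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return max(reachable_nodes, key=weight)
```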
  • Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss.
  • In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI express, storage nodes are connected together within a single chassis using Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or fiber channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the internet.
  • Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments.
  • As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND.
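  • The differing durability paths might be expressed as a simple routing rule, as in the hedged sketch below; the message classes and the media stand-ins are illustrative only.

```python
def persist_message(msg_type: str, payload: bytes, nvram_replicas: list, nand_log: list) -> None:
    """Route a persistent message to storage media according to its type."""
    if msg_type == "latency_sensitive":
        for replica in nvram_replicas:       # replicate in NVRAM first
            replica.append(payload)
        nand_log.append(payload)             # later destaged to NAND (shown inline here)
    else:                                    # e.g., background rebalancing traffic
        nand_log.append(payload)             # persisted directly to NAND
```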
  • Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturer, hardware supply chain and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing, i.e., the system supports non-disruptive upgrades.
  • In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments.
  • FIG. 2C is a multiple level block diagram, showing contents of a storage node 150 and contents of a non-volatile solid state storage 152 of the storage node 150. Data is communicated to and from the storage node 150 by a network interface controller (‘NIC’) 202 in some embodiments. Each storage node 150 has a CPU 156, and one or more non-volatile solid state storage 152, as discussed above. Moving down one level in FIG. 2C, each non-volatile solid state storage 152 has a relatively fast non-volatile solid state memory, such as nonvolatile random access memory (‘NVRAM’) 204, and flash memory 206. In some embodiments, NVRAM 204 may be a component that does not require program/erase cycles (DRAM, MRAM, PCM), and can be a memory that can support being written vastly more often than the memory is read from. Moving down another level in FIG. 2C, the NVRAM 204 is implemented in one embodiment as high speed volatile memory, such as dynamic random access memory (DRAM) 216, backed up by energy reserve 218. Energy reserve 218 provides sufficient electrical power to keep the DRAM 216 powered long enough for contents to be transferred to the flash memory 206 in the event of power failure. In some embodiments, energy reserve 218 is a capacitor, super-capacitor, battery, or other device, that supplies a suitable supply of energy sufficient to enable the transfer of the contents of DRAM 216 to a stable storage medium in the case of power loss. The flash memory 206 is implemented as multiple flash dies 222, which may be referred to as packages of flash dies 222 or an array of flash dies 222. It should be appreciated that the flash dies 222 could be packaged in any number of ways, with a single die per package, multiple dies per package (i.e. multichip packages), in hybrid packages, as bare dies on a printed circuit board or other substrate, as encapsulated dies, etc. In the embodiment shown, the non-volatile solid state storage 152 has a controller 212 or other processor, and an input output (I/O) port 210 coupled to the controller 212. I/O port 210 is coupled to the CPU 156 and/or the network interface controller 202 of the flash storage node 150. Flash input output (I/O) port 220 is coupled to the flash dies 222, and a direct memory access unit (DMA) 214 is coupled to the controller 212, the DRAM 216 and the flash dies 222. In the embodiment shown, the I/O port 210, controller 212, DMA unit 214 and flash I/O port 220 are implemented on a programmable logic device (‘PLD’) 208, e.g., a field programmable gate array (FPGA). In this embodiment, each flash die 222 has pages, organized as sixteen kB (kilobyte) pages 224, and a register 226 through which data can be written to or read from the flash die 222. In further embodiments, other types of solid-state memory are used in place of, or in addition to flash memory illustrated within flash die 222.
  • Storage clusters 161, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes 150 are part of a collection that creates the storage cluster 161. Each storage node 150 owns a slice of data and computing required to provide the data. Multiple storage nodes 150 cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units 152 described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node 150 is shifted into a storage unit 152, transforming the storage unit 152 into a combination of storage unit 152 and storage node 150. Placing computing (relative to storage data) into the storage unit 152 places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster 161, as described herein, multiple controllers in multiple storage units 152 and/or storage nodes 150 cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on).
  • FIG. 2D shows a storage server environment, which uses embodiments of the storage nodes 150 and storage units 152 of FIGS. 2A-C. In this version, each storage unit 152 has a processor such as controller 212 (see FIG. 2C), an FPGA (field programmable gate array), flash memory 206, and NVRAM 204 (which is super-capacitor backed DRAM 216, see FIGS. 2B and 2C) on a PCIe (peripheral component interconnect express) board in a chassis 138 (see FIG. 2A). The storage unit 152 may be implemented as a single board containing storage, and may be the largest tolerable failure domain inside the chassis. In some embodiments, up to two storage units 152 may fail and the device will continue with no data loss.
  • The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM 204 is a contiguous block of reserved memory in the storage unit 152 DRAM 216, and is backed by NAND flash. NVRAM 204 is logically divided into multiple memory regions written as spool regions (e.g., spool_region). Space within the NVRAM 204 spools is managed by each authority 168 independently. Each device provides an amount of storage space to each authority 168. That authority 168 further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit 152 fails, onboard super-capacitors provide a short duration of power hold up. During this holdup interval, the contents of the NVRAM 204 are flushed to flash memory 206. On the next power-on, the contents of the NVRAM 204 are recovered from the flash memory 206.
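  • Per-authority spool accounting within the NVRAM region could look like the sketch below; the class name, capacity model, and allocation policy are assumptions for illustration.

```python
class NvramSpool:
    """Track how much of an NVRAM spool region each authority is using."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used_by_authority = {}

    def allocate(self, authority_id: int, nbytes: int) -> bool:
        if sum(self.used_by_authority.values()) + nbytes > self.capacity:
            return False                      # caller must destage to flash first
        self.used_by_authority[authority_id] = self.used_by_authority.get(authority_id, 0) + nbytes
        return True

    def release(self, authority_id: int, nbytes: int) -> None:
        current = self.used_by_authority.get(authority_id, 0)
        self.used_by_authority[authority_id] = max(0, current - nbytes)
```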
  • As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities 168. This distribution of logical control is shown in FIG. 2D as a host controller 242, mid-tier controller 244 and storage unit controller(s) 246. Management of the control plane and the storage plane are treated independently, although parts may be physically co-located on the same blade. Each authority 168 effectively serves as an independent controller. Each authority 168 provides its own data and metadata structures, its own background workers, and maintains its own lifecycle.
  • FIG. 2E is a blade 252 hardware block diagram, showing a control plane 254, compute and storage planes 256, 258, and authorities 168 interacting with underlying physical resources, using embodiments of the storage nodes 150 and storage units 152 of FIGS. 2A-C in the storage server environment of FIG. 2D. The control plane 254 is partitioned into a number of authorities 168 which can use the compute resources in the compute plane 256 to run on any of the blades 252. The storage plane 258 is partitioned into a set of devices, each of which provides access to flash 206 and NVRAM 204 resources.
  • In the compute and storage planes 256, 258 of FIG. 2E, the authorities 168 interact with the underlying physical resources (i.e., devices). From the point of view of an authority 168, its resources are striped over all of the physical devices. From the point of view of a device, it provides resources to all authorities 168, irrespective of where the authorities happen to run. Each authority 168 has allocated or has been allocated one or more partitions 260 of storage memory in the storage units 152, e.g. partitions 260 in flash memory 206 and NVRAM 204. Each authority 168 uses those allocated partitions 260 that belong to it, for writing or reading user data. Authorities can be associated with differing amounts of physical storage of the system. For example, one authority 168 could have a larger number of partitions 260 or larger sized partitions 260 in one or more storage units 152 than one or more other authorities 168.
  • FIG. 2F depicts elasticity software layers in blades 252 of a storage cluster, in accordance with some embodiments. In the elasticity structure, elasticity software is symmetric, i.e., each blade's compute module 270 runs the three identical layers of processes depicted in FIG. 2F. Storage managers 274 execute read and write requests from other blades 252 for data and metadata stored in local storage unit 152 NVRAM 204 and flash 206. Authorities 168 fulfill client requests by issuing the necessary reads and writes to the blades 252 on whose storage units 152 the corresponding data or metadata resides. Endpoints 272 parse client connection requests received from switch fabric 146 supervisory software, relay the client connection requests to the authorities 168 responsible for fulfillment, and relay the authorities' 168 responses to clients. The symmetric three-layer structure enables the storage system's high degree of concurrency. Elasticity scales out efficiently and reliably in these embodiments. In addition, elasticity implements a unique scale-out technique that balances work evenly across all resources regardless of client access pattern, and maximizes concurrency by eliminating much of the need for inter-blade coordination that typically occurs with conventional distributed locking.
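The following toy sketch models the three symmetric layers (endpoint, authority, storage manager) as plain classes to show how a client request flows through them; the interfaces are assumed for illustration and are not the embodiments' actual software.

```python
# Toy model of the three symmetric layers; interfaces are assumed for illustration.
class StorageManager:
    """Executes reads and writes against this blade's local NVRAM/flash."""
    def __init__(self):
        self.local = {}
    def write(self, key, value):
        self.local[key] = value
    def read(self, key):
        return self.local.get(key)

class Authority:
    """Fulfills requests by issuing reads/writes to the blade holding the data."""
    def __init__(self, manager: StorageManager):
        self.manager = manager
    def handle(self, op, key, value=None):
        if op == "write":
            self.manager.write(key, value)
            return "ok"
        return self.manager.read(key)

class Endpoint:
    """Parses client requests and relays them to the responsible authority."""
    def __init__(self, route):
        self.route = route                     # key -> responsible authority
    def client_request(self, op, key, value=None):
        return self.route(key).handle(op, key, value)

manager = StorageManager()
authority = Authority(manager)
endpoint = Endpoint(lambda key: authority)     # single-authority routing for the sketch
endpoint.client_request("write", "vol1/blk7", b"data")
assert endpoint.client_request("read", "vol1/blk7") == b"data"
```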
  • Still referring to FIG. 2F, authorities 168 running in the compute modules 270 of a blade 252 perform the internal operations required to fulfill client requests. One feature of elasticity is that authorities 168 are stateless, i.e., they cache active data and metadata in their own blades' 252 DRAMs for fast access, but the authorities store every update in their NVRAM 204 partitions on three separate blades 252 until the update has been written to flash 206. All the storage system writes to NVRAM 204 are in triplicate to partitions on three separate blades 252 in some embodiments. With triple-mirrored NVRAM 204 and persistent storage protected by parity and Reed-Solomon RAID checksums, the storage system can survive concurrent failure of two blades 252 with no loss of data, metadata, or access to either.
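A hedged sketch of the triple-mirrored NVRAM write path described above: an update is staged on three separate blades before it may be acknowledged. The helper name and blade representation are assumptions for illustration.

```python
# Hedged sketch: stage an update in NVRAM partitions on three separate blades
# before it can be acknowledged.
def stage_update(update, blades, copies=3):
    """Write `update` into the NVRAM of `copies` distinct blades; return the targets."""
    if len(blades) < copies:
        raise RuntimeError("not enough blades for triple-mirrored NVRAM")
    targets = blades[:copies]
    for blade in targets:
        blade.setdefault("nvram", []).append(update)
    return targets                              # only now may the write be acknowledged

blades = [{"name": f"blade-{i}"} for i in range(4)]
stage_update({"volume": "v1", "offset": 0, "data": b"x"}, blades)
```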
  • Because authorities 168 are stateless, they can migrate between blades 252. Each authority 168 has a unique identifier. NVRAM 204 and flash 206 partitions are associated with authorities' 168 identifiers, not with the blades 252 on which they are running, in some embodiments. Thus, when an authority 168 migrates, the authority 168 continues to manage the same storage partitions from its new location. When a new blade 252 is installed in an embodiment of the storage cluster, the system automatically rebalances load by: partitioning the new blade's 252 storage for use by the system's authorities 168, migrating selected authorities 168 to the new blade 252, starting endpoints 272 on the new blade 252 and including them in the switch fabric's 146 client connection distribution algorithm.
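The rebalancing step described above might look roughly like the sketch below, which moves authorities from the most loaded blade to a newly installed one; the move-from-busiest heuristic is an assumption for illustration, and each authority keeps its identifier (and therefore its partitions) when it moves.

```python
# Illustrative rebalancing: move authorities from the most loaded blade to the
# newly installed one until the load is roughly even.
def rebalance(blade_to_authorities, new_blade):
    blade_to_authorities[new_blade] = []
    while True:
        busiest = max(blade_to_authorities, key=lambda b: len(blade_to_authorities[b]))
        if len(blade_to_authorities[busiest]) - len(blade_to_authorities[new_blade]) <= 1:
            break
        # The authority keeps its identifier, and therefore its partitions, after the move.
        blade_to_authorities[new_blade].append(blade_to_authorities[busiest].pop())
    return blade_to_authorities

cluster = {"blade-0": ["a1", "a2", "a3", "a4"], "blade-1": ["a5", "a6"]}
rebalance(cluster, "blade-2")     # ends with two authorities on each blade
```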
  • From their new locations, migrated authorities 168 persist the contents of their NVRAM 204 partitions on flash 206, process read and write requests from other authorities 168, and fulfill the client requests that endpoints 272 direct to them. Similarly, if a blade 252 fails or is removed, the system redistributes its authorities 168 among the system's remaining blades 252. The redistributed authorities 168 continue to perform their original functions from their new locations.
  • FIG. 2G depicts authorities 168 and storage resources in blades 252 of a storage cluster, in accordance with some embodiments. Each authority 168 is exclusively responsible for a partition of the flash 206 and NVRAM 204 on each blade 252. The authority 168 manages the content and integrity of its partitions independently of other authorities 168. Authorities 168 compress incoming data and preserve it temporarily in their NVRAM 204 partitions, and then consolidate, RAID-protect, and persist the data in segments of the storage in their flash 206 partitions. As the authorities 168 write data to flash 206, storage managers 274 perform the necessary flash translation to optimize write performance and maximize media longevity. In the background, authorities 168 “garbage collect,” or reclaim space occupied by data that clients have made obsolete by overwriting the data. It should be appreciated that since authorities' 168 partitions are disjoint, there is no need for distributed locking to execute client reads and writes or to perform background functions.
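A tiny sketch of the background garbage collection idea: segments are scanned, blocks that clients have overwritten (and that are therefore no longer live) are dropped, and surviving blocks are rewritten into fresh segments. Segment and block representations are assumptions for illustration.

```python
# Illustrative sketch: drop blocks that are no longer live and rewrite survivors
# into fresh segments, reporting how much space was reclaimed.
def garbage_collect(segments, live_blocks):
    survivors, reclaimed = [], 0
    for segment in segments:
        live = [block for block in segment if block in live_blocks]
        reclaimed += len(segment) - len(live)
        if live:
            survivors.append(live)              # survivors move to a fresh segment
    return survivors, reclaimed

segments = [["b1", "b2"], ["b3"]]
fresh, freed = garbage_collect(segments, live_blocks={"b1", "b3"})
# fresh == [["b1"], ["b3"]], freed == 1 (block "b2" was overwritten by a client)
```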
  • The embodiments described herein may utilize various software, communication and/or networking protocols. In addition, the configuration of the hardware and/or software may be adjusted to accommodate various protocols. For example, the embodiments may utilize Active Directory, which is a database-based system that provides authentication, directory, policy, and other services in a WINDOWS environment. In these embodiments, LDAP (Lightweight Directory Access Protocol) is one example application protocol for querying and modifying items in directory service providers such as Active Directory. In some embodiments, a network lock manager (‘NLM’) is utilized as a facility that works in cooperation with the Network File System (‘NFS’) to provide a System V style of advisory file and record locking over a network. The Server Message Block (‘SMB’) protocol, one version of which is also known as Common Internet File System (‘CIFS’), may be integrated with the storage systems discussed herein. SMB operates as an application-layer network protocol typically used for providing shared access to files, printers, and serial ports and miscellaneous communications between nodes on a network. SMB also provides an authenticated inter-process communication mechanism. AMAZON™ S3 (Simple Storage Service) is a web service offered by Amazon Web Services, and the systems described herein may interface with Amazon S3 through web services interfaces (REST (representational state transfer), SOAP (simple object access protocol), and BitTorrent). A RESTful API (application programming interface) breaks down a transaction to create a series of small modules. Each module addresses a particular underlying part of the transaction. The control or permissions provided with these embodiments, especially for object data, may include utilization of an access control list (‘ACL’). The ACL is a list of permissions attached to an object and the ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. The systems may utilize Internet Protocol version 6 (‘IPv6’), as well as IPv4, for the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. The routing of packets between networked systems may include Equal-cost multi-path routing (‘ECMP’), which is a routing strategy where next-hop packet forwarding to a single destination can occur over multiple “best paths” which tie for top place in routing metric calculations. Multi-path routing can be used in conjunction with most routing protocols, because it is a per-hop decision limited to a single router. The software may support Multi-tenancy, which is an architecture in which a single instance of a software application serves multiple customers. Each customer may be referred to as a tenant. Tenants may be given the ability to customize some parts of the application, but may not customize the application's code, in some embodiments. The embodiments may maintain audit logs. An audit log is a document that records an event in a computing system. In addition to documenting what resources were accessed, audit log entries typically include destination and source addresses, a timestamp, and user login information for compliance with various regulations. The embodiments may support various key management policies, such as encryption key rotation. In addition, the system may support dynamic root passwords or some variation of dynamically changing passwords.
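As one small, hedged example of the access-control-list concept mentioned above, the sketch below attaches (principal, allowed-operations) entries to an object and checks a requested operation against them; the data layout is an assumption for illustration.

```python
# Illustrative ACL check: an object carries (principal, allowed operations) entries.
ACLS = {
    "bucket/report.csv": [("alice", {"read", "write"}), ("auditors", {"read"})],
}

def is_allowed(principal, operation, obj):
    """Return True if any ACL entry grants `principal` the `operation` on `obj`."""
    return any(principal == who and operation in ops
               for who, ops in ACLS.get(obj, []))

assert is_allowed("alice", "write", "bucket/report.csv")
assert not is_allowed("auditors", "write", "bucket/report.csv")
```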
  • FIG. 3A sets forth a diagram of a storage system 306 that is coupled for data communications with a cloud services provider 302 in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system 306 depicted in FIG. 3A may be similar to the storage systems described above with reference to FIGS. 1A-1D and FIGS. 2A-2G. In some embodiments, the storage system 306 depicted in FIG. 3A may be embodied as a storage system that includes imbalanced active/active controllers, as a storage system that includes balanced active/active controllers, as a storage system that includes active/active controllers where less than all of each controller's resources are utilized such that each controller has reserve resources that may be used to support failover, as a storage system that includes fully active/active controllers, as a storage system that includes dataset-segregated controllers, as a storage system that includes dual-layer architectures with front-end controllers and back-end integrated storage controllers, as a storage system that includes scale-out clusters of dual-controller arrays, as well as combinations of such embodiments.
  • In the example depicted in FIG. 3A, the storage system 306 is coupled to the cloud services provider 302 via a data communications link 304. The data communications link 304 may be embodied as a dedicated data communications link, as a data communications pathway that is provided through the use of one or more data communications networks such as a wide area network (‘WAN’) or local area network (‘LAN’), or as some other mechanism capable of transporting digital information between the storage system 306 and the cloud services provider 302. Such a data communications link 304 may be fully wired, fully wireless, or some aggregation of wired and wireless data communications pathways. In such an example, digital information may be exchanged between the storage system 306 and the cloud services provider 302 via the data communications link 304 using one or more data communications protocols. For example, digital information may be exchanged between the storage system 306 and the cloud services provider 302 via the data communications link 304 using the handheld device transfer protocol (‘HDTP’), hypertext transfer protocol (‘HTTP’), internet protocol (‘IP’), real-time transfer protocol (‘RTP’), transmission control protocol (‘TCP’), user datagram protocol (‘UDP’), wireless application protocol (‘WAP’), or other protocol.
  • The cloud services provider 302 depicted in FIG. 3A may be embodied, for example, as a system and computing environment that provides services to users of the cloud services provider 302 through the sharing of computing resources via the data communications link 304. The cloud services provider 302 may provide on-demand access to a shared pool of configurable computing resources such as computer networks, servers, storage, applications and services, and so on. The shared pool of configurable resources may be rapidly provisioned and released to a user of the cloud services provider 302 with minimal management effort. Generally, the user of the cloud services provider 302 is unaware of the exact computing resources utilized by the cloud services provider 302 to provide the services. Although in many cases such a cloud services provider 302 may be accessible via the Internet, readers of skill in the art will recognize that any system that abstracts the use of shared resources to provide services to a user through any data communications link may be considered a cloud services provider 302.
  • In the example depicted in FIG. 3A, the cloud services provider 302 may be configured to provide a variety of services to the storage system 306 and users of the storage system 306 through the implementation of various service models. For example, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of an infrastructure as a service (‘IaaS’) service model where the cloud services provider 302 offers computing infrastructure such as virtual machines and other resources as a service to subscribers. In addition, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a platform as a service (‘PaaS’) service model where the cloud services provider 302 offers a development environment to application developers. Such a development environment may include, for example, an operating system, programming-language execution environment, database, web server, or other components that may be utilized by application developers to develop and run software solutions on a cloud platform. Furthermore, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a software as a service (‘SaaS’) service model where the cloud services provider 302 offers application software, databases, as well as the platforms that are used to run the applications to the storage system 306 and users of the storage system 306, providing the storage system 306 and users of the storage system 306 with on-demand software and eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. The cloud services provider 302 may be further configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of an authentication as a service (‘AaaS’) service model where the cloud services provider 302 offers authentication services that can be used to secure access to applications, data sources, or other resources. The cloud services provider 302 may also be configured to provide services to the storage system 306 and users of the storage system 306 through the implementation of a storage as a service model where the cloud services provider 302 offers access to its storage infrastructure for use by the storage system 306 and users of the storage system 306. Readers will appreciate that the cloud services provider 302 may be configured to provide additional services to the storage system 306 and users of the storage system 306 through the implementation of additional service models, as the service models described above are included only for explanatory purposes and in no way represent a limitation of the services that may be offered by the cloud services provider 302 or a limitation as to the service models that may be implemented by the cloud services provider 302.
  • In the example depicted in FIG. 3A, the cloud services provider 302 may be embodied, for example, as a private cloud, as a public cloud, or as a combination of a private cloud and public cloud. In an embodiment in which the cloud services provider 302 is embodied as a private cloud, the cloud services provider 302 may be dedicated to providing services to a single organization rather than providing services to multiple organizations. In an embodiment where the cloud services provider 302 is embodied as a public cloud, the cloud services provider 302 may provide services to multiple organizations. Public cloud and private cloud deployment models may differ and may come with various advantages and disadvantages. For example, because a public cloud deployment involves the sharing of a computing infrastructure across different organizations, such a deployment may not be ideal for organizations with security concerns, mission-critical workloads, uptime requirements, and so on. While a private cloud deployment can address some of these issues, a private cloud deployment may require on-premises staff to manage the private cloud. In still alternative embodiments, the cloud services provider 302 may be embodied as a mix of private and public cloud services with a hybrid cloud deployment.
  • Although not explicitly depicted in FIG. 3A, readers will appreciate that additional hardware components and additional software components may be necessary to facilitate the delivery of cloud services to the storage system 306 and users of the storage system 306. For example, the storage system 306 may be coupled to (or even include) a cloud storage gateway. Such a cloud storage gateway may be embodied, for example, as a hardware-based or software-based appliance that is located on premises with the storage system 306. Such a cloud storage gateway may operate as a bridge between local applications that are executing on the storage system 306 and remote, cloud-based storage that is utilized by the storage system 306. Through the use of a cloud storage gateway, organizations may move primary iSCSI or NAS storage to the cloud services provider 302, thereby enabling the organization to save space on their on-premises storage systems. Such a cloud storage gateway may be configured to emulate a disk array, a block-based device, a file server, or other storage system that can translate the SCSI commands, file server commands, or other appropriate commands into REST-space protocols that facilitate communications with the cloud services provider 302.
  • In order to enable the storage system 306 and users of the storage system 306 to make use of the services provided by the cloud services provider 302, a cloud migration process may take place during which data, applications, or other elements from an organization's local systems (or even from another cloud environment) are moved to the cloud services provider 302. In order to successfully migrate data, applications, or other elements to the cloud services provider's 302 environment, middleware such as a cloud migration tool may be utilized to bridge gaps between the cloud services provider's 302 environment and an organization's environment. Such cloud migration tools may also be configured to address potentially high network costs and long transfer times associated with migrating large volumes of data to the cloud services provider 302, as well as addressing security concerns associated with transferring sensitive data to the cloud services provider 302 over data communications networks. In order to further enable the storage system 306 and users of the storage system 306 to make use of the services provided by the cloud services provider 302, a cloud orchestrator may also be used to arrange and coordinate automated tasks in pursuit of creating a consolidated process or workflow. Such a cloud orchestrator may perform tasks such as configuring various components, whether those components are cloud components or on-premises components, as well as managing the interconnections between such components. The cloud orchestrator can simplify the inter-component communication and connections to ensure that links are correctly configured and maintained.
  • In the example depicted in FIG. 3A, and as described briefly above, the cloud services provider 302 may be configured to provide services to the storage system 306 and users of the storage system 306 through the usage of a SaaS service model where the cloud services provider 302 offers application software, databases, as well as the platforms that are used to run the applications to the storage system 306 and users of the storage system 306, providing the storage system 306 and users of the storage system 306 with on-demand software and eliminating the need to install and run the application on local computers, which may simplify maintenance and support of the application. Such applications may take many forms in accordance with various embodiments of the present disclosure. For example, the cloud services provider 302 may be configured to provide access to data analytics applications to the storage system 306 and users of the storage system 306. Such data analytics applications may be configured, for example, to receive telemetry data phoned home by the storage system 306. Such telemetry data may describe various operating characteristics of the storage system 306 and may be analyzed, for example, to determine the health of the storage system 306, to identify workloads that are executing on the storage system 306, to predict when the storage system 306 will run out of various resources, to recommend configuration changes, hardware or software upgrades, workflow migrations, or other actions that may improve the operation of the storage system 306.
  • The cloud services provider 302 may also be configured to provide access to virtualized computing environments to the storage system 306 and users of the storage system 306. Such virtualized computing environments may be embodied, for example, as a virtual machine or other virtualized computer hardware platforms, virtual storage devices, virtualized computer network resources, and so on. Examples of such virtualized environments can include virtual machines that are created to emulate an actual computer, virtualized desktop environments that separate a logical desktop from a physical machine, virtualized file systems that allow uniform access to different types of concrete file systems, and many others.
  • For further explanation, FIG. 3B sets forth a diagram of a storage system 306 in accordance with some embodiments of the present disclosure. Although depicted in less detail, the storage system 306 depicted in FIG. 3B may be similar to the storage systems described above with reference to FIGS. 1A-1D and FIGS. 2A-2G as the storage system may include many of the components described above.
  • The storage system 306 depicted in FIG. 3B may include storage resources 308, which may be embodied in many forms. For example, in some embodiments the storage resources 308 can include nano-RAM or another form of nonvolatile random access memory that utilizes carbon nanotubes deposited on a substrate. In some embodiments, the storage resources 308 may include 3D crosspoint non-volatile memory in which bit storage is based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. In some embodiments, the storage resources 308 may include flash memory, including single-level cell (‘SLC’) NAND flash, multi-level cell (‘MLC’) NAND flash, triple-level cell (‘TLC’) NAND flash, quad-level cell (‘QLC’) NAND flash, and others. In some embodiments, the storage resources 308 may include non-volatile magnetoresistive random-access memory (‘MRAM’), including spin transfer torque (‘STT’) MRAM, in which data is stored through the use of magnetic storage elements. In some embodiments, the example storage resources 308 may include non-volatile phase-change memory (‘PCM’) that may have the ability to hold multiple bits in a single cell as cells can achieve a number of distinct intermediary states. In some embodiments, the storage resources 308 may include quantum memory that allows for the storage and retrieval of photonic quantum information. In some embodiments, the example storage resources 308 may include resistive random-access memory (‘ReRAM’) in which data is stored by changing the resistance across a dielectric solid-state material. In some embodiments, the storage resources 308 may include storage class memory (‘SCM’) in which solid-state nonvolatile memory may be manufactured at a high density using some combination of sub-lithographic patterning techniques, multiple bits per cell, multiple layers of devices, and so on. Readers will appreciate that other forms of computer memories and storage devices may be utilized by the storage systems described above, including DRAM, SRAM, EEPROM, universal memory, and many others. The storage resources 308 depicted in FIG. 3B may be embodied in a variety of form factors, including but not limited to, dual in-line memory modules (‘DIMMs’), non-volatile dual in-line memory modules (‘NVDIMMs’), M.2, U.2, and others.
  • The example storage system 306 depicted in FIG. 3B may implement a variety of storage architectures. For example, storage systems in accordance with some embodiments of the present disclosure may utilize block storage where data is stored in blocks, and each block essentially acts as an individual hard drive. Storage systems in accordance with some embodiments of the present disclosure may utilize object storage, where data is managed as objects. Each object may include the data itself, a variable amount of metadata, and a globally unique identifier, where object storage can be implemented at multiple levels (e.g., device level, system level, interface level). Storage systems in accordance with some embodiments of the present disclosure may utilize file storage in which data is stored in a hierarchical structure. Such data may be saved in files and folders, and presented to both the system storing it and the system retrieving it in the same format.
  • The example storage system 306 depicted in FIG. 3B may be embodied as a storage system in which additional storage resources can be added through the use of a scale-up model, additional storage resources can be added through the use of a scale-out model, or through some combination thereof. In a scale-up model, additional storage may be added by adding additional storage devices. In a scale-out model, however, additional storage nodes may be added to a cluster of storage nodes, where such storage nodes can include additional processing resources, additional networking resources, and so on.
  • The storage system 306 depicted in FIG. 3B also includes communications resources 310 that may be useful in facilitating data communications between components within the storage system 306, as well as data communications between the storage system 306 and computing devices that are outside of the storage system 306. The communications resources 310 may be configured to utilize a variety of different protocols and data communication fabrics to facilitate data communications between components within the storage systems as well as computing devices that are outside of the storage system. For example, the communications resources 310 can include fibre channel (‘FC’) technologies such as FC fabrics and FC protocols that can transport SCSI commands over FC networks. The communications resources 310 can also include FC over ethernet (‘FCoE’) technologies through which FC frames are encapsulated and transmitted over Ethernet networks. The communications resources 310 can also include InfiniBand (‘IB’) technologies in which a switched fabric topology is utilized to facilitate transmissions between channel adapters. The communications resources 310 can also include NVM Express (‘NVMe’) technologies and NVMe over fabrics (‘NVMeoF’) technologies through which non-volatile storage media attached via a PCI express (‘PCIe’) bus may be accessed. The communications resources 310 can also include mechanisms for accessing storage resources 308 within the storage system 306 utilizing serial attached SCSI (‘SAS’), serial ATA (‘SATA’) bus interfaces for connecting storage resources 308 within the storage system 306 to host bus adapters within the storage system 306, internet small computer systems interface (‘iSCSI’) technologies to provide block-level access to storage resources 308 within the storage system 306, and other communications resources that may be useful in facilitating data communications between components within the storage system 306, as well as data communications between the storage system 306 and computing devices that are outside of the storage system 306.
  • The storage system 306 depicted in FIG. 3B also includes processing resources 312 that may be useful in executing computer program instructions and performing other computational tasks within the storage system 306. The processing resources 312 may include one or more application-specific integrated circuits (‘ASICs’) that are customized for some particular purpose as well as one or more central processing units (‘CPUs’). The processing resources 312 may also include one or more digital signal processors (‘DSPs’), one or more field-programmable gate arrays (‘FPGAs’), one or more systems on a chip (‘SoCs’), or other form of processing resources 312. The storage system 306 may utilize the processing resources 312 to perform a variety of tasks including, but not limited to, supporting the execution of software resources 314 that will be described in greater detail below.
  • The storage system 306 depicted in FIG. 3B also includes software resources 314 that, when executed by processing resources 312 within the storage system 306, may perform various tasks. The software resources 314 may include, for example, one or more modules of computer program instructions that when executed by processing resources 312 within the storage system 306 are useful in carrying out various data protection techniques to preserve the integrity of data that is stored within the storage systems. Readers will appreciate that such data protection techniques may be carried out, for example, by system software executing on computer hardware within the storage system, by a cloud services provider, or in other ways. Such data protection techniques can include, for example, data archiving techniques that cause data that is no longer actively used to be moved to a separate storage device or separate storage system for long-term retention, data backup techniques through which data stored in the storage system may be copied and stored in a distinct location to avoid data loss in the event of equipment failure or some other form of catastrophe with the storage system, data replication techniques through which data stored in the storage system is replicated to another storage system such that the data may be accessible via multiple storage systems, data snapshotting techniques through which the state of data within the storage system is captured at various points in time, data and database cloning techniques through which duplicate copies of data and databases may be created, and other data protection techniques. Through the use of such data protection techniques, business continuity and disaster recovery objectives may be met as a failure of the storage system may not result in the loss of data stored in the storage system.
  • The software resources 314 may also include software that is useful in implementing software-defined storage (‘SDS’). In such an example, the software resources 314 may include one or more modules of computer program instructions that, when executed, are useful in policy-based provisioning and management of data storage that is independent of the underlying hardware. Such software resources 314 may be useful in implementing storage virtualization to separate the storage hardware from the software that manages the storage hardware.
  • The software resources 314 may also include software that is useful in facilitating and optimizing I/O operations that are directed to the storage resources 308 in the storage system 306. For example, the software resources 314 may include software modules that carry out various data reduction techniques such as, for example, data compression, data deduplication, and others. The software resources 314 may include software modules that intelligently group together I/O operations to facilitate better usage of the underlying storage resources 308, software modules that perform data migration operations to migrate data within a storage system, as well as software modules that perform other functions. Such software resources 314 may be embodied as one or more software containers or in many other ways.
  • Readers will appreciate that the various components depicted in FIG. 3B may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may minimize compatibility issues between various components within the storage system 306 while also reducing various costs associated with the establishment and operation of the storage system 306. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach, or in other ways.
  • Readers will appreciate that the storage system 306 depicted in FIG. 3B may be useful for supporting various types of software applications. For example, the storage system 306 may be useful in supporting artificial intelligence applications, database applications, DevOps projects, electronic design automation tools, event-driven software applications, high performance computing applications, simulation applications, high-speed data capture and analysis applications, machine learning applications, media production applications, media serving applications, picture archiving and communication systems (‘PACS’) applications, software development applications, and many other types of applications by providing storage resources to such applications.
  • The storage systems described above may operate to support a wide variety of applications. In view of the fact that the storage systems include compute resources, storage resources, and a wide variety of other resources, the storage systems may be well suited to support applications that are resource intensive such as, for example, artificial intelligence applications. Such artificial intelligence applications may enable devices to perceive their environment and take actions that maximize their chance of success at some goal. The storage systems described above may also be well suited to support other types of applications that are resource intensive such as, for example, machine learning applications. Machine learning applications may perform various types of data analysis to automate analytical model building. Using algorithms that iteratively learn from data, machine learning applications can enable computers to learn without being explicitly programmed.
  • In addition to the resources already described, the storage systems described above may also include graphics processing units (‘GPUs’), occasionally referred to as visual processing units (‘VPUs’). Such GPUs may be embodied as specialized electronic circuits that rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Such GPUs may be included within any of the computing devices that are part of the storage systems described above.
  • For further explanation, FIG. 4A sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset (442) that is synchronized across a plurality of storage systems (438, 440) according to some embodiments of the present disclosure. Although depicted in less detail, the storage systems (438, 440) depicted in FIG. 4A may be similar to the storage systems described above with reference to FIGS. 1A-1D, FIGS. 2A-2G, FIGS. 3A-3B, or any combination thereof. In fact, the storage system depicted in FIG. 4A may include the same, fewer, or additional components as the storage systems described above.
  • The dataset (442) depicted in FIG. 4A may be embodied, for example, as the contents of a particular volume, as the contents of a particular shard of a volume, or as any other collection of one or more data elements. The dataset (442) may be synchronized across a plurality of storage systems (438, 440) such that each storage system (438, 440) retains a local copy of the dataset (442). In the examples described herein, such a dataset (442) is synchronously replicated across the storage systems (438, 440) in such a way that the dataset (442) can be accessed through any of the storage systems (438, 440) with performance characteristics such that any one storage system in the cluster doesn't operate substantially more optimally than any other storage system in the cluster, at least as long as the cluster and the particular storage system being accessed are running nominally. In such systems, modifications to the dataset (442) should be made to the copy of the dataset that resides on each storage system (438, 440) in such a way that accessing the dataset (442) on any storage system (438, 440) will yield consistent results. For example, a write request issued to the dataset must be serviced on all storage systems (438, 440) or on none of the storage systems (438, 440) that were running nominally at the beginning of the write and that remained running nominally through completion of the write. Likewise, some groups of operations (e.g., two write operations that are directed to the same location within the dataset) must be executed in the same order, or other steps must be taken as described in greater detail below, on all storage systems (438, 440) such that the dataset is ultimately identical on all storage systems (438, 440). Modifications to the dataset (442) need not be made at the exact same time, but some actions (e.g., issuing an acknowledgement of a write request directed to the dataset, enabling read access to a location within the dataset that is targeted by a write request that has not yet been completed on both storage systems) may be delayed until the copy of the dataset on each storage system (438, 440) has been modified.
  • In the example method depicted in FIG. 4A, the designation of one storage system (440) as the ‘leader’ and another storage system (438) as the ‘follower’ may refer to the respective relationships of each storage system for the purposes of synchronously replicating a particular dataset across the storage systems. In such an example, and as will be described in greater detail below, the leader storage system (440) may be responsible for performing some processing of an incoming I/O operation and passing such information along to the follower storage system (438) or performing other tasks that are not required of the follower storage system (438). The leader storage system (440) may be responsible for performing tasks that are not required of the follower storage system (438) for all incoming I/O operations or, alternatively, the leader-follower relationship may be specific to only a subset of the I/O operations that are received by either storage system. For example, the leader-follower relationship may be specific to I/O operations that are directed towards a first volume, a first group of volumes, a first group of logical addresses, a first group of physical addresses, or some other logical or physical delineator. In such a way, a first storage system may serve as the leader storage system for I/O operations directed to a first set of volumes (or other delineator) while a second storage system may serve as the leader storage system for I/O operations directed to a second set of volumes (or other delineator). The example method depicted in FIG. 4A depicts an embodiment where synchronizing a plurality of storage systems (438, 440) occurs in response to the receipt of a request (404) to modify a dataset (442) by the leader storage system (440), although synchronizing a plurality of storage systems (438, 440) may also be carried out in response to the receipt of a request (404) to modify a dataset (442) by the follower storage system (438), as will be described in greater detail below.
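The per-volume (or other delineator) leader assignment described above can be pictured as a simple lookup, as in the hedged sketch below; the mapping and system names are illustrative assumptions rather than the embodiments' configuration mechanism.

```python
# Illustrative only: which storage system leads for a given volume. A real
# system would derive this from cluster configuration rather than a literal map.
LEADER_FOR_VOLUME = {
    "vol-finance": "storage-system-A",   # system A leads for this volume
    "vol-logs": "storage-system-B",      # system B leads for this volume
}

def leader_for(volume, default="storage-system-A"):
    """Return the storage system acting as leader for requests to `volume`."""
    return LEADER_FOR_VOLUME.get(volume, default)

assert leader_for("vol-logs") == "storage-system-B"
```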
  • The example method depicted in FIG. 4A includes receiving (406), by a leader storage system (440), a request (404) to modify the dataset (442). The request (404) to modify the dataset (442) may be embodied, for example, as a request to write data to a location within the storage system (440) that contains data that is included in the dataset (442), as a request to write data to a volume that contains data that is included in the dataset (442), as a request to take a snapshot of the dataset (442), as a virtual range copy, as an UNMAP operation that essentially represents a deletion of some portion of the data in the dataset (442), as a modifying transformation of the dataset (442) (rather than a change to a portion of data within the dataset), or as some other operation that results in a change to some portion of the data that is included in the dataset (442). In the example method depicted in FIG. 4A, the request (404) to modify the dataset (442) is issued by a host (402) that may be embodied, for example, as an application that is executing on a virtual machine, as an application that is executing on a computing device that is connected to the storage system (440), or as some other entity configured to access the storage system (440).
  • The example method depicted in FIG. 4A also includes generating (408), by the leader storage system (440), information (410) describing the modification to the dataset (442). The leader storage system (440) may generate (408) the information (410) describing the modification to the dataset (442), for example, by determining the ordering of the request relative to any other operations that are in progress, by determining the proper outcome of overlapping modifications (e.g., the appropriate outcome of two requests to modify the same storage location), by calculating any distributed state changes such as to common elements of metadata across all members of the pod (e.g., all storage systems across which the dataset is synchronously replicated), and so on. The information (410) describing the modification to the dataset (442) may be embodied, for example, as system-level information that is used to describe an I/O operation that is to be performed by a storage system. The leader storage system (440) may generate (408) the information (410) describing the modification to the dataset (442) by processing the request (404) to modify the dataset (442) just enough to determine what should happen in order to service the request (404) to modify the dataset (442). For example, the leader storage system (440) may determine whether some ordering of the execution of the request (404) to modify the dataset (442) relative to other requests to modify the dataset (442) is required, or whether some other steps must be taken, as described in greater detail below, to produce an equivalent result on each storage system (438, 440).
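One way to picture the "information describing the modification" is a small record carrying a sequence number, the target range, and the identifiers of operations that must complete first, as in the hedged sketch below; all field names are assumptions, not the embodiments' actual format.

```python
# Illustrative record for the "information describing the modification": a
# sequence number, the target range, and the operations it depends on.
import itertools

_sequence = itertools.count(1)

def describe_modification(request, in_progress_overlapping):
    """Build the description the leader would send to the follower."""
    return {
        "sequence": next(_sequence),
        "operation": request["op"],                       # e.g. "write", "snapshot", "unmap"
        "target": (request["volume"], request["offset"], request["length"]),
        "depends_on": [op["sequence"] for op in in_progress_overlapping],
    }

description = describe_modification(
    {"op": "write", "volume": "v1", "offset": 0, "length": 4096},
    in_progress_overlapping=[{"sequence": 7}],            # must complete before this write
)
```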
  • Consider an example in which the request (404) to modify the dataset (442) is embodied as a request to copy blocks from a first address range in the dataset (442) to a second address range in the dataset (442). In such an example, assume that three other write operations (write A, write B, write C) are directed to the first address range in the dataset (442). In such an example, if the leader storage system (440) services write A and write B (but does not service write C) prior to copying the blocks from the first address range in the dataset (442) to the second address range in the dataset (442), the follower storage system (438) must also service write A and write B (but not write C) prior to copying the blocks from the first address range in the dataset (442) to the second address range in the dataset (442) in order to yield consistent results. As such, when the leader storage system (440) generates (408) the information (410) describing the modification to the dataset (442), in this example, the leader storage system (440) could generate information (e.g., sequence numbers for write A and write B) that identifies other operations that must be completed before the follower storage system (438) can process the request (404) to modify the dataset (442).
  • Consider an additional example in which two requests (e.g., Write A and Write B) are directed to overlapping portions of the dataset (442). In such an example, if the leader storage system (440) services write A and subsequently services write B, while the follower storage system (438) services write B and subsequently services write A, the dataset (442) would not be consistent across both storage systems (438, 440). As such, when the leader storage system (440) generates (408) the information (410) describing the modification to the dataset (442), in this example, the leader storage system (440) could generate information (e.g., sequence numbers for write A and write B) that identifies the order in which the requests should be executed. Alternatively, rather than generating information (410) describing the modification to the dataset (442) which requires intermediate behavior from each storage system (438, 440), the leader storage system (440) may generate (408) information (410) describing the modification to the dataset (442) that includes information that identifies the proper outcome of the two requests. For example, if write B logically follows write A (and overlaps with write A), the end result must be that the dataset (442) includes the parts of write B that overlap with write A, rather than including the parts of write A that overlap with write B. Such an outcome could be facilitated by merging a result in memory and writing the result of such a merge to the dataset (442), rather than strictly requiring that a particular storage system (438, 440) execute write A and then subsequently execute write B. Readers will appreciate that more subtle cases relate to snapshots and virtual address range copies.
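The merge-in-memory alternative described above might look like the following sketch for two byte-range writes where write B logically follows write A; the representation of a write as an offset plus bytes is an assumption, and the sketch presumes the two ranges overlap or abut.

```python
# Illustrative merge of two overlapping byte-range writes, where write B
# logically follows write A; the merged result is what both systems persist.
def merge_overlapping(write_a, write_b):
    start = min(write_a["offset"], write_b["offset"])
    end = max(write_a["offset"] + len(write_a["data"]),
              write_b["offset"] + len(write_b["data"]))
    buf = bytearray(end - start)                # assumes the two ranges overlap or abut
    for w in (write_a, write_b):                # apply A first, then B, so B wins where they overlap
        lo = w["offset"] - start
        buf[lo:lo + len(w["data"])] = w["data"]
    return {"offset": start, "data": bytes(buf)}

merged = merge_overlapping({"offset": 0, "data": b"AAAA"},
                           {"offset": 2, "data": b"BBBB"})
assert merged == {"offset": 0, "data": b"AABBBB"}
```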
  • Readers will further appreciate that correct results for any operation must be committed to the point of being recoverable before the operation can be acknowledged. But, multiple operations can be committed together, or operations can be partially committed if recovery would ensure correctness. For example, a snapshot could locally commit with a recorded dependency on an expected write of A and B, but A or B might not have themselves committed. The snapshot cannot be acknowledged, and recovery might end up backing out the snapshot if the missing I/O cannot be recovered from another array. Also, if write B overlaps with write A, then the leader may “order” B to be after A, but A could actually be discarded and the operation to write A would then simply wait for B. Writes A, B, C, and D, coupled with a snapshot between A,B and C,D could commit and/or acknowledge some or all parts together as long as recovery cannot result in a snapshot inconsistency across arrays and as long as acknowledgement does not complete a later operation before an earlier operation has been persisted to the point that it is guaranteed to be recoverable.
  • The example method depicted in FIG. 4A also includes sending (412), from the leader storage system (440) to a follower storage system (438), information (410) describing the modification to the dataset (442). Sending (412) information (410) describing the modification to the dataset (442) from the leader storage system (440) to a follower storage system (438) may be carried out, for example, by the leader storage system (440) sending one or more messages to the follower storage system (438). The leader storage system (440) may also send, in the same messages or in one or more different messages, I/O payload (414) for the request (404) to modify the dataset (442). The I/O payload (414) may be embodied, for example, as data that is to be written to storage within the follower storage system (438) when the request (404) to modify the dataset (442) is embodied as a request to write data to the dataset (442). In such an example, because the request (404) to modify the dataset (442) was received (406) by the leader storage system (440), the follower storage system (438) has not received the I/O payload (414) associated with the request (404) to modify the dataset (442). In the example method depicted in FIG. 4A, the information (410) describing the modification to the dataset (442) and the I/O payload (414) that is associated with the request (404) to modify the dataset (442) may be sent (412) from the leader storage system (440) to the follower storage system (438) via one or more data communications networks that couple the leader storage system (440) to the follower storage system (438), via one or more dedicated data communications links (e.g., a first link for sending I/O payload and a second link for sending information describing modifications to datasets) that couples the leader storage system (440) to the follower storage system (438), or via some other mechanism.
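As a minimal sketch of the sending step, the description and I/O payload are bundled into a message and handed to whatever transport couples the two systems; an in-memory queue stands in for the data communications link here, which is purely an assumption for illustration.

```python
# Minimal sketch: bundle the description and payload and hand them to the
# transport. An in-memory queue stands in for the actual data communications link.
from queue import Queue

link_to_follower = Queue()

def send_to_follower(description, payload: bytes):
    """Ship the modification description and its I/O payload to the follower."""
    # They could equally travel in separate messages or over separate links.
    link_to_follower.put({"description": description, "payload": payload})

send_to_follower({"sequence": 8, "operation": "write", "target": ("v1", 0, 4),
                  "depends_on": []}, b"DATA")
```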
  • The example method depicted in FIG. 4A also includes receiving (416), by the follower storage system (438), the information (410) describing the modification to the dataset (442). The follower storage system (438) may receive (416) the information (410) describing the modification to the dataset (442) and I/O payload (414) from the leader storage system (440), for example, via one or more messages that are sent from the leader storage system (440) to the follower storage system (438). The one or more messages may be sent from the leader storage system (440) to the follower storage system (438) via one or more dedicated data communications links between the two storage systems (438, 440), by the leader storage system (440) writing the message to a predetermined memory location (e.g., the location of a queue) on the follower storage system (438) using RDMA or a similar mechanism, or in other ways.
  • In one embodiment, the follower storage system (438) may receive (416) the information (410) describing the modification to the dataset (442) and I/O payload (414) from the leader storage system (440) through the use of SCSI requests (writes from sender to receiver, or reads from receiver to sender) as a communication mechanism. In such an embodiment, a SCSI Write request is used to encode information that is intended to be sent (which includes whatever data and metadata), and which may be delivered to a special pseudo-device or over a specially configured SCSI network, or through any other agreed upon addressing mechanism. Or, alternatively, the model can issue a set of open SCSI read requests from a receiver to a sender, also using special devices, specially configured SCSI networks, or other agreed upon mechanisms. Encoded information including data and metadata will be delivered to the receiver as a response to one or more of these open SCSI requests. Such a model can be implemented over Fibre Channel SCSI networks, which are often deployed as the “dark fibre” storage network infrastructure between data centers. Such a model also allows the use of the same network lines for host-to-remote-array multipathing and bulk array-to-array communications.
  • The example method depicted in FIG. 4A also includes processing (418), by the follower storage system (438), the request (404) to modify the dataset (442). In the example method depicted in FIG. 4A, the follower storage system (438) may process (418) the request (404) to modify the dataset (442) by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the follower storage system (438) in dependence upon the information (410) describing the modification to the dataset (442) as well as the I/O payload (414) that was received from the leader storage system (440). Consider an example in which the request (404) to modify the dataset (442) is embodied as a write operation that is directed to a volume that is included in the dataset (442) and the information (410) describing the modification to the dataset (442) indicates that the write operation can only be executed after a previously issued write operation has been processed. In such an example, processing (418) the request (404) to modify the dataset (442) may be carried out by the follower storage system (438) first verifying that the previously issued write operation has been processed on the follower storage system (438) and subsequently writing I/O payload (414) associated with the write operation to one or more storage devices that are included in the follower storage system (438). In such an example, the request (404) to modify the dataset (442) may be considered to have been completed and successfully processed, for example, when the I/O payload (414) has been committed to persistent storage within the follower storage system (438).
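Follower-side processing along the lines described above could be sketched as: check that every operation the description depends on has already completed locally, commit the payload, and only then report completion. Everything below (field names, the in-memory "persistent" store, the deferred status) is an illustrative assumption.

```python
# Illustrative follower-side processing: honor dependencies, commit the payload,
# then acknowledge. The in-memory dict stands in for durable storage devices.
completed_sequences = set()
persistent_store = {}

def follower_process(message):
    description, payload = message["description"], message["payload"]
    # Verify the previously required operations have already been processed here.
    missing = [s for s in description.get("depends_on", []) if s not in completed_sequences]
    if missing:
        return {"status": "deferred", "waiting_on": missing}
    volume, offset, _length = description["target"]
    persistent_store[(volume, offset)] = payload            # commit to persistent storage
    completed_sequences.add(description["sequence"])
    return {"status": "ack", "sequence": description["sequence"]}   # acknowledgment for the leader

reply = follower_process({"description": {"sequence": 8, "depends_on": [],
                                          "target": ("v1", 0, 4)},
                          "payload": b"DATA"})
assert reply["status"] == "ack"
```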
  • The example method depicted in FIG. 4A also includes acknowledging (420), by the follower storage system (438) to the leader storage system (440), completion of the request (404) to modify the dataset (442). In the example method depicted in FIG. 4A, acknowledging (420), by the follower storage system (438) to the leader storage system (440), completion of the request (404) to modify the dataset (442) may be carried out by the follower storage system (438) sending an acknowledgment (422) message to the leader storage system (440). Such messages may include, for example, information identifying the particular request (404) to modify the dataset (442) that was completed as well as any additional information useful in acknowledging (420) the completion of the request (404) to modify the dataset (442) by the follower storage system (438). In the example method depicted in FIG. 4A, acknowledging (420) completion of the request (404) to modify the dataset (442) to the leader storage system (440) is illustrated by the follower storage system (438) issuing an acknowledgment (422) message to the leader storage system (440).
  • The example method depicted in FIG. 4A also includes processing (424), by the leader storage system (440), the request (404) to modify the dataset (442). In the example method depicted in FIG. 4A, the leader storage system (440) may process (424) the request (404) to modify the dataset (442) by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the leader storage system (440) in dependence upon the information (410) describing the modification to the dataset (442) as well as the I/O payload (414) that was received as part of the request (404) to modify the dataset (442). Consider an example in which the request (404) to modify the dataset (442) is embodied as a write operation that is directed to a volume that is included in the dataset (442) and the information (410) describing the modification to the dataset (442) indicates that the write operation can only be executed after a previously issued write operation has been processed. In such an example, processing (424) the request (404) to modify the dataset (442) may be carried out by the leader storage system (440) first verifying that the previously issued write operation has been processed by the leader storage system (440) and subsequently writing I/O payload (414) associated with the write operation to one or more storage devices that are included in the leader storage system (440). In such an example, the request (404) to modify the dataset (442) may be considered to have been completed and successfully processed, for example, when the I/O payload (414) has been committed to persistent storage within the leader storage system (440).
  • The example method depicted in FIG. 4A also includes receiving (426), from the follower storage system (438), an indication that the follower storage system (438) has processed the request (404) to modify the dataset (442). In this example, the indication that the follower storage system (438) has processed the request (404) to modify the dataset (442) is embodied as an acknowledgement (422) message sent from the follower storage system (438) to the leader storage system (440). Readers will appreciate that although many of the steps described above are depicted and described as occurring in a particular order, no particular order is actually required. In fact, because the follower storage system (438) and the leader storage system (440) are independent storage systems, each storage system may be performing some of the steps described above in parallel. For example, the follower storage system (438) may receive (416) the information (410) describing the modification to the dataset (442), process (418) the request (404) to modify the dataset (442), or acknowledge (420) completion of the request (404) to modify the dataset (442) before the leader storage system (440) has processed (424) the request (404) to modify the dataset (442). Alternatively, the leader storage system (440) may have processed (424) the request (404) to modify the dataset (442) before the follower storage system (438) has received (416) the information (410) describing the modification to the dataset (442), processed (418) the request (404) to modify the dataset (442), or acknowledged (420) completion of the request (404) to modify the dataset (442).
  • The example method depicted in FIG. 4A also includes acknowledging (434), by the leader storage system (440), completion of the request (404) to modify the dataset (442). In the example method depicted in FIG. 4A, acknowledging (434) completion of the request (404) to modify the dataset (442) may be carried out through the use of one or more acknowledgement (436) messages that are sent from the leader storage system (440) to the host (402) or via some other appropriate mechanism. In the example method depicted in FIG. 4A, the leader storage system (440) may determine (428) whether the request (404) to modify the dataset (442) has been processed (418) by the follower storage system (438) prior to acknowledging (434) completion of the request (404) to modify the dataset (442). The leader storage system (440) may determine (428) whether the request (404) to modify the dataset (442) has been processed (418) by the follower storage system (438), for example, by determining whether the leader storage system (440) has received an acknowledgment message or other message from the follower storage system (438) indicating that the request (404) to modify the dataset (442) has been processed (418) by the follower storage system (438). In such an example, if the leader storage system (440) affirmatively (430) determines that the request (404) to modify the dataset (442) has been processed (418) by the follower storage system (438) and also processed (424) by the leader storage system (440), the leader storage system (440) may proceed by acknowledging (434) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442). If the leader storage system (440) determines that the request (404) to modify the dataset (442) has not (432) been processed (418) by the follower storage system (438) or has not been processed (424) by the leader storage system (440), however, the leader storage system (440) may not yet acknowledge (434) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442), as the leader storage system (440) may only acknowledge (434) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442) when the request (404) to modify the dataset (442) has been successfully processed on all storage systems (438, 440) across which a dataset (442) is synchronously replicated.
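  • The following Python sketch is purely illustrative of the leader-side flow described above for FIG. 4A and is not the patented implementation; the names (InMemoryStore, service_write_as_leader) are assumptions introduced for this example, and messaging between the leader and the follower is collapsed into direct function calls.
    from dataclasses import dataclass, field

    @dataclass
    class InMemoryStore:
        blocks: dict = field(default_factory=dict)
        def commit(self, volume, offset, payload):
            # Stands in for committing the I/O payload to persistent storage.
            self.blocks[(volume, offset)] = payload

    def service_write_as_leader(request, leader_store, follower_store, follower_acks):
        # 1. Generate information describing the modification (ordering and common
        #    metadata would be attached here) and "send" it, with the payload, to the follower.
        description = {"request_id": request["request_id"],
                       "volume": request["volume"],
                       "offset": request["offset"]}
        # 2. The follower processes the request and acknowledges to the leader
        #    (modeled here as a direct call; in practice this is messaging).
        follower_store.commit(description["volume"], description["offset"], request["payload"])
        follower_acks.add(request["request_id"])
        # 3. The leader processes the request against its own copy of the dataset.
        leader_store.commit(description["volume"], description["offset"], request["payload"])
        # 4. Acknowledge to the host only once both storage systems have processed the request.
        return request["request_id"] in follower_acks

    request = {"request_id": 404, "volume": "vol1", "offset": 0, "payload": b"data"}
    assert service_write_as_leader(request, InMemoryStore(), InMemoryStore(), set())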
  • Readers will appreciate that in the example method depicted in FIG. 4A, sending (412), from the leader storage system (440) to a follower storage system (438), information (410) describing the modification to the dataset (442) and acknowledging (420), by the follower storage system (438) to the leader storage system (440), completion of the request (404) to modify the dataset (442) can be carried out using single roundtrip messaging. Single roundtrip messaging may be used, for example, through the use of Fibre Channel as a data interconnect. Typically, SCSI protocols are used with Fibre Channel. Such interconnects are commonly provisioned between data centers because some older replication technologies may be built to essentially replicate data as SCSI transactions over Fibre Channel networks. Also, historically Fibre Channel SCSI infrastructure had less overhead and lower latencies than networks based on Ethernet and TCP/IP. Further, when data centers are internally connected to block storage arrays using Fibre Channel, the Fibre Channel networks may be stretched to other data centers so that hosts in one data center can switch to accessing storage arrays in a remote data center when local storage arrays fail.
  • SCSI could be used as a general communication mechanism, even though it is normally designed for use with block storage protocols for storing and retrieving data in block-oriented volumes (or for tape). For example, SCSI READ or SCSI WRITE could be used to deliver or retrieve message data between storage controllers in paired storage systems. A typical implementation of SCSI WRITE requires two message round trips: a SCSI initiator sends a SCSI CDB describing the SCSI WRITE operation, and the SCSI target receives that CDB and sends a “Ready to Receive” message back to the SCSI initiator. The SCSI initiator then sends data to the SCSI target and, when the SCSI WRITE is complete, the SCSI target responds to the SCSI initiator with a Success completion. A SCSI READ request, on the other hand, requires only one round trip: the SCSI initiator sends a SCSI CDB describing the SCSI READ operation, and the SCSI target receives that CDB and responds with data followed by a Success completion. As a result, over distance, a SCSI READ incurs half the distance-related latency of a SCSI WRITE. Because of this, it may be faster for a data communications receiver to use SCSI READ requests to receive messages than for a sender of messages to use SCSI WRITE requests to send data. Using SCSI READ in this way simply requires a message sender to operate as a SCSI target and a message receiver to operate as a SCSI initiator. A message receiver may send some number of SCSI CDB READ requests to any message sender, and the message sender would respond to one of the outstanding CDB READ requests when message data is available. Since SCSI subsystems may time out if a READ request is outstanding for too long (e.g., 10 seconds), READ requests should be responded to within a few seconds even if there is no message data to be sent.
  • SCSI tape requests, as described in the SCSI Stream Commands standard from the T10 Technical Committee of the InterNational Committee for Information Technology Standards, support variable response data, which can be more flexible for returning variable-sized message data. The SCSI standard also supports an Immediate mode for SCSI WRITE requests, which could allow single-round-trip SCSI WRITE commands. Readers will appreciate that many of the embodiments described below also utilize single roundtrip messaging.
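  • The Python sketch below is a minimal, purely illustrative simulation of the read-based messaging pattern described above; no real SCSI stack is used, and in-process queues stand in for outstanding READ requests and their responses. The class and method names (ReadChannelSender, post_read, keepalive) are assumptions introduced for this example. The receiver plays the SCSI-initiator role by keeping READs outstanding, and the sender plays the SCSI-target role by answering an outstanding READ when message data is available or with an empty keepalive before the initiator's timeout.
    import queue

    class ReadChannelSender:
        """Plays the SCSI-target role: holds outstanding READs and answers them."""
        def __init__(self):
            self.outstanding_reads = queue.Queue()   # READ requests waiting for data
            self.pending_messages = queue.Queue()    # messages waiting for a READ

        def post_read(self, response_queue):
            # Posted on behalf of the receiver/initiator: one outstanding READ.
            self.outstanding_reads.put(response_queue)
            self._try_deliver()

        def send_message(self, data):
            self.pending_messages.put(data)
            self._try_deliver()

        def keepalive(self):
            # Answer an outstanding READ with no payload so it does not time out.
            try:
                self.outstanding_reads.get_nowait().put(b"")
            except queue.Empty:
                pass

        def _try_deliver(self):
            while not self.outstanding_reads.empty() and not self.pending_messages.empty():
                self.outstanding_reads.get().put(self.pending_messages.get())

    # Receiver side: post a READ, then wait (with a timeout shorter than the
    # SCSI subsystem's) for the sender to answer it with message data.
    sender = ReadChannelSender()
    response = queue.Queue()
    sender.post_read(response)
    sender.send_message(b"descriptor for request 404")
    print(response.get(timeout=5))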
  • For further explanation, FIG. 4B sets forth a flow chart illustrating an additional example method for servicing I/O operations directed to a dataset (442) that is synchronized across a plurality of storage systems (438, 440, 450) according to some embodiments of the present disclosure. Although depicted in less detail, the storage systems (438, 440, 450) depicted in FIG. 4B may be similar to the storage systems described above with reference to FIGS. 1A-1D, FIGS. 2A-2G, FIGS. 3A-3B, or any combination thereof. In fact, the storage systems depicted in FIG. 4B may include the same, fewer, or additional components as the storage systems described above. The example method depicted in FIG. 4B is similar to the example method depicted in FIG. 4A, as the example method depicted in FIG. 4B also includes: receiving (406), by a leader storage system (440), a request (404) to modify the dataset (442); generating (408), by the leader storage system (440), information (410) describing the modification to the dataset (442); sending (412), from the leader storage system (440) to a follower storage system (438), information (410) describing the modification to the dataset (442); receiving (416), by the follower storage system (438), the information (410) describing the modification to the dataset (442); processing (418), by the follower storage system (438), the request (404) to modify the dataset (442); acknowledging (420), by the follower storage system (438) to the leader storage system (440), completion of the request (404) to modify the dataset (442); processing (424), by the leader storage system (440), the request (404) to modify the dataset (442); and acknowledging (434), by the leader storage system (440), completion of the request (404) to modify the dataset (442).
  • The example method depicted in FIG. 4B differs from the example method depicted in FIG. 4A, however, as the example method depicted in FIG. 4B depicts an embodiment in which the dataset (442) is synchronously replicated across three storage systems, where one of the storage systems is a leader storage system (440) and the remaining storage systems are follower storage systems (438, 450). In such an example, the additional follower storage system (450) carries out many of the same steps as the follower storage system (438) that was depicted in FIG. 4A, as the additional follower storage system (450) can: receive (442), from the leader storage system (440), information (410) describing the modification to the dataset (442); process (444) the request (404) to modify the dataset (442) in dependence upon the information (410) describing the modification to the dataset (442); acknowledge (446), to the leader storage system (440), completion of the request (404) to modify the dataset (442) through the use of an acknowledgement (448) message or other appropriate mechanism; and so on.
  • In the example method depicted in FIG. 4B, the information (410) describing the modification to the dataset (442) can include ordering information (452) for the request (404) to modify the dataset (442). In the example method depicted in FIG. 4B, the ordering information (452) for the request (404) to modify the dataset (442) can represent descriptions of relationships between operations (e.g., requests to modify the dataset) and common metadata updates that can be described by the leader storage system (440) as a set of interdependencies between separate requests to modify the dataset and possibly between requests to modify the dataset and various metadata changes. These interdependencies can be described as a set of precursors that a given request to modify the dataset depends on in some way, that is, as predicates that must be true before that request to modify the dataset can complete.
  • A queue predicate is one example of a predicate that must be true before a request to modify the dataset can complete. A queue predicate can stipulate that a particular request to modify the dataset cannot complete until a previous request to modify the dataset completes. Queue predicates can be used, for example, for overlapping write-type operations. In such an example, the leader storage system (440) can declare that a second write-type operation logically follows a first such operation, so the second write-type operation cannot complete until the first write-type operation completes. Depending on the implementation, the second write-type operation may not even be made durable until it is ensured that the first such write-type operation is durable (the two operations can be made durable together). Queue predicates could also be used for snapshot operations and virtual block range copy operations, by declaring that a known set of incomplete precursor (e.g., a set of write-type) operations must each complete before a snapshot can complete, and as further operations are identified as following the snapshot (prior to the snapshot being complete) each of these operations can be predicated on the snapshot operation itself completing. This predicate could also indicate that those following operations apply to the post-snapshot image of a volume rather than being included in the snapshot.
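  • A minimal Python sketch of the queue predicate idea described above follows; it is illustrative only, and the names (QueuePredicate, try_complete, completed_ops) are assumptions rather than terms from the disclosure. It shows a second, overlapping write being held back until the first write it is predicated on has completed.
    completed_ops = set()

    class QueuePredicate:
        """The operation carrying this predicate may not complete until its precursor has."""
        def __init__(self, precursor_op_id):
            self.precursor_op_id = precursor_op_id
        def satisfied(self):
            return self.precursor_op_id in completed_ops

    def try_complete(op_id, predicate=None):
        # An operation predicated on an earlier operation stays queued until
        # that precursor has completed.
        if predicate is not None and not predicate.satisfied():
            return False
        completed_ops.add(op_id)
        return True

    # Write B overlaps write A, so the leader predicates B on A:
    assert not try_complete(op_id=2, predicate=QueuePredicate(precursor_op_id=1))
    assert try_complete(op_id=1)                                              # write A completes first
    assert try_complete(op_id=2, predicate=QueuePredicate(precursor_op_id=1)) # now write B can complete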
  • An alternative predicate that could be used for snapshots is to assign an identifier to every snapshot, and to associate all modifying operations that can be included in a particular snapshot with that identifier. Then, the snapshot can complete when all of the included modifying operations complete. This can be done with a counting predicate. Each storage system across which a dataset is synchronously replicated can implement its own count of operations associated with time since the last snapshot or since some other relatively infrequent operation (or for embodiments that implement multiple leader storage systems, with those operations organized by a particular leader storage system, a count can be established by that leader storage system for the parts of a dataset it controls). The snapshot operation itself can then include a counting predicate that depends on that number of operations being received and made durable before the snapshot can itself be made durable or be signaled as completed. Modifying operations that should follow the snapshot (prior to the snapshot completing) can either be delayed, given a queue predicate dependent on the snapshot, or the snapshot identity can be used as an indication that the modifying operation should be excluded from the snapshot. Virtual block range copies (SCSI EXTENDED COPY or similar operations) could use queue predicates or they could use counting predicates and snapshot or similar identifiers. With counting predicates and snapshot or virtual copy identifiers, each virtual block range copy might establish a new virtual snapshot or virtual copy identifier, even if the copy operation only covers two small regions of one or two volumes. In the examples described above, the request (404) to modify the dataset (442) can include a request to take a snapshot of the dataset (442) and the ordering information (452) for the request (404) to modify the dataset (442) can therefore include an identification of one or more other requests to modify the dataset that must be completed prior to taking the snapshot of the dataset (442).
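  • A minimal Python sketch of the counting predicate idea for snapshots described above follows; it is illustrative only, and the names (SnapshotCountingPredicate, operation_durable, snapshot_can_complete) are assumptions introduced for this example. Operations tagged with the current snapshot identifier are counted as they become durable, and the snapshot may complete only once the expected number of included operations is durable; operations tagged with a later identifier are excluded.
    class SnapshotCountingPredicate:
        def __init__(self, snapshot_id, expected_count):
            self.snapshot_id = snapshot_id
            self.expected_count = expected_count   # modifying operations that must precede the snapshot
            self.durable_count = 0

        def operation_durable(self, op_snapshot_id):
            # Count only operations tagged as belonging to this snapshot.
            if op_snapshot_id == self.snapshot_id:
                self.durable_count += 1

        def snapshot_can_complete(self):
            return self.durable_count >= self.expected_count

    predicate = SnapshotCountingPredicate(snapshot_id=7, expected_count=2)
    predicate.operation_durable(op_snapshot_id=7)   # write A, included in snapshot 7
    predicate.operation_durable(op_snapshot_id=8)   # write issued after the snapshot point, excluded
    assert not predicate.snapshot_can_complete()
    predicate.operation_durable(op_snapshot_id=7)   # write B, included in snapshot 7
    assert predicate.snapshot_can_complete()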
  • In the example method depicted in FIG. 4B, the information (410) describing the modification to the dataset (442) can include common metadata information (454) associated with the request (404) to modify the dataset (442). The common metadata information (454) associated with the request (404) to modify the dataset (442) may be used to ensure that common metadata associated with the dataset (442) remains consistent across the storage systems (438, 440, 450) that the dataset (442) is synchronously replicated across. Common metadata in this context may be embodied, for example, as any data other than the content stored into the dataset (442) by one or more requests (e.g., one or more write requests issued by a host). The common metadata may include data that a synchronous replication implementation keeps in some way consistent across storage systems (438, 440, 450) that a dataset (442) is synchronously replicated across, particularly if that common metadata relates to how the stored content is managed, recovered, resynchronized, snapshotted, or asynchronously replicated. Readers will appreciate that two or more modifying operations may depend on the same common metadata, where ordering of the modifying operations themselves is unnecessary, but consistent application of the common metadata once rather than twice is necessary. One way to handle multiple dependence on common metadata is to define the metadata in a separate operation instantiated and described from a leader storage system. Then, two modifying operations that depend on that common metadata can each be given a queue predicate that depends on that separate metadata operation. Another way to handle multiple dependence on common metadata is to associate the common metadata with a first of two operations, and make the second operation depend on the first. A variation makes the second operation dependent only on the common metadata aspects of the first, such that only that part of the first operation has to be made durable before the second operation can be processed. Yet another way of handling multiple dependence on common metadata is to include the common metadata in all operation descriptions that depend on that common metadata. This works well if applying the common metadata can be idempotent, for example, simply by attaching an identifier to the common metadata. If that identifier has already been processed, it can be ignored. In some cases, identifiers might be associated with parts of the common metadata.
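  • The following Python sketch illustrates only the last, idempotent approach described above (carrying identified common metadata with every dependent operation and applying it at most once); it is not the patented implementation, and the names (apply_common_metadata, MetadataStore, applied_metadata_ids) are assumptions introduced for this example.
    applied_metadata_ids = set()

    class MetadataStore:
        def __init__(self):
            self.state = {}
        def update(self, metadata):
            self.state.update(metadata)

    def apply_common_metadata(metadata_id, metadata, metadata_store):
        # If this identifier has already been processed, the update is ignored,
        # so including the same metadata with several operations is harmless.
        if metadata_id in applied_metadata_ids:
            return
        metadata_store.update(metadata)
        applied_metadata_ids.add(metadata_id)

    store = MetadataStore()
    apply_common_metadata(1, {"volume_generation": 12}, store)
    apply_common_metadata(1, {"volume_generation": 12}, store)   # second delivery is ignored
    assert store.state == {"volume_generation": 12}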
  • In the example method depicted in FIG. 4B, receiving (426) an indication that the follower storage system has processed the request (404) to modify the dataset (442) can include receiving (456), from each of the follower storage systems (438, 450), an indication that the follower storage system (438, 450) has processed the request (404) to modify the dataset (442). In this example, the indication that each follower storage system (438, 450) has processed the request (404) to modify the dataset (442) is embodied as distinct acknowledgement (422, 448) messages sent from each follower storage system (438, 450) to the leader storage system (440). Readers will appreciate that although many of the steps described above are depicted and described as occurring in a particular order, no particular order is actually required. In fact, because the follower storage systems (438, 450) and the leader storage system (440) are independent storage systems, each storage system may be performing some of the steps described above in parallel. For example, one or more of the follower storage systems (438, 450) may receive (416, 442) the information (410) describing the modification to the dataset (442), process (418, 444) the request (404) to modify the dataset (442), or acknowledge (420, 446) completion of the request (404) to modify the dataset (442) before the leader storage system (440) has processed (424) the request (404) to modify the dataset (442). Alternatively, the leader storage system (440) may have processed (424) the request (404) to modify the dataset (442) before one or more of the follower storage systems (438, 450) have received (416, 442) the information (410) describing the modification to the dataset (442), processed (418, 444) the request (404) to modify the dataset (442), or acknowledged (420, 446) completion of the request (404) to modify the dataset (442).
  • The example method depicted in FIG. 4B also includes determining (458), by the leader storage system (440), whether the request (404) to modify the dataset (442) has been processed (418, 444) by each of the follower storage systems (438, 450) prior to acknowledging (434) completion of the request (404) to modify the dataset (442). The leader storage system (440) may determine (458) whether the request (404) to modify the dataset (442) has been processed (418, 444) by each of the follower storage systems (438, 450), for example, by determining whether the leader storage system (440) has received acknowledgment messages or other messages from each of the follower storage systems (438, 450) indicating that the request (404) to modify the dataset (442) has been processed (418, 444) by each of the follower storage systems (438, 450). In such an example, if the leader storage system (440) affirmatively (462) determines that the request (404) to modify the dataset (442) has been processed (418, 444) by each of the follower storage systems (438, 450) and also processed (424) by the leader storage system (440), the leader storage system (440) may proceed by acknowledging (434) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442). If the leader storage system (440) determines that the request (404) to modify the dataset (442) has not (460) been processed (418, 444) by at least one of the follower storage systems (438, 450) or has not been processed (424) by the leader storage system (440), however, the leader storage system (440) may not yet acknowledge (434) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442), as the leader storage system (440) may only acknowledge (434) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442) when the request (404) to modify the dataset (442) has been successfully processed on all storage systems (438, 440, 450) across which a dataset (442) is synchronously replicated.
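  • A minimal Python sketch of the acknowledgement gate described above follows; it is illustrative only, and the names (ready_to_acknowledge, follower identifiers such as "follower-438") are assumptions introduced for this example. The leader acknowledges the host only after it has processed the request itself and every follower has acknowledged processing it.
    def ready_to_acknowledge(request_id, leader_processed, follower_acks, followers):
        # follower_acks maps request_id -> set of follower identifiers that have
        # acknowledged processing the request.
        acked = follower_acks.get(request_id, set())
        return leader_processed and acked == set(followers)

    followers = {"follower-438", "follower-450"}
    acks = {404: {"follower-438"}}
    assert not ready_to_acknowledge(404, True, acks, followers)   # one follower still outstanding
    acks[404].add("follower-450")
    assert ready_to_acknowledge(404, True, acks, followers)       # now the host can be acknowledged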
  • Readers will appreciate that although the example method depicted in FIG. 4B depicts an embodiment in which the dataset (442) is synchronously replicated across three storage systems, where one of the storage systems is a leader storage system (440) and the remaining storage systems are follower storage systems (438, 450), other embodiments may include additional storage systems. In such other embodiments, additional follower storage systems may operate in the same way as the follower storage systems (438, 450) depicted in FIG. 4B.
  • For further explanation, FIG. 5A sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset (442) that is synchronized across a plurality of storage systems (438, 440) according to some embodiments of the present disclosure. Although depicted in less detail, the storage systems (438, 440) depicted in FIG. 5A may be similar to the storage systems described above with reference to FIGS. 1A-1D, FIGS. 2A-2G, FIGS. 3A-3B, or any combination thereof. In fact, the storage systems depicted in FIG. 5A may include the same, fewer, or additional components as the storage systems described above.
  • The example method depicted in FIG. 5A includes receiving (502), by a follower storage system (438), a request (404) to modify the dataset (442). The request (404) to modify the dataset (442) may be embodied, for example, as a request to write data to a location within the storage system (438) that contains data that is included in the dataset (442), as a request to write data to a volume that contains data that is included in the dataset (442), or as some other operation that results in a change to some portion of the data that is included in the dataset (442). In the example method depicted in FIG. 5A, the request (404) to modify the dataset (442) is issued by a host (402) that may be embodied, for example, as an application that is executing on a virtual machine, as an application that is executing on a computing device that is connected to the storage system (438), or as some other entity configured to access the storage system (438).
  • The example method depicted in FIG. 5A also includes sending (504), from the follower storage system (438) to a leader storage system (440), a logical description (506) of the request (404) to modify the dataset (442). In the example method depicted in FIG. 5A, the logical description (506) of the request (404) to modify the dataset (442) may be formatted in a way that is understood by the leader storage system (440) and may contain information describing the type of operation (e.g., a write-type operation, a snapshot-type operation) requested in the request (404) to modify the dataset (442), information describing a location where I/O payload is being placed, information describing the size of the I/O payload, or some other information. In an alternative embodiment, the follower storage system (438) may simply forward some portion (or all) of the request (404) to modify the dataset (442) to the leader storage system (440).
  • The example method depicted in FIG. 5A also includes generating (508), by the leader storage system (440), information (510) describing the modification to the dataset (442). The leader storage system (440) may generate (508) the information (510) describing the modification to the dataset (442), for example, by determining ordering relative to any other operations that are in progress, calculating any distributed state changes such as to common elements of metadata across all members of the pod (e.g., all storage systems across which the dataset is synchronously replicated), and so on. The information (510) describing the modification to the dataset (442) may be embodied, for example, as system-level information that is used to describe an I/O operation that is to be performed by a storage system. The leader storage system (440) may generate (508) the information (510) describing the modification to the dataset (442) by processing the request (404) to modify the dataset (442) just enough to figure out what should happen in order to service the request (404) to modify the dataset (442). For example, the leader storage system (440) may determine whether some ordering of the execution of the request (404) to modify the dataset (442) relative to other requests to modify the dataset (442) is required to produce an equivalent result on each storage system (438, 440).
  • Consider an example in which the request (404) to modify the dataset (442) is embodied as a request to copy blocks from a first address range in the dataset (442) to a second address range in the dataset (442). In such an example, assume that three other write operations (write A, write B, write C) are directed to the first address range in the dataset (442). In such an example, if the leader storage system (440) orders write A and write B (but does not order write C) prior to copying the blocks from the first address range in the dataset (442) to the second address range in the dataset (442), the follower storage system (438) must also order write A and write B (but not order write C) prior to copying the blocks from the first address range in the dataset (442) to the second address range in the dataset (442) in order to yield consistent results. As such, when the leader storage system (440) generates (508) the information (510) describing the modification to the dataset (442), in this example, the leader storage system (440) could generate information (e.g., sequence numbers for write A and write B) that identifies other operations that must be ordered before the follower storage system (438) can process the request (404) to modify the dataset (442).
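  • The following Python sketch illustrates how a leader might encode the ordering decision in the block-range-copy example above; it is purely illustrative, and the names (describe_copy, must_follow) and the sequence numbers assigned to write A and write B are assumptions introduced for this example. The descriptor sent to the followers lists only the operations that must be applied before the copy so that every storage system produces the same image.
    def describe_copy(copy_request_id, source_range, target_range, ordered_before):
        # ordered_before lists the sequence numbers of the writes (here, write A and
        # write B, but not write C) that each follower must apply before the copy.
        return {
            "request_id": copy_request_id,
            "type": "block_range_copy",
            "source_range": source_range,
            "target_range": target_range,
            "must_follow": list(ordered_before),
        }

    descriptor = describe_copy(
        copy_request_id=17,
        source_range=(0, 4096),
        target_range=(8192, 12288),
        ordered_before=[11, 12],     # hypothetical sequence numbers for write A and write B
    )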
  • Readers will further appreciate that correct results for any operation must be committed to the point of being recoverable before the operation can be acknowledged. But, multiple operations can be committed together, or operations can be partially committed if recovery would ensure correctness. For example, a snapshot could locally commit with a recorded dependency on an expected write of A and B, but A or B might not have themselves committed. The snapshot cannot be acknowledged, and recovery might end up backing out the snapshot if the missing I/O cannot be recovered from another array. Also, if write B overlaps with write A, then the leader may “order” B to be after A, but A could actually be discarded and the operation to write A would then simply wait for B. Writes A, B, C, and D, coupled with a snapshot between A,B and C,D could commit and/or acknowledge some or all parts together as long as recovery cannot result in a snapshot inconsistency across arrays and as long as acknowledgement does not complete a later operation before an earlier operation has been persisted to the point that it is guaranteed to be recoverable.
  • The example method depicted in FIG. 5A also includes sending (512), from the leader storage system (440) to the follower storage system (438), the information (510) describing the modification to the dataset (442). Sending (512) the information (510) describing the modification to the dataset (442) from the leader storage system (440) to a follower storage system (438) may be carried out, for example, by the leader storage system (440) sending one or more messages to the follower storage system (438). The leader storage system (440) may not need to send I/O payload for the request (404) to modify the dataset (442), however, in view of the fact that the follower storage system (438) was the original recipient of the request (404) to modify the dataset (442). As such, the follower storage system (438) may extract the I/O payload from the request (404) to modify the dataset (442), the follower storage system (438) may receive the I/O payload as part of one or more other messages associated with the request (404) to modify the dataset (442), the follower storage system (438) may have access to the I/O payload as the I/O payload may have been stored by the host (402) in a known location (e.g., a buffer in the follower storage system (438) that was accessed via an RDMA or RDMA-like access), or in some other way.
  • The example method depicted in FIG. 5A also includes processing (518), by the leader storage system (440), the request (404) to modify the dataset (442). In the example method depicted in FIG. 5A, the leader storage system (440) may process (518) the request (404) to modify the dataset (442), for example, by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the leader storage system (440) in dependence upon the information (510) describing the modification to the dataset (442) as well as the I/O payload that was received from the follower storage system (438). Consider an example in which the request (404) to modify the dataset (442) is embodied as a write operation that is directed to a volume that is included in the dataset (442) and the information (510) describing the modification to the dataset (442) indicates that the write operation can only be executed after a previously issued write operation has been processed. In such an example, processing (518) the request (404) to modify the dataset (442) may be carried out by the leader storage system (440) first verifying that the previously issued write operation has been processed on the leader storage system (440) and subsequently writing I/O payload associated with the write operation to one or more storage devices that are included in the leader storage system (440). In such an example, the request (404) to modify the dataset (442) may be considered to have been completed and successfully processed, for example, when the I/O payload has been committed to persistent storage within the leader storage system (440).
  • The example method depicted in FIG. 5A also includes acknowledging (520), by the leader storage system (440) to the follower storage system (438), completion of the request (404) to modify the dataset (442). In the example method depicted in FIG. 5A, the leader storage system (440) may acknowledge (520) completion of the request (404) to modify the dataset (442), for example, through the use of one or more acknowledgement (522) messages that are sent from the leader storage system (440) to the follower storage system (438), or via some other appropriate mechanism.
  • The example method depicted in FIG. 5A also includes receiving (514), from the leader storage system (440), the information (510) describing the modification to the dataset (442). The follower storage system (438) may receive (514) the information (510) describing the modification to the dataset (442) from the leader storage system (440), for example, via one or more messages that are sent from the leader storage system (440) to the follower storage system (438). The one or more messages may be sent from the leader storage system (440) to the follower storage system (438) via one or more dedicated data communications links between the two storage systems (438, 440), by the leader storage system (440) writing the message to a predetermined memory location (e.g., the location of a queue) on the follower storage system (438) using RDMA or a similar mechanism, or in other ways. Readers will appreciate that in the example method depicted in FIG. 5A, however, the leader storage system (440) does not need to send I/O payload associated with the request (404) to modify the dataset (442) to the follower storage system (438), as the follower storage system (438) can extract such I/O payload from the request (404) to modify the dataset (442) that was received by the follower storage system (438), the follower storage system (438) can extract such I/O payload from one or more other messages that were received from the host (402), or the follower storage system (438) can obtain the I/O payload in some other way by virtue of the fact that the follower storage system (438) was the target of the request (404) to modify the dataset (442) that was issued by the host (402).
  • In one embodiment, the follower storage system (438) may receive (514) the information (510) describing the modification to the dataset (442) from the leader storage system (440) through the use of SCSI requests (writes from sender to receiver, or reads from receiver to sender) as a communication mechanism. In such an embodiment, a SCSI WRITE request may be used to encode the information to be sent (including any data and metadata), which may be delivered to a special pseudo-device, over a specially configured SCSI network, or through any other agreed upon addressing mechanism. Alternatively, a receiver may issue a set of open SCSI READ requests to a sender, also using special devices, specially configured SCSI networks, or other agreed upon mechanisms. Encoded information, including data and metadata, may then be delivered to the receiver as a response to one or more of these open SCSI READ requests. Such a model can be implemented over Fibre Channel SCSI networks, which are often deployed as the “dark fibre” storage network infrastructure between data centers. Such a model also allows the use of the same network lines for host-to-remote-array multipathing and bulk array-to-array communications.
  • The example method depicted in FIG. 5A also includes processing (516), by the follower storage system (438), the request (404) to modify the dataset (442). In the example method depicted in FIG. 5A, the follower storage system (438) may process (516) the request (404) to modify the dataset (442) by modifying the contents of one or more storage devices (e.g., an NVRAM device, an SSD, an HDD) that are included in the follower storage system (438) in dependence upon the information (510) describing the modification to the dataset (442). Consider an example in which the request (404) to modify the dataset (442) is embodied as a write operation that is directed to a volume that is included in the dataset (442) and the information (510) describing the modification to the dataset (442) indicates that the write operation can only be executed after a previously issued write operation has been processed. In such an example, processing (516) the request (404) to modify the dataset (442) may be carried out by the follower storage system (438) first verifying that the previously issued write operation has been processed on the follower storage system (438) and subsequently writing I/O payload associated with the write operation to one or more storage devices that are included in the follower storage system (438). In such an example, the request (404) to modify the dataset (442) may be considered to have been completed and successfully processed, for example, when the I/O payload associated with the request (404) to modify the dataset (442) has been committed to persistent storage within the follower storage system (438).
  • The example method depicted in FIG. 5A also includes receiving (524), from the leader storage system (440), an indication that the leader storage system (440) has processed the request (404) to modify the dataset (442). In this example, the indication that the leader storage system (440) has processed the request (404) to modify the dataset (442) is embodied as an acknowledgement (522) message sent from the leader storage system (440) to the follower storage system (438). Readers will appreciate that although many of the steps described above are depicted and described as occurring in a particular order, no particular order is actually required. In fact, because the follower storage system (438) and the leader storage system (440) are independent storage systems, each storage system may be performing some of the steps described above in parallel. For example, the follower storage system (438) may receive (524), from the leader storage system (440), an indication that the leader storage system (440) has processed the request (404) to modify the dataset (442) prior to processing (516) the request (404) to modify the dataset (442). Likewise, the follower storage system (438) may receive (524), from the leader storage system (440), an indication that the leader storage system (440) has processed the request (404) to modify the dataset (442) prior to receiving (514) the information (410) describing the modification to the dataset (442) from the leader storage system (440).
  • The example method depicted in FIG. 5A also includes acknowledging (526), by the follower storage system (438), completion of the request (404) to modify the dataset (442). Acknowledging (526) completion of the request (404) to modify the dataset (442) may be carried out, for example, by the follower storage system (438) issuing an acknowledgement (528) message to the host (402) that issued the request (404) to modify the dataset (442). In the example method depicted in FIG. 5A, the follower storage system (438) may determine whether the request (404) to modify the dataset (442) has been processed (518) by the leader storage system (440) prior to acknowledging (526) completion of the request (404) to modify the dataset (442). The follower storage system (438) may determine whether the request (404) to modify the dataset (442) has been processed (518) by the leader storage system (440), for example, by determining whether the follower storage system (438) has received an acknowledgment message or other message from the leader storage system (440) indicating that the request (404) to modify the dataset (442) has been processed (518) by the leader storage system (440). In such an example, if the follower storage system (438) affirmatively determines that the request (404) to modify the dataset (442) has been processed (518) by the leader storage system (440) and the follower storage system (438) has also processed (516) the request (404) to modify the dataset (442), the follower storage system (438) may proceed by acknowledging (526) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442). If the follower storage system (438) determines that the request (404) to modify the dataset (442) has not been processed (518) by the leader storage system (440) or the follower storage system (438) has not yet processed (516) the request (404) to modify the dataset (442), however, the follower storage system (438) may not yet acknowledge (526) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442), as the follower storage system (438) may only acknowledge (526) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442) when the request (404) to modify the dataset (442) has been successfully processed on all storage systems (438, 440) across which the dataset (442) is synchronously replicated.
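  • A minimal Python sketch of the follower-side gate described above for FIG. 5A follows; it is illustrative only, and the names (apply_descriptor, follower_can_acknowledge_host, local_payloads) are assumptions introduced for this example. The follower that received the host request already holds the I/O payload, so the leader sends only the descriptor, and the follower acknowledges the host only once it has applied that descriptor and the leader has also reported completion.
    def follower_can_acknowledge_host(follower_processed, leader_ack_received):
        # Both the local processing and the leader's acknowledgement must be in hand.
        return follower_processed and leader_ack_received

    def apply_descriptor(descriptor, local_payloads, committed):
        # The leader sends only the descriptor; the I/O payload was retained when
        # the host request first arrived at this follower, so it is looked up locally.
        payload = local_payloads[descriptor["request_id"]]
        committed[(descriptor["volume"], descriptor["offset"])] = payload
        return True   # the follower has now processed the request

    local_payloads = {404: b"host payload"}          # captured when the host request arrived
    committed = {}
    descriptor = {"request_id": 404, "volume": "vol1", "offset": 0}
    processed = apply_descriptor(descriptor, local_payloads, committed)
    assert not follower_can_acknowledge_host(processed, leader_ack_received=False)
    assert follower_can_acknowledge_host(processed, leader_ack_received=True)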
  • For further explanation, FIG. 5B sets forth a flow chart illustrating an example method for servicing I/O operations directed to a dataset (442) that is synchronized across a plurality of storage systems (438, 440, 534) according to some embodiments of the present disclosure. Although depicted in less detail, the storage systems (438, 440, 534) depicted in FIG. 5B may be similar to the storage systems described above with reference to FIGS. 1A-1D, FIGS. 2A-2G, FIGS. 3A-3B, or any combination thereof. In fact, the storage systems depicted in FIG. 5B may include the same, fewer, or additional components as the storage systems described above.
  • The example method depicted in FIG. 5B may be similar to the example method depicted in FIG. 5A, as the example method depicted in FIG. 5B also includes: receiving (502), by a follower storage system (438), a request (404) to modify the dataset (442); sending (504), from the follower storage system (438) to a leader storage system (440), a logical description (506) of the request (404) to modify the dataset (442); generating (508), by the leader storage system (440), information (510) describing the modification to the dataset (442); processing (518), by the leader storage system (440), the request (404) to modify the dataset (442); acknowledging (520), by the leader storage system (440) to the follower storage system (438), completion of the request (404) to modify the dataset (442); receiving (514), from the leader storage system (440), the information (510) describing the modification to the dataset (442); processing (516), by the follower storage system (438), the request (404) to modify the dataset (442); receiving (524), from the leader storage system (440), an indication that the leader storage system (440) has processed the request (404) to modify the dataset (442); and acknowledging (526), by the follower storage system (438), completion of the request (404) to modify the dataset (442).
  • The example method depicted in FIG. 5B differs from the example method depicted in FIG. 5A, however, as the example method depicted in FIG. 5B depicts an embodiment in which the dataset (442) is synchronously replicated across three storage systems, where one of the storage systems is a leader storage system (440) and the remaining storage systems are follower storage systems (438, 534). In such an example, the additional follower storage system (534) carries out many of the same steps as the follower storage system (438) that was depicted in FIG. 5A, as the additional follower storage system (534) can: receive (442), from the leader storage system (440), information (510) describing the modification to the dataset (442) and also process (442) the request (404) to modify the dataset (442) in dependence upon the information (510) describing the modification to the dataset (442).
  • In the example method depicted in FIG. 5B, the leader storage system (440) can send (538) the information (510) describing the modification to the dataset (442) to all of the follower storage systems (438, 534). In the example method depicted in FIG. 5B, the additional follower storage system (534) can also acknowledge (530) completion of the request (404) to modify the dataset (442) to the follower storage system (438) that received (502) the request (404) to modify the dataset (442). In the example method depicted in FIG. 5B, the additional follower storage system (534) can acknowledge (530) completion of the request (404) to modify the dataset (442) to the follower storage system (438) that received (502) the request (404) to modify the dataset (442), for example, through the use of one or more acknowledgement (532) messages that are sent from the additional follower storage system (534) to the follower storage system (438) that received (502) the request (404) to modify the dataset (442), or via some other appropriate mechanism.
  • In the example method depicted in FIG. 5B, the follower storage system (438) that received (502) the request (404) to modify the dataset (442) may also receive (536) an indication that all other follower storage systems (534) have processed the request (404) to modify the dataset (442). In this example, the indication that all other follower storage systems (534) have processed the request (404) to modify the dataset (442) is embodied as an acknowledgement (532) message sent from the other follower storage system (534) to the follower storage system (438) that received (502) the request (404) to modify the dataset (442). Readers will appreciate that although many of the steps described above are depicted and described as occurring in a particular order, no particular order is actually required. In fact, because the follower storage systems (438, 534) and the leader storage system (440) are each independent storage systems, each storage system may be performing some of the steps described above in parallel. For example, the follower storage system (438) may receive (524), from the leader storage system (440), an indication that the leader storage system (440) has processed the request (404) to modify the dataset (442) prior to processing (516) the request (404) to modify the dataset (442). In addition, the follower storage system (438) may receive (536) an indication that all other follower storage systems (534) have processed the request (404) to modify the dataset (442) prior to receiving (524) an indication that the leader storage system (440) has processed the request (404) to modify the dataset (442). Alternatively, the follower storage system (438) may receive (536) an indication that all other follower storage systems (534) have processed the request (404) to modify the dataset (442) prior to processing (516) the request (404) to modify the dataset (442). Likewise, the follower storage system (438) may receive (524), from the leader storage system (440), an indication that the leader storage system (440) has processed the request (404) to modify the dataset (442) prior to receiving (514) the information (510) describing the modification to the dataset (442) from the leader storage system (440). In addition, the follower storage system (438) may receive (536) an indication that all other follower storage systems (534) have processed the request (404) to modify the dataset (442) prior to receiving (514) the information (510) describing the modification to the dataset (442) from the leader storage system (440).
  • Although not expressly depicted in FIG. 5B, the follower storage system (438) may determine whether the request (404) to modify the dataset (442) has been processed (518) by the leader storage system (440) and also processed (444) by all other follower storage systems (534) prior to acknowledging (526) completion of the request (404) to modify the dataset (442). The follower storage system (438) may determine whether the request (404) to modify the dataset (442) has been processed (518) by the leader storage system (440) and also processed (444) by all other follower storage systems (534), for example, by determining whether the follower storage system (438) has received acknowledgment messages from the leader storage system (440) and all other follower storage systems (534) indicating that the request (404) to modify the dataset (442) has been processed (518, 444) by each storage system (440, 534). In such an example, if the follower storage system (438) affirmatively determines that the request (404) to modify the dataset (442) has been processed by the leader storage system (440), all other follower storage systems (534), and the follower storage system (438), the follower storage system (438) may proceed by acknowledging (526) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442). If the follower storage system (438) determines that the request (404) to modify the dataset (442) has not been processed by at least one of the leader storage system (440), the other follower storage systems (534), or the follower storage system (438) itself, however, the follower storage system (438) may not yet acknowledge (526) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442), as the follower storage system (438) may only acknowledge (526) completion of the request (404) to modify the dataset (442) to the host (402) that initiated the request (404) to modify the dataset (442) when the request (404) to modify the dataset (442) has been successfully processed on all storage systems (438, 440, 534) across which the dataset (442) is synchronously replicated.
  • Although not expressly depicted in FIG. 5B, in some embodiments, in an effort to unblock any concurrent overlapping reads executing on one or more of the storage systems (438, 440, 534), the follower storage system (438) that received (502) the request (404) to modify the dataset (442) can send a message back to the leader storage system (440) and to other follower storage systems (534) to signal that the modifying operation has completed everywhere. Alternately, the follower storage system (438) that received (502) the request (404) to modify the dataset (442) could send that message to the leader storage system (440) and the leader storage system (440) could send a message to propagate the completion and unblock reads elsewhere.
  • Readers will appreciate that although the example method depicted in FIG. 5B depicts an embodiment in which the dataset (442) is synchronously replicated across three storage systems, where one of the storage systems is a leader storage system (440) and the remaining storage systems are follower storage systems (438, 534), other embodiments may include additional storage systems. In such other embodiments, additional follower storage systems may operate in the same way as the other follower storage system (534) depicted in FIG. 5B.
  • Readers will also appreciate that although only the example depicted in FIG. 4B expressly depicts an embodiment in which the information (410) describing the modification to the dataset (442) includes ordering information (452) for the request (404) to modify the dataset (442), common metadata information (454) associated with the request (404) to modify the dataset (442), and I/O payload (414) associated with the request (404) to modify the dataset (442), the information describing the modification to the dataset (442) can include all (or a subset) of such information in the examples depicted in the remaining figures. Further, in embodiments where the request (404) to modify the dataset (442) includes a request to take a snapshot of the dataset (442), the information describing the modification to the dataset (442) can also include an identification of one or more other requests to modify the dataset (442) that are to be included in the content of the snapshot of the dataset (442) in each of the figures described above.
  • Readers will appreciate that as a result of the information (510) describing the modification to the dataset (442) including an identification of one or more other requests to modify the dataset (442) that are to be included in the content of the snapshot of the dataset (442), rather than including information identifying one or more other requests to modify the dataset (442) that must be completed prior to taking the snapshot, a few situations can be addressed. One is that an atomic operation could perform a snapshot and complete the last few writes in the same atomic update, meaning that the last few writes do not complete “prior” to the snapshot. Another is that writes could actually be completed after the snapshot point is taken as long as when the writes are completed they are included and as long as the snapshot itself isn't considered complete until all writes are completed by all in-sync storage systems. Finally, a write that had not been indicated to a requestor as completed prior to the snapshot being received could be included or left out of the snapshot as a result of recovery actions. Essentially, recovery can rewrite the detailed history of received operations as long as the result is consistent and doesn't violate any guarantees related to operations that were signaled as having completed.
  • Example embodiments are described largely in the context of a fully functional system. Readers of skill in the art will recognize, however, that the present disclosure also may be embodied in a computer program product disposed upon computer readable storage media for use with any suitable data processing system. Such computer readable storage media may be any storage medium for machine-readable information, including magnetic media, optical media, or other suitable media. Examples of such media include magnetic disks in hard drives or diskettes, compact disks for optical drives, magnetic tape, and others as will occur to those of skill in the art. Persons skilled in the art will immediately recognize that any computer system having suitable programming means will be capable of executing the steps of the method as embodied in a computer program product. Persons skilled in the art will recognize also that, although some of the example embodiments described in this specification are oriented to software installed and executing on computer hardware, nevertheless, alternative embodiments implemented as firmware or as hardware are well within the scope of the present disclosure.
  • Embodiments can include a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
  • The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to some embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
  • Readers will appreciate that the steps described herein may be carried out in a variety of ways and that no particular ordering is required. It will be further understood from the foregoing description that modifications and changes may be made in various embodiments of the present disclosure without departing from its true spirit. The descriptions in this specification are for purposes of illustration only and are not to be construed in a limiting sense. The scope of the present disclosure is limited only by the language of the following claims.

Claims (20)

What is claimed is:
1. A method comprising:
receiving, by a leader storage system, a request to modify a dataset that is synchronized across a plurality of storage systems;
sending, from the leader storage system to a follower storage system, information describing the request to modify the dataset, wherein the leader storage system and the follower storage system each store a copy of the dataset;
processing, by the leader storage system on the copy of the dataset that is stored on the leader storage system, the request to modify the dataset;
receiving, from the follower storage system, an indication that the follower storage system has processed the request to modify the dataset on the copy of the dataset that is stored on the follower storage system; and
acknowledging, by the leader storage system, completion of the request to modify the dataset.
2. The method of claim 1 wherein the information describing the request to modify the dataset includes ordering information for the request to modify the dataset.
3. The method of claim 2 wherein:
the request to modify the dataset includes a request to take a snapshot of the dataset; and
the ordering information for the request to modify the dataset includes an identification of one or more other requests to modify the dataset that are to be included in the content of the snapshot of the dataset.
4. The method of claim 1 wherein the information describing the request to modify the dataset includes common metadata information associated with the request to modify the dataset.
5. The method of claim 1 further comprising sending, from the follower storage system to the leader storage system, an indication that the follower storage system has processed the request to modify the dataset on the copy of the dataset that is stored on the follower storage system.
6. The method of claim 1 wherein the plurality of storage systems includes at least three storage systems, including one leader storage system and at least two follower storage systems.
7. The method of claim 6 further comprising receiving, by the leader storage system that received the request to modify the dataset, an indication that all follower storage systems have processed the request to modify the dataset.
8. The method of claim 1 wherein:
the request to modify the dataset that is synchronized across the plurality of storage systems is received by the leader storage system from a host computing device, and
the leader storage system acknowledges completion of the request to modify the dataset to the host computing device.
9. A storage system that includes a plurality of storage devices, the storage system including a computer processor and a computer memory that includes computer program instructions that, when executed by the computer processor, cause the storage system to carry out the steps of:
receiving a request to modify a dataset that is synchronized across a plurality of storage systems;
sending, to a follower storage system, information describing the request to modify the dataset, wherein the storage system and the follower storage system each store a copy of the dataset;
processing, on the copy of the dataset that is stored on the storage system, the request to modify the dataset;
receiving, from the follower storage system, an indication that the follower storage system has processed the request to modify the dataset on the copy of the dataset that is stored on the follower storage system; and
acknowledging completion of the request to modify the dataset.
10. The storage system of claim 9 wherein the information describing the request to modify the dataset includes ordering information for the request to modify the dataset.
11. The storage system of claim 10 wherein the request to modify the dataset includes a request to take a snapshot of the dataset; and
the ordering information for the request to modify the dataset includes an identification of one or more other requests to modify the dataset that are to be included in the content of the snapshot of the dataset.
12. The storage system of claim 9 wherein the information describing the request to modify the dataset includes common metadata information associated with the request to modify the dataset.
13. The storage system of claim 9 wherein the plurality of storage systems includes at least three storage systems, including one leader storage system and at least two follower storage systems.
14. The storage system of claim 13 further comprising computer program instructions that, when executed by the computer processor, cause the storage system to carry out the step of receiving an indication that all follower storage systems have processed the request to modify the dataset.
15. The storage system of claim 9 wherein:
the request to modify the dataset that is synchronized across the plurality of storage systems is received by the storage system from a host computing device, and
the storage system acknowledges completion of the request to modify the dataset to the host computing device.
16. A computer program product disposed on a non-transitory computer readable medium, the computer program product including computer program instructions that, when executed, carry out the steps of:
receiving a request to modify a dataset that is synchronized across a plurality of storage systems;
sending, to a follower storage system, information describing the request to modify the dataset, wherein the storage system and the follower storage system each store a copy of the dataset;
processing, on the copy of the dataset that is stored on the storage system, the request to modify the dataset;
receiving, from the follower storage system, an indication that the follower storage system has processed the request to modify the dataset on the copy of the dataset that is stored on the follower storage system; and
acknowledging completion of the request to modify the dataset.
17. The computer program product of claim 16 wherein the information describing the request to modify the dataset includes ordering information for the request to modify the dataset.
18. The computer program product of claim 17 wherein the request to modify the dataset includes a request to take a snapshot of the dataset; and
the ordering information for the request to modify the dataset includes an identification of one or more other requests to modify the dataset that are to be included in the content of the snapshot of the dataset.
19. The computer program product of claim 16 wherein the information describing the request to modify the dataset includes common metadata information associated with the request to modify the dataset.
20. The computer program product of claim 16 wherein the plurality of storage systems includes at least three storage systems, including one leader storage system and at least two follower storage systems, and the computer program product further comprises computer program instructions that, when executed, carry out the step of receiving an indication that all follower storage systems have processed the request to modify the dataset.
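Readers seeking a concrete orientation to the leader/follower coordination recited in the claims above may find the following minimal sketch helpful. It is written in Python purely for illustration; the class names, the in-process "followers," the dictionary-based dataset, and the synchronous calls are assumptions chosen for readability, not details drawn from the specification or from any particular storage system implementation.

    # Minimal, hypothetical sketch of leader-coordinated synchronous replication.
    # Names and structure are illustrative assumptions, not the claimed implementation.
    import uuid
    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class ModifyRequest:
        """A request to modify the synchronously replicated dataset."""
        op: str                                    # e.g. 'write' or 'snapshot'
        payload: Dict = field(default_factory=dict)
        request_id: str = field(default_factory=lambda: str(uuid.uuid4()))


    class Follower:
        """A follower storage system holding its own copy of the dataset."""

        def __init__(self, name: str):
            self.name = name
            self.dataset: Dict[str, str] = {}
            self.snapshots: Dict[str, Dict[str, str]] = {}

        def apply(self, request: ModifyRequest, ordering_info: List[str], metadata: Dict) -> str:
            # Process the request against the local copy of the dataset using the
            # ordering information and common metadata supplied by the leader,
            # then return an indication that the request has been processed.
            if request.op == 'write':
                self.dataset[request.payload['key']] = request.payload['value']
            elif request.op == 'snapshot':
                # The ordering information identifies which earlier requests are
                # included in the content of the snapshot.
                self.snapshots[request.request_id] = dict(self.dataset)
            return request.request_id


    class Leader:
        """The leader storage system coordinating the follower storage systems."""

        def __init__(self, followers: List[Follower]):
            self.followers = followers
            self.dataset: Dict[str, str] = {}
            self.snapshots: Dict[str, Dict[str, str]] = {}
            self.applied: List[str] = []           # source of ordering information

        def handle(self, request: ModifyRequest) -> str:
            ordering_info = list(self.applied)                 # requests preceding this one
            metadata = {'request_id': request.request_id}      # common metadata

            # Send information describing the request to every follower. In a real
            # system these sends and the leader's own processing would overlap; the
            # synchronous calls here simply keep the sketch short.
            indications = [f.apply(request, ordering_info, metadata) for f in self.followers]

            # Process the request against the leader's own copy of the dataset.
            if request.op == 'write':
                self.dataset[request.payload['key']] = request.payload['value']
            elif request.op == 'snapshot':
                self.snapshots[request.request_id] = dict(self.dataset)
            self.applied.append(request.request_id)

            # Acknowledge completion only after every follower has indicated that it
            # processed the request on its copy of the dataset.
            assert all(i == request.request_id for i in indications)
            return f"acknowledged {request.request_id}"


    if __name__ == '__main__':
        leader = Leader([Follower('follower-b'), Follower('follower-c')])
        print(leader.handle(ModifyRequest('write', {'key': 'volume-1/block-0', 'value': 'data'})))
        print(leader.handle(ModifyRequest('snapshot')))

In this sketch the ordering information accompanying a snapshot request identifies the modifications the leader has already applied, which is the role ordering information plays in the dependent snapshot claims, and the single assertion before the return stands in for waiting on indications from all followers when two or more follower storage systems are present.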
US17/537,976 2017-03-10 2021-11-30 Modifying A Synchronously Replicated Dataset Pending US20220091977A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/537,976 US20220091977A1 (en) 2017-03-10 2021-11-30 Modifying A Synchronously Replicated Dataset

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201762470172P 2017-03-10 2017-03-10
US201762518071P 2017-06-12 2017-06-12
US15/671,518 US10521344B1 (en) 2017-03-10 2017-08-08 Servicing input/output (‘I/O’) operations directed to a dataset that is synchronized across a plurality of storage systems
US16/680,746 US11210219B1 (en) 2017-03-10 2019-11-12 Synchronously replicating a dataset across a plurality of storage systems
US17/537,976 US20220091977A1 (en) 2017-03-10 2021-11-30 Modifying A Synchronously Replicated Dataset

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/680,746 Continuation US11210219B1 (en) 2017-03-10 2019-11-12 Synchronously replicating a dataset across a plurality of storage systems

Publications (1)

Publication Number Publication Date
US20220091977A1 true US20220091977A1 (en) 2022-03-24

Family

ID=67394294

Family Applications (26)

Application Number Title Priority Date Filing Date
US15/671,518 Active US10521344B1 (en) 2017-03-10 2017-08-08 Servicing input/output (‘I/O’) operations directed to a dataset that is synchronized across a plurality of storage systems
US15/683,823 Active 2037-12-06 US10680932B1 (en) 2017-03-10 2017-08-23 Managing connectivity to synchronously replicated storage systems
US15/696,418 Active 2038-10-13 US11422730B1 (en) 2017-03-10 2017-09-06 Recovery for storage systems synchronously replicating a dataset
US15/703,559 Active 2037-12-09 US10558537B1 (en) 2017-03-10 2017-09-13 Mediating between storage systems synchronously replicating a dataset
US15/713,153 Active 2037-12-22 US10365982B1 (en) 2017-03-10 2017-09-22 Establishing a synchronous replication relationship between two or more storage systems
US15/800,760 Active 2038-03-30 US10585733B1 (en) 2017-03-10 2017-11-01 Determining active membership among storage systems synchronously replicating a dataset
US15/800,857 Active 2038-06-16 US10671408B1 (en) 2017-03-10 2017-11-01 Automatic storage system configuration for mediation services
US15/835,054 Active 2038-03-29 US10613779B1 (en) 2017-03-10 2017-12-07 Determining membership among storage systems synchronously replicating a dataset
US15/838,859 Active 2038-11-21 US10884993B1 (en) 2017-03-10 2017-12-12 Synchronizing metadata among storage systems synchronously replicating a dataset
US16/519,474 Active 2037-09-29 US10990490B1 (en) 2017-03-10 2019-07-23 Creating a synchronous replication lease between two or more storage systems
US16/680,746 Active US11210219B1 (en) 2017-03-10 2019-11-12 Synchronously replicating a dataset across a plurality of storage systems
US16/702,538 Active US11237927B1 (en) 2017-03-10 2019-12-04 Resolving disruptions between storage systems replicating a dataset
US16/778,183 Active 2038-01-10 US11379285B1 (en) 2017-03-10 2020-01-31 Mediation for synchronous replication
US16/815,317 Active 2038-03-08 US11347606B2 (en) 2017-03-10 2020-03-11 Responding to a change in membership among storage systems synchronously replicating a dataset
US16/888,572 Active 2037-11-27 US11954002B1 (en) 2017-03-10 2020-05-29 Automatically provisioning mediation services for a storage system
US16/891,398 Active 2037-09-10 US11500745B1 (en) 2017-03-10 2020-06-03 Issuing operations directed to synchronously replicated data
US17/088,152 Active 2038-08-24 US11687500B1 (en) 2017-03-10 2020-11-03 Updating metadata for a synchronously replicated dataset
US17/537,976 Pending US20220091977A1 (en) 2017-03-10 2021-11-30 Modifying A Synchronously Replicated Dataset
US17/588,619 Active US11645173B2 (en) 2017-03-10 2022-01-31 Resilient mediation between storage systems replicating a dataset
US17/825,031 Active US11698844B2 (en) 2017-03-10 2022-05-26 Managing storage systems that are synchronously replicating a dataset
US17/845,690 Active US11687423B2 (en) 2017-03-10 2022-06-21 Prioritizing highly performant storage systems for servicing a synchronously replicated dataset
US17/957,045 Active US11789831B2 (en) 2017-03-10 2022-09-30 Directing operations to synchronously replicated storage systems
US18/309,924 Pending US20230289267A1 (en) 2017-03-10 2023-05-01 Continuing To Service A Dataset After Prevailing In Mediation
US18/320,751 Pending US20230289268A1 (en) 2017-03-10 2023-05-19 Managing Storage System Replication
US18/339,834 Pending US20230333947A1 (en) 2017-03-10 2023-06-22 Replication using shared content mappings
US18/341,568 Pending US20230342271A1 (en) 2017-03-10 2023-06-26 Performance-Based Prioritization For Storage Systems Replicating A Dataset

Country Status (1)

Country Link
US (26) US10521344B1 (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10484474B2 (en) * 2013-08-29 2019-11-19 Pure Storage, Inc. Rotating offline DS units
CN107045422B (en) * 2016-02-06 2020-12-01 华为技术有限公司 Distributed storage method and device
US11803453B1 (en) 2017-03-10 2023-10-31 Pure Storage, Inc. Using host connectivity states to avoid queuing I/O requests
US10521344B1 (en) 2017-03-10 2019-12-31 Pure Storage, Inc. Servicing input/output (‘I/O’) operations directed to a dataset that is synchronized across a plurality of storage systems
US11442825B2 (en) 2017-03-10 2022-09-13 Pure Storage, Inc. Establishing a synchronous replication relationship between two or more storage systems
US11238164B2 (en) * 2017-07-10 2022-02-01 Burstiq, Inc. Secure adaptive data storage platform
US10831935B2 (en) * 2017-08-31 2020-11-10 Pure Storage, Inc. Encryption management with host-side data reduction
US10733066B2 (en) * 2018-03-09 2020-08-04 Hewlett Packard Enterprise Development Lp Persistent reservation commands in a distributed storage system
US10917471B1 (en) * 2018-03-15 2021-02-09 Pure Storage, Inc. Active membership in a cloud-based storage system
US11128578B2 (en) * 2018-05-21 2021-09-21 Pure Storage, Inc. Switching between mediator services for a storage system
JP7326707B2 (en) * 2018-06-21 2023-08-16 カシオ計算機株式会社 Robot, robot control method and program
US11269917B1 (en) * 2018-07-13 2022-03-08 Cisco Technology, Inc. Secure cluster pairing for business continuity and disaster recovery
US10860444B2 (en) * 2018-07-30 2020-12-08 EMC IP Holding Company LLC Seamless mobility for kubernetes based stateful pods using moving target defense
JP7053399B2 (en) 2018-07-31 2022-04-12 キオクシア株式会社 Information processing system
US11237750B2 (en) 2018-08-30 2022-02-01 Portworx, Inc. Dynamic volume replication factor adjustment
US11449367B2 (en) * 2019-02-27 2022-09-20 International Business Machines Corporation Functional completion when retrying a non-interruptible instruction in a bi-modal execution environment
US10936010B2 (en) * 2019-03-08 2021-03-02 EMC IP Holding Company LLC Clock synchronization for storage systems in an active-active configuration
US11322236B1 (en) * 2019-04-03 2022-05-03 Precis, Llc Data abstraction system architecture not requiring interoperability between data providers
JP7326903B2 (en) * 2019-06-14 2023-08-16 富士フイルムビジネスイノベーション株式会社 Information processing device and program
CN112307113A (en) * 2019-07-29 2021-02-02 中兴通讯股份有限公司 Service request message sending method and distributed database architecture
US11567905B2 (en) * 2019-09-27 2023-01-31 Dell Products, L.P. Method and apparatus for replicating a concurrently accessed shared filesystem between storage clusters
US11500573B2 (en) * 2019-12-12 2022-11-15 ExxonMobil Technology and Engineering Company Multiple interface data exchange application for use in process control
US11818114B2 (en) * 2020-06-12 2023-11-14 Strata Identity, Inc. Systems, methods, and storage media for synchronizing identity information across identity domains in an identity infrastructure
US20220050855A1 (en) * 2020-08-14 2022-02-17 Snowflake Inc. Data exchange availability, listing visibility, and listing fulfillment
JP7153942B2 (en) * 2020-08-17 2022-10-17 ラトナ株式会社 Information processing device, method, computer program, and recording medium
US11651096B2 (en) 2020-08-24 2023-05-16 Burstiq, Inc. Systems and methods for accessing digital assets in a blockchain using global consent contracts
CN116466876A (en) * 2020-09-11 2023-07-21 华为技术有限公司 Storage system and data processing method
KR20220039404A (en) * 2020-09-22 2022-03-29 에스케이하이닉스 주식회사 Memory system and operating method of memory system
US11573937B2 (en) * 2020-10-09 2023-02-07 Bank Of America Corporation System and method for automatically resolving metadata structure discrepancies
CN112346661B (en) * 2020-11-16 2023-09-29 脸萌有限公司 Data processing method and device and electronic equipment
US11593352B2 (en) * 2020-11-23 2023-02-28 Sap Se Cloud-native object storage for page-based relational database
US20220197544A1 (en) * 2020-12-17 2022-06-23 Electronics And Telecommunications Research Institute Apparatus and method for selecting storage location based on data usage
US11656795B2 (en) * 2021-01-21 2023-05-23 EMC IP Holding Company LLC Indicating optimized and non-optimized paths to hosts using NVMe-oF in a metro cluster storage system
US11822801B2 (en) * 2021-03-10 2023-11-21 EMC IP Holding Company LLC Automated uniform host attachment
US11954124B2 (en) 2021-04-19 2024-04-09 Wealthfront Corporation Synchronizing updates of records in a distributed system
CN113590033B (en) * 2021-06-30 2023-11-03 郑州云海信息技术有限公司 Information synchronization method and device of super fusion system
CN113608691A (en) * 2021-07-24 2021-11-05 济南浪潮数据技术有限公司 High-availability method and device for NFS (network file system) of storage array
US11556482B1 (en) 2021-09-30 2023-01-17 International Business Machines Corporation Security for address translation services
WO2023070159A1 (en) * 2021-10-29 2023-05-04 Safecret Pty Ltd A data storage and management system
US11954073B2 (en) * 2022-03-16 2024-04-09 International Business Machines Corporation Multi-protocol multi-site replication
CN116166477B (en) * 2022-11-30 2024-02-13 郭东升 Dual-activity gateway system and method for storing different brands of objects
CN116149558B (en) * 2023-02-21 2023-10-27 北京志凌海纳科技有限公司 Copy allocation strategy system and method in distributed storage dual-active mode

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6105078A (en) * 1997-12-18 2000-08-15 International Business Machines Corporation Extended remote copying system for reporting both active and idle conditions wherein the idle condition indicates no updates to the system for a predetermined time period
US20050050286A1 (en) * 2003-08-28 2005-03-03 International Busines Machines Corporation Apparatus and method for asynchronous logical mirroring
US20180267723A1 (en) * 2017-03-20 2018-09-20 International Business Machines Corporation Processing a recall request for data migrated from a primary storage system having data mirrored to a secondary storage system
US10613789B1 (en) * 2014-03-31 2020-04-07 EMC IP Holding Company LLC Analytics engine using consistent replication on distributed sites

Family Cites Families (283)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544347A (en) * 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
US5233604A (en) * 1992-04-28 1993-08-03 International Business Machines Corporation Methods and apparatus for optimum path selection in packet transmission networks
US5651133A (en) 1995-02-01 1997-07-22 Hewlett-Packard Company Methods for avoiding over-commitment of virtual capacity in a redundant hierarchic data storage system
JPH08242229A (en) 1995-03-01 1996-09-17 Fujitsu Ltd State matching processing system for monitoring network
US5799200A (en) 1995-09-28 1998-08-25 Emc Corporation Power failure responsive apparatus and method having a shadow dram, a flash ROM, an auxiliary battery, and a controller
US6012032A (en) 1995-11-30 2000-01-04 Electronic Data Systems Corporation System and method for accounting of computer data storage utilization
US5933598A (en) 1996-07-17 1999-08-03 Digital Equipment Corporation Method for sharing variable-grained memory of workstations by sending particular block including line and size of the block to exchange shared data structures
US6108699A (en) * 1997-06-27 2000-08-22 Sun Microsystems, Inc. System and method for modifying membership in a clustered distributed computer system and updating system configuration
US5999712A (en) * 1997-10-21 1999-12-07 Sun Microsystems, Inc. Determining cluster membership in a distributed computer system
US6085333A (en) 1997-12-19 2000-07-04 Lsi Logic Corporation Method and apparatus for synchronization of code in redundant controllers in a swappable environment
US6360330B1 (en) * 1998-03-31 2002-03-19 Emc Corporation System and method for backing up data stored in multiple mirrors on a mass storage subsystem under control of a backup server
US6163855A (en) * 1998-04-17 2000-12-19 Microsoft Corporation Method and system for replicated and consistent modifications in a server cluster
US6148383A (en) * 1998-07-09 2000-11-14 International Business Machines Corporation Storage system employing universal timer for peer-to-peer asynchronous maintenance of consistent mirrored storage
US7774469B2 (en) * 1999-03-26 2010-08-10 Massa Michael T Consistent cluster operational data in a server cluster using a quorum of replicas
US7065538B2 (en) * 2000-02-11 2006-06-20 Quest Software, Inc. System and method for reconciling transactions between a replication system and a recovered database
US6647514B1 (en) 2000-03-23 2003-11-11 Hewlett-Packard Development Company, L.P. Host I/O performance and availability of a storage array during rebuild by prioritizing I/O request
US6643641B1 (en) 2000-04-27 2003-11-04 Russell Snyder Web search engine with graphic snapshots
JP3968207B2 (en) 2000-05-25 2007-08-29 株式会社日立製作所 Data multiplexing method and data multiplexing system
US6640287B2 (en) * 2000-06-10 2003-10-28 Hewlett-Packard Development Company, L.P. Scalable multiprocessor system and cache coherence method incorporating invalid-to-dirty requests
JP2002041305A (en) 2000-07-26 2002-02-08 Hitachi Ltd Allocating method of computer resource in virtual computer system, and virtual computer system
US6789162B1 (en) 2000-10-17 2004-09-07 Sun Microsystems, Inc. Storage controller configured to select unused regions of a storage device for data storage according to head position
US20020140726A1 (en) * 2000-12-22 2002-10-03 Schwartz Richard L. Method and system for facilitating mediated communication
US20020141388A1 (en) * 2000-12-22 2002-10-03 Schwartz Richard L. Method and system for facilitating mediated communication
US7751383B2 (en) * 2000-12-22 2010-07-06 Openwave Systems Inc. Method and system for facilitating mediated communication
US6985924B2 (en) * 2000-12-22 2006-01-10 Solomio Corporation Method and system for facilitating mediated communication
US20030005126A1 (en) * 2001-05-25 2003-01-02 Solomio Corp. Method and system for facilitating interactive communication
US20040139125A1 (en) * 2001-06-05 2004-07-15 Roger Strassburg Snapshot copy of data volume during data access
US7640582B2 (en) * 2003-04-16 2009-12-29 Silicon Graphics International Clustered filesystem for mix of trusted and untrusted nodes
US7765329B2 (en) * 2002-06-05 2010-07-27 Silicon Graphics International Messaging between heterogeneous clients of a storage area network
US7617292B2 (en) * 2001-06-05 2009-11-10 Silicon Graphics International Multi-class heterogeneous clients in a clustered filesystem
US7016946B2 (en) * 2001-07-05 2006-03-21 Sun Microsystems, Inc. Method and system for establishing a quorum for a geographically distributed cluster of computers
US7032003B1 (en) * 2001-08-13 2006-04-18 Union Gold Holdings, Ltd. Hybrid replication scheme with data and actions for wireless devices
US20050114285A1 (en) * 2001-11-16 2005-05-26 Cincotta Frank A. Data replication system and method
US6857045B2 (en) 2002-01-25 2005-02-15 International Business Machines Corporation Method and system for updating data in a compressed read cache
US6728738B2 (en) 2002-04-03 2004-04-27 Sun Microsystems, Inc. Fast lifetime analysis of objects in a garbage-collected system
US6978396B2 (en) * 2002-05-30 2005-12-20 Solid Information Technology Oy Method and system for processing replicated transactions parallel in secondary server
US6895464B2 (en) 2002-06-03 2005-05-17 Honeywell International Inc. Flash memory management system and method utilizing multiple block list windows
US7876693B2 (en) * 2002-06-04 2011-01-25 Alcatel-Lucent Usa Inc. Testing and error recovery across multiple switching fabrics
US7334124B2 (en) 2002-07-22 2008-02-19 Vormetric, Inc. Logical access block processing protocol for transparent secure file storage
US7158998B2 (en) * 2002-07-31 2007-01-02 Cingular Wireless Ii, Llc Efficient synchronous and asynchronous database replication
US20040024808A1 (en) * 2002-08-01 2004-02-05 Hitachi, Ltd. Wide area storage localization system
US7219149B2 (en) * 2003-06-12 2007-05-15 Dw Holdings, Inc. Versatile terminal adapter and network for transaction processing
US7146521B1 (en) 2002-08-21 2006-12-05 3Pardata, Inc. Preventing damage of storage devices and data loss in a data storage system
CA2461446A1 (en) 2002-08-29 2004-03-11 Matsushita Electric Industrial Co., Ltd. Semiconductor memory apparatus and method for writing data into the flash memory device
AU2002335996A1 (en) * 2002-09-11 2004-04-30 Nokia Corporation Method, device and system for automated synchronization between terminals
US20040153844A1 (en) 2002-10-28 2004-08-05 Gautam Ghose Failure analysis method and system for storage area networks
US6831865B2 (en) 2002-10-28 2004-12-14 Sandisk Corporation Maintaining erase counts in non-volatile storage systems
US7072905B2 (en) 2002-12-06 2006-07-04 Sun Microsystems, Inc. Better placement of objects reachable from outside a generation managed by the train algorithm
US20040153841A1 (en) * 2003-01-16 2004-08-05 Silicon Graphics, Inc. Failure hierarchy in a cluster filesystem
US6898685B2 (en) * 2003-03-25 2005-05-24 Emc Corporation Ordering data writes from a local storage device to a remote storage device
US7181580B2 (en) 2003-03-27 2007-02-20 International Business Machines Corporation Secure pointers
WO2004095201A2 (en) 2003-04-09 2004-11-04 Intervideo Inc. Systems and methods for caching multimedia data
US20040210656A1 (en) * 2003-04-16 2004-10-21 Silicon Graphics, Inc. Failsafe operation of storage area network
US7437530B1 (en) 2003-04-24 2008-10-14 Network Appliance, Inc. System and method for mapping file block numbers to logical block addresses
US7120824B2 (en) * 2003-05-09 2006-10-10 International Business Machines Corporation Method, apparatus and program storage device for maintaining data consistency and cache coherency during communications failures between nodes in a remote mirror pair
US20040243699A1 (en) * 2003-05-29 2004-12-02 Mike Koclanes Policy based management of storage resources
US7434097B2 (en) 2003-06-05 2008-10-07 Copan System, Inc. Method and apparatus for efficient fault-tolerant disk drive replacement in raid storage systems
US7089272B1 (en) 2003-06-18 2006-08-08 Sun Microsystems, Inc. Specializing write-barriers for objects in a garbage collected heap
JP4124348B2 (en) 2003-06-27 2008-07-23 株式会社日立製作所 Storage system
EP1692264A2 (en) 2003-10-28 2006-08-23 The Johns Hopkins University Quantitative multiplex methylation-specific pcr
US7434214B2 (en) 2004-01-21 2008-10-07 International Business Machines Corporation Method for determining a close approximate benefit of reducing memory footprint of a Java application
US20050188246A1 (en) 2004-02-25 2005-08-25 Emberty Robert G. Persistent worldwide names assigned to removable media storage
US7120769B2 (en) * 2004-03-08 2006-10-10 Hitachi, Ltd. Point in time remote copy for multiple sites
US7526684B2 (en) 2004-03-24 2009-04-28 Seagate Technology Llc Deterministic preventive recovery from a predicted failure in a distributed storage system
KR100557192B1 (en) * 2004-04-06 2006-03-03 삼성전자주식회사 Method for sending data in case of irregularity completion in data synchronization between server and client and system therefor
JP4476683B2 (en) * 2004-04-28 2010-06-09 株式会社日立製作所 Data processing system
US7493424B1 (en) 2004-04-30 2009-02-17 Netapp, Inc. Network storage system with shared software stack for LDMA and RDMA
US7225307B2 (en) * 2004-05-04 2007-05-29 International Business Machines Corporation Apparatus, system, and method for synchronizing an asynchronous mirror volume using a synchronous mirror volume
JP4392601B2 (en) 2004-05-07 2010-01-06 パナソニック株式会社 Data access device and recording medium
US8042163B1 (en) 2004-05-20 2011-10-18 Symatec Operating Corporation Secure storage access using third party capability tokens
US7533292B2 (en) 2004-07-15 2009-05-12 International Business Machines Corporation Management method for spare disk drives in a raid system
EP1829332A2 (en) 2004-12-15 2007-09-05 Exostar Corporation Enabling trust in a federated collaboration of networks
US7426623B2 (en) 2005-01-14 2008-09-16 Sandisk Il Ltd System and method for configuring flash memory partitions as super-units
US20060230245A1 (en) 2005-04-08 2006-10-12 Microsoft Corporation Data storage safety indicator and expander
US8200887B2 (en) 2007-03-29 2012-06-12 Violin Memory, Inc. Memory management system and method
US7689609B2 (en) 2005-04-25 2010-03-30 Netapp, Inc. Architecture for supporting sparse volumes
US7366825B2 (en) 2005-04-26 2008-04-29 Microsoft Corporation NAND flash memory management
JP4506594B2 (en) 2005-07-22 2010-07-21 日本電気株式会社 Redundant path control method
US7694082B2 (en) 2005-07-29 2010-04-06 International Business Machines Corporation Computer program and method for managing resources in a distributed storage system
US7617216B2 (en) 2005-09-07 2009-11-10 Emc Corporation Metadata offload for a file server cluster
ITVA20050061A1 (en) 2005-11-08 2007-05-09 St Microelectronics Srl METHOD OF MANAGEMENT OF A NON-VOLATILE MEMORY AND RELATIVE MEMORY DEVICE
US7831783B2 (en) 2005-12-22 2010-11-09 Honeywell International Inc. Effective wear-leveling and concurrent reclamation method for embedded linear flash file systems
US7716180B2 (en) * 2005-12-29 2010-05-11 Amazon Technologies, Inc. Distributed storage system with web services client interface
US7421552B2 (en) 2006-03-17 2008-09-02 Emc Corporation Techniques for managing data within a data storage system utilizing a flash-based memory vault
US7899780B1 (en) 2006-03-30 2011-03-01 Emc Corporation Methods and apparatus for structured partitioning of management information
US7702866B2 (en) * 2006-03-31 2010-04-20 International Business Machines Corporation Use of volume containers in replication and provisioning management
US20070294564A1 (en) 2006-04-27 2007-12-20 Tim Reddin High availability storage system
US8266472B2 (en) 2006-05-03 2012-09-11 Cisco Technology, Inc. Method and system to provide high availability of shared data
US9455955B2 (en) 2006-05-17 2016-09-27 Richard Fetik Customizable storage controller with integrated F+ storage firewall protection
US8966018B2 (en) * 2006-05-19 2015-02-24 Trapeze Networks, Inc. Automated network device configuration and network deployment
US7743239B2 (en) 2006-06-30 2010-06-22 Intel Corporation Accelerating integrity checks of code and data stored in non-volatile memory
US7627786B2 (en) 2006-09-26 2009-12-01 International Business Machines Corporation Tracking error events relating to data storage drives and/or media of automated data storage library subsystems
US8555021B1 (en) * 2006-09-29 2013-10-08 Emc Corporation Systems and methods for automating and tuning storage allocations
US8620970B2 (en) 2006-10-03 2013-12-31 Network Appliance, Inc. Methods and apparatus for changing versions of a filesystem
US7587435B2 (en) * 2006-11-10 2009-09-08 Sybase, Inc. Replication system with methodology for replicating database sequences
US7669029B1 (en) 2006-11-15 2010-02-23 Network Appliance, Inc. Load balancing a data storage system
US7710777B1 (en) 2006-12-20 2010-05-04 Marvell International Ltd. Semi-volatile NAND flash memory
US7640332B2 (en) 2006-12-27 2009-12-29 Hewlett-Packard Development Company, L.P. System and method for hot deployment/redeployment in grid computing environment
KR100923990B1 (en) 2007-02-13 2009-10-28 삼성전자주식회사 Computing system based on characteristcs of flash storage
US20080222111A1 (en) * 2007-03-07 2008-09-11 Oracle International Corporation Database system with dynamic database caching
US9632870B2 (en) 2007-03-29 2017-04-25 Violin Memory, Inc. Memory system with multiple striping of raid groups and method for performing the same
US7975115B2 (en) 2007-04-11 2011-07-05 Dot Hill Systems Corporation Method and apparatus for separating snapshot preserved and write data
US8706914B2 (en) 2007-04-23 2014-04-22 David D. Duchesneau Computing infrastructure
US7996599B2 (en) 2007-04-25 2011-08-09 Apple Inc. Command resequencing in memory operations
US7991942B2 (en) 2007-05-09 2011-08-02 Stmicroelectronics S.R.L. Memory block compaction method, circuit, and system in storage devices based on flash memories
US8996409B2 (en) * 2007-06-06 2015-03-31 Sony Computer Entertainment Inc. Management of online trading services using mediated communications
US7870360B2 (en) 2007-09-14 2011-01-11 International Business Machines Corporation Storage area network (SAN) forecasting in a heterogeneous environment
KR101433859B1 (en) 2007-10-12 2014-08-27 삼성전자주식회사 Nonvolatile memory system and method managing file data thereof
US8055735B2 (en) * 2007-10-30 2011-11-08 Hewlett-Packard Development Company, L.P. Method and system for forming a cluster of networked nodes
US8271700B1 (en) 2007-11-23 2012-09-18 Pmc-Sierra Us, Inc. Logical address direct memory access with multiple concurrent physical ports and internal switching
US7743191B1 (en) 2007-12-20 2010-06-22 Pmc-Sierra, Inc. On-chip shared memory based device architecture
JP4471007B2 (en) 2008-02-05 2010-06-02 ソニー株式会社 RECORDING DEVICE, RECORDING DEVICE CONTROL METHOD, RECORDING DEVICE CONTROL METHOD PROGRAM AND RECORDING DEVICE CONTROL METHOD PROGRAM RECORDING MEDIUM
US8150802B2 (en) * 2008-03-24 2012-04-03 Microsoft Corporation Accumulating star knowledge in replicated data protocol
US8949863B1 (en) 2008-04-30 2015-02-03 Netapp, Inc. Creating environmental snapshots of storage device failure events
US8093868B2 (en) 2008-09-04 2012-01-10 International Business Machines Corporation In situ verification of capacitive power support
US8086585B1 (en) 2008-09-30 2011-12-27 Emc Corporation Access control to block storage devices for a shared disk based file system
US9239767B2 (en) * 2008-12-22 2016-01-19 Rpx Clearinghouse Llc Selective database replication
US9473419B2 (en) 2008-12-22 2016-10-18 Ctera Networks, Ltd. Multi-tenant cloud storage system
US8078848B2 (en) * 2009-01-09 2011-12-13 Micron Technology, Inc. Memory controller having front end and back end channels for modifying commands
US8762642B2 (en) 2009-01-30 2014-06-24 Twinstrata Inc System and method for secure and reliable multi-cloud data replication
JP4844639B2 (en) 2009-02-19 2011-12-28 Tdk株式会社 MEMORY CONTROLLER, FLASH MEMORY SYSTEM HAVING MEMORY CONTROLLER, AND FLASH MEMORY CONTROL METHOD
US9134922B2 (en) 2009-03-12 2015-09-15 Vmware, Inc. System and method for allocating datastores for virtual machines
US8225057B1 (en) * 2009-03-24 2012-07-17 Netapp, Inc. Single-system configuration for backing-up and restoring a clustered storage system
KR101586047B1 (en) 2009-03-25 2016-01-18 삼성전자주식회사 Nonvolatile memory device and program methods for the same
US8805953B2 (en) 2009-04-03 2014-08-12 Microsoft Corporation Differential file and system restores from peers and the cloud
TWI408689B (en) 2009-04-14 2013-09-11 Jmicron Technology Corp Method for accessing storage apparatus and related control circuit
US8332678B1 (en) * 2009-06-02 2012-12-11 American Megatrends, Inc. Power save mode operation for continuous data protection
US8504797B2 (en) 2009-06-02 2013-08-06 Hitachi, Ltd. Method and apparatus for managing thin provisioning volume by using file storage system
JP4874368B2 (en) 2009-06-22 2012-02-15 株式会社日立製作所 Storage system management method and computer using flash memory
US7948798B1 (en) 2009-07-22 2011-05-24 Marvell International Ltd. Mixed multi-level cell and single level cell storage device
US8402242B2 (en) 2009-07-29 2013-03-19 International Business Machines Corporation Write-erase endurance lifetime of memory storage devices
US20110035540A1 (en) 2009-08-10 2011-02-10 Adtron, Inc. Flash blade system architecture and method
US8868957B2 (en) 2009-09-24 2014-10-21 Xyratex Technology Limited Auxiliary power supply, a method of providing power to a data storage system and a back-up power supply charging circuit
US8126987B2 (en) * 2009-11-16 2012-02-28 Sony Computer Entertainment Inc. Mediation of content-related services
TWI428917B (en) 2009-11-25 2014-03-01 Silicon Motion Inc Flash memory device, data storage system, and operation method of a data storage system
US8250324B2 (en) 2009-11-30 2012-08-21 International Business Machines Corporation Method to efficiently locate meta-data structures on a flash-based storage device
US8387136B2 (en) 2010-01-05 2013-02-26 Red Hat, Inc. Role-based access control utilizing token profiles
US8452932B2 (en) 2010-01-06 2013-05-28 Storsimple, Inc. System and method for efficiently creating off-site data volume back-ups
WO2011156746A2 (en) 2010-06-11 2011-12-15 California Institute Of Technology Systems and methods for rapid processing and storage of data
US20120023144A1 (en) 2010-07-21 2012-01-26 Seagate Technology Llc Managing Wear in Flash Memory
US8510270B2 (en) * 2010-07-27 2013-08-13 Oracle International Corporation MYSQL database heterogeneous log based replication
US20120054264A1 (en) 2010-08-31 2012-03-01 International Business Machines Corporation Techniques for Migrating Active I/O Connections with Migrating Servers and Clients
US8566546B1 (en) 2010-09-27 2013-10-22 Emc Corporation Techniques for enforcing capacity restrictions of an allocation policy
US8775868B2 (en) 2010-09-28 2014-07-08 Pure Storage, Inc. Adaptive RAID for an SSD environment
US8930620B2 (en) * 2010-11-12 2015-01-06 Symantec Corporation Host discovery and handling of ALUA preferences and state transitions
US8949502B2 (en) 2010-11-18 2015-02-03 Nimble Storage, Inc. PCIe NVRAM card based on NVDIMM
US8787367B2 (en) * 2010-11-30 2014-07-22 Ringcentral, Inc. User partitioning in a communication system
US8812860B1 (en) 2010-12-03 2014-08-19 Symantec Corporation Systems and methods for protecting data stored on removable storage devices by requiring external user authentication
US9208071B2 (en) 2010-12-13 2015-12-08 SanDisk Technologies, Inc. Apparatus, system, and method for accessing memory
US8560792B2 (en) * 2010-12-16 2013-10-15 International Business Machines Corporation Synchronous extent migration protocol for paired storage
US8589723B2 (en) 2010-12-22 2013-11-19 Intel Corporation Method and apparatus to provide a high availability solid state drive
US8572031B2 (en) * 2010-12-23 2013-10-29 Mongodb, Inc. Method and apparatus for maintaining replica sets
US9805108B2 (en) * 2010-12-23 2017-10-31 Mongodb, Inc. Large distributed database clustering systems and methods
US10614098B2 (en) * 2010-12-23 2020-04-07 Mongodb, Inc. System and method for determining consensus within a distributed database
US8465332B2 (en) 2011-01-13 2013-06-18 Tyco Electronics Corporation Contact assembly for an electrical connector
US8578442B1 (en) 2011-03-11 2013-11-05 Symantec Corporation Enforcing consistent enterprise and cloud security profiles
US8694647B2 (en) * 2011-03-18 2014-04-08 Microsoft Corporation Read-only operations processing in a paxos replication system
US8738882B2 (en) 2011-06-03 2014-05-27 Apple Inc. Pre-organization of data
US8751463B1 (en) 2011-06-30 2014-06-10 Emc Corporation Capacity forecasting for a deduplicating storage system
US8769622B2 (en) 2011-06-30 2014-07-01 International Business Machines Corporation Authentication and authorization methods for cloud computing security
WO2013016013A1 (en) 2011-07-27 2013-01-31 Cleversafe, Inc. Generating dispersed storage network event records
US8931041B1 (en) 2011-07-29 2015-01-06 Symantec Corporation Method and system for visibility and control over access transactions between clouds using resource authorization messages
US20130036272A1 (en) 2011-08-02 2013-02-07 Microsoft Corporation Storage engine node for cloud-based storage
US8527544B1 (en) 2011-08-11 2013-09-03 Pure Storage Inc. Garbage collection in a storage system
US9525900B2 (en) 2011-09-15 2016-12-20 Google Inc. Video management system
JP2013077278A (en) 2011-09-16 2013-04-25 Toshiba Corp Memory device
US9197623B2 (en) 2011-09-29 2015-11-24 Oracle International Corporation Multiple resource servers interacting with single OAuth server
CA2852639A1 (en) 2011-10-24 2013-05-02 Schneider Electric Industries Sas System and method for managing industrial processes
US8595546B2 (en) * 2011-10-28 2013-11-26 Zettaset, Inc. Split brain resistant failover in high availability clusters
WO2013071087A1 (en) 2011-11-09 2013-05-16 Unisys Corporation Single sign on for cloud
US20130311434A1 (en) 2011-11-17 2013-11-21 Marc T. Jones Method, apparatus and system for data deduplication
US9330245B2 (en) 2011-12-01 2016-05-03 Dashlane SAS Cloud-based data backup and sync with secure local storage of access keys
US8738813B1 (en) * 2011-12-27 2014-05-27 Emc Corporation Method and apparatus for round trip synchronous replication using SCSI reads
US20130219164A1 (en) 2011-12-29 2013-08-22 Imation Corp. Cloud-based hardware security modules
US8800009B1 (en) 2011-12-30 2014-08-05 Google Inc. Virtual machine service access
US8613066B1 (en) 2011-12-30 2013-12-17 Amazon Technologies, Inc. Techniques for user authentication
US9069827B1 (en) * 2012-01-17 2015-06-30 Amazon Technologies, Inc. System and method for adjusting membership of a data replication group
US9423983B2 (en) 2012-01-19 2016-08-23 Syncsort Incorporated Intelligent storage controller
US9116812B2 (en) 2012-01-27 2015-08-25 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for a de-duplication cache
JP2013161235A (en) 2012-02-03 2013-08-19 Fujitsu Ltd Storage device, method for controlling storage device and control program for storage device
US8875234B2 (en) * 2012-09-13 2014-10-28 PivotCloud, Inc. Operator provisioning of a trustworthy workspace to a subscriber
US8681992B2 (en) * 2012-02-13 2014-03-25 Alephcloud Systems, Inc. Monitoring and controlling access to electronic content
US9146705B2 (en) * 2012-04-09 2015-09-29 Microsoft Technology, LLC Split brain protection in computer clusters
US8984350B1 (en) * 2012-04-16 2015-03-17 Google Inc. Replication method and apparatus in a distributed system
US9218406B2 (en) * 2012-04-26 2015-12-22 Connected Data, Inc. System and method for managing user data in a plurality of storage appliances over a wide area network for collaboration, protection, publication, or sharing
US10474584B2 (en) 2012-04-30 2019-11-12 Hewlett Packard Enterprise Development Lp Storing cache metadata separately from integrated circuit containing cache controller
US8832372B2 (en) 2012-05-24 2014-09-09 Netapp, Inc. Network storage systems having clustered raids for improved redundancy and load balancing
US10341435B2 (en) 2012-06-12 2019-07-02 Centurylink Intellectual Property Llc High performance cloud storage
US9130927B2 (en) 2012-07-02 2015-09-08 Sk Planet Co., Ltd. Single certificate service system and operational method thereof
US20140040200A1 (en) * 2012-08-03 2014-02-06 Sap Ag Mediation objects for complex replications
JP6024296B2 (en) * 2012-08-30 2016-11-16 富士通株式会社 Information processing apparatus, copy control program, and copy control method
US9047181B2 (en) 2012-09-07 2015-06-02 Splunk Inc. Visualization of data from clusters
US8769651B2 (en) 2012-09-19 2014-07-01 Secureauth Corporation Mobile multifactor single-sign-on authentication
WO2014051552A1 (en) 2012-09-25 2014-04-03 Empire Technology Development Llc Limiting data usage of a device connected to the internet via tethering
US9245144B2 (en) 2012-09-27 2016-01-26 Intel Corporation Secure data container for web applications
US8990914B2 (en) 2012-09-28 2015-03-24 Intel Corporation Device, method, and system for augmented reality security
US8990905B1 (en) 2012-09-28 2015-03-24 Emc Corporation Protected resource access control utilizing intermediate values of a hash chain
US8850546B1 (en) 2012-09-30 2014-09-30 Emc Corporation Privacy-preserving user attribute release and session management
US20140101434A1 (en) 2012-10-04 2014-04-10 Msi Security, Ltd. Cloud-based file distribution and management using real identity authentication
WO2014077918A1 (en) * 2012-11-19 2014-05-22 Board Of Regents, The University Of Texas System Robustness in a scalable block storage system
US9209973B2 (en) 2012-11-20 2015-12-08 Google Inc. Delegate authorization in cloud-based storage system
US8997197B2 (en) 2012-12-12 2015-03-31 Citrix Systems, Inc. Encryption-based data access management
US9317223B2 (en) 2012-12-17 2016-04-19 International Business Machines Corporation Method and apparatus for automated migration of data among storage centers
US9075529B2 (en) 2013-01-04 2015-07-07 International Business Machines Corporation Cloud based data migration and replication
US9589008B2 (en) 2013-01-10 2017-03-07 Pure Storage, Inc. Deduplication of volume regions
US9052917B2 (en) 2013-01-14 2015-06-09 Lenovo (Singapore) Pte. Ltd. Data storage for remote environment
US9483657B2 (en) 2013-01-14 2016-11-01 Accenture Global Services Limited Secure online distributed data storage services
US9009526B2 (en) 2013-01-24 2015-04-14 Hewlett-Packard Development Company, L.P. Rebuilding drive data
US20140229654A1 (en) 2013-02-08 2014-08-14 Seagate Technology Llc Garbage Collection with Demotion of Valid Data to a Lower Memory Tier
US20140230017A1 (en) 2013-02-12 2014-08-14 Appsense Limited Programmable security token
US8902532B2 (en) 2013-03-20 2014-12-02 International Business Machines Corporation Write avoidance areas around bad blocks on a hard disk drive platter
US9307011B2 (en) * 2013-04-01 2016-04-05 Netapp, Inc. Synchronous mirroring of NVLog to multiple destinations (architecture level)
US10102144B2 (en) * 2013-04-16 2018-10-16 Sandisk Technologies Llc Systems, methods and interfaces for data virtualization
GB2513377A (en) 2013-04-25 2014-10-29 Ibm Controlling data storage in an array of storage devices
US9432215B2 (en) * 2013-05-21 2016-08-30 Nicira, Inc. Hierarchical network managers
US9317382B2 (en) 2013-05-21 2016-04-19 International Business Machines Corporation Storage device with error recovery indication
US10038726B2 (en) 2013-06-12 2018-07-31 Visa International Service Association Data sensitivity based authentication and authorization
US9124569B2 (en) 2013-06-14 2015-09-01 Microsoft Technology Licensing, Llc User authentication in a cloud environment
US8898346B1 (en) 2013-06-20 2014-11-25 Qlogic, Corporation Method and system for configuring network devices
US8984602B1 (en) 2013-06-28 2015-03-17 Emc Corporation Protected resource access control utilizing credentials based on message authentication codes and hash chain values
US9454423B2 (en) 2013-09-11 2016-09-27 Dell Products, Lp SAN performance analysis tool
CA2931098A1 (en) 2013-09-27 2015-04-02 Intel Corporation Determination of a suitable target for an initiator by a control plane processor
US9442662B2 (en) 2013-10-18 2016-09-13 Sandisk Technologies Llc Device and method for managing die groups
US9519580B2 (en) 2013-11-11 2016-12-13 Globalfoundries Inc. Load balancing logical units in an active/passive storage system
US9619311B2 (en) 2013-11-26 2017-04-11 International Business Machines Corporation Error identification and handling in storage area networks
US9280678B2 (en) 2013-12-02 2016-03-08 Fortinet, Inc. Secure cloud storage distribution and aggregation
US9529546B2 (en) 2014-01-08 2016-12-27 Netapp, Inc. Global in-line extent-based deduplication
US9395922B2 (en) 2014-01-10 2016-07-19 Hitachi, Ltd. Information system and I/O processing method
US10169440B2 (en) * 2014-01-27 2019-01-01 International Business Machines Corporation Synchronous data replication in a content management system
WO2015118865A1 (en) * 2014-02-05 2015-08-13 日本電気株式会社 Information processing device, information processing system, and data access method
US20150244795A1 (en) * 2014-02-21 2015-08-27 Solidfire, Inc. Data syncing in a distributed system
US9361194B2 (en) * 2014-03-20 2016-06-07 Netapp Inc. Mirror vote synchronization
US9514012B2 (en) * 2014-04-03 2016-12-06 International Business Machines Corporation Tertiary storage unit management in bidirectional data copying
US20150293708A1 (en) * 2014-04-11 2015-10-15 Netapp, Inc. Connectivity-Aware Storage Controller Load Balancing
JP6199508B2 (en) * 2014-04-21 2017-09-20 株式会社日立製作所 Information storage system
US9250823B1 (en) 2014-05-20 2016-02-02 Emc Corporation Online replacement of physical storage in a virtual storage system
US10210171B2 (en) * 2014-06-18 2019-02-19 Microsoft Technology Licensing, Llc Scalable eventual consistency system using logical document journaling
US9424145B2 (en) * 2014-06-25 2016-08-23 Sybase, Inc. Ensuring the same completion status for transactions after recovery in a synchronous replication environment
ES2642218T3 (en) 2014-06-27 2017-11-15 Huawei Technologies Co., Ltd. Controller, flash memory device and procedure for writing data to flash memory device
US9516167B2 (en) 2014-07-24 2016-12-06 Genesys Telecommunications Laboratories, Inc. Media channel management apparatus for network communications sessions
US9807164B2 (en) * 2014-07-25 2017-10-31 Facebook, Inc. Halo based file system replication
US10204010B2 (en) 2014-10-03 2019-02-12 Commvault Systems, Inc. Intelligent protection of off-line mail data
US9720752B2 (en) * 2014-10-20 2017-08-01 Netapp, Inc. Techniques for performing resynchronization on a clustered system
US20160182542A1 (en) 2014-12-18 2016-06-23 Stuart Staniford Denial of service and other resource exhaustion defense and mitigation using transition tracking
US10545987B2 (en) * 2014-12-19 2020-01-28 Pure Storage, Inc. Replication to the cloud
US9652334B2 (en) * 2015-01-29 2017-05-16 Microsoft Technology Licensing, Llc Increasing coordination service reliability
US9521200B1 (en) 2015-05-26 2016-12-13 Pure Storage, Inc. Locally providing cloud storage array services
US9716755B2 (en) 2015-05-26 2017-07-25 Pure Storage, Inc. Providing cloud storage array services by a local storage array in a data center
US20160350009A1 (en) 2015-05-29 2016-12-01 Pure Storage, Inc. Buffering data to be written to an array of non-volatile storage devices
US9300660B1 (en) 2015-05-29 2016-03-29 Pure Storage, Inc. Providing authorization and authentication in a cloud for a user of a storage array
US10067969B2 (en) * 2015-05-29 2018-09-04 Nuodb, Inc. Table partitioning within distributed database systems
US9444822B1 (en) 2015-05-29 2016-09-13 Pure Storage, Inc. Storage array access control from cloud-based user authorization and authentication
US10021170B2 (en) 2015-05-29 2018-07-10 Pure Storage, Inc. Managing a storage array using client-side services
US9772794B2 (en) * 2015-06-05 2017-09-26 University Of Florida Research Foundation, Incorporated Method and apparatus for big data cloud storage resource management
US10623486B2 (en) * 2015-06-15 2020-04-14 Redis Labs Ltd. Methods, systems, and media for providing distributed database access during a network split
US9626116B1 (en) * 2015-06-22 2017-04-18 EMC IP Holding Company LLC Distributed service level objective management in active-active environments
US9678667B2 (en) * 2015-10-30 2017-06-13 Netapp, Inc. Techniques for maintaining device coordination in a storage cluster system
US9916214B2 (en) * 2015-11-17 2018-03-13 International Business Machines Corporation Preventing split-brain scenario in a high-availability cluster
US20170149883A1 (en) * 2015-11-20 2017-05-25 Datadirect Networks, Inc. Data replication in a data storage system having a disjointed network
US9917896B2 (en) * 2015-11-27 2018-03-13 Netapp Inc. Synchronous replication for storage area network protocol storage
US10216534B1 (en) * 2015-12-14 2019-02-26 Amazon Technologies, Inc. Moving storage volumes for improved performance
EP3319258B1 (en) * 2015-12-23 2019-11-27 Huawei Technologies Co., Ltd. Service take-over method and storage device, and service take-over apparatus
US10496320B2 (en) * 2015-12-28 2019-12-03 Netapp Inc. Synchronous replication
US10230809B2 (en) * 2016-02-29 2019-03-12 Intel Corporation Managing replica caching in a distributed storage system
KR102527992B1 (en) 2016-03-14 2023-05-03 Samsung Electronics Co., Ltd. Data storage device and data processing system having the same
US10404835B2 (en) * 2016-03-17 2019-09-03 Google Llc Hybrid client-server data provision
US9507532B1 (en) 2016-05-20 2016-11-29 Pure Storage, Inc. Migrating data in a storage array that includes a plurality of storage devices and a plurality of write buffer devices
US10567406B2 (en) 2016-08-16 2020-02-18 International Business Machines Corporation Cloud computing environment activity monitoring
US10459657B2 (en) 2016-09-16 2019-10-29 Hewlett Packard Enterprise Development Lp Storage system with read cache-on-write buffer
US10089205B2 (en) * 2016-09-30 2018-10-02 International Business Machines Corporation Disaster recovery practice mode for application virtualization infrastructure
US10719244B2 (en) * 2016-11-18 2020-07-21 International Business Machines Corporation Multi-mode data replication for data loss risk reduction
US10454754B1 (en) * 2016-12-16 2019-10-22 Amazon Technologies, Inc. Hybrid cluster recovery techniques
US11076509B2 (en) 2017-01-24 2021-07-27 The Research Foundation for the State University of New York Control systems and prediction methods for IT cooling performance in containment
US10761946B2 (en) * 2017-02-10 2020-09-01 Sap Se Transaction commit protocol with recoverable commit identifier
US10503427B2 (en) * 2017-03-10 2019-12-10 Pure Storage, Inc. Synchronously replicating datasets and other managed objects to cloud-based storage systems
US10521344B1 (en) * 2017-03-10 2019-12-31 Pure Storage, Inc. Servicing input/output (‘I/O’) operations directed to a dataset that is synchronized across a plurality of storage systems
US10454810B1 (en) 2017-03-10 2019-10-22 Pure Storage, Inc. Managing host definitions across a plurality of storage systems
US10846137B2 (en) 2018-01-12 2020-11-24 Robin Systems, Inc. Dynamic adjustment of application resources in a distributed computing system
US10917471B1 (en) * 2018-03-15 2021-02-09 Pure Storage, Inc. Active membership in a cloud-based storage system
US11128578B2 (en) * 2018-05-21 2021-09-21 Pure Storage, Inc. Switching between mediator services for a storage system
US11176173B2 (en) * 2018-07-10 2021-11-16 Comptel Oy Arrangement for enriching data stream in a communications network and related method
US11106810B2 (en) 2018-07-30 2021-08-31 EMC IP Holding Company LLC Multi-tenant deduplication with non-trusted storage system
US10877683B2 (en) 2019-04-09 2020-12-29 International Business Machines Corporation Tiered storage optimization and migration

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6105078A (en) * 1997-12-18 2000-08-15 International Business Machines Corporation Extended remote copying system for reporting both active and idle conditions wherein the idle condition indicates no updates to the system for a predetermined time period
US20050050286A1 (en) * 2003-08-28 2005-03-03 International Business Machines Corporation Apparatus and method for asynchronous logical mirroring
US10613789B1 (en) * 2014-03-31 2020-04-07 EMC IP Holding Company LLC Analytics engine using consistent replication on distributed sites
US20180267723A1 (en) * 2017-03-20 2018-09-20 International Business Machines Corporation Processing a recall request for data migrated from a primary storage system having data mirrored to a secondary storage system

Also Published As

Publication number Publication date
US20230036992A1 (en) 2023-02-02
US11500745B1 (en) 2022-11-15
US20230333947A1 (en) 2023-10-19
US20220318083A1 (en) 2022-10-06
US20220156165A1 (en) 2022-05-19
US10521344B1 (en) 2019-12-31
US10365982B1 (en) 2019-07-30
US20230289267A1 (en) 2023-09-14
US20230289268A1 (en) 2023-09-14
US10585733B1 (en) 2020-03-10
US11347606B2 (en) 2022-05-31
US10671408B1 (en) 2020-06-02
US11687423B2 (en) 2023-06-27
US10558537B1 (en) 2020-02-11
US11210219B1 (en) 2021-12-28
US11954002B1 (en) 2024-04-09
US10613779B1 (en) 2020-04-07
US11687500B1 (en) 2023-06-27
US11645173B2 (en) 2023-05-09
US10884993B1 (en) 2021-01-05
US11237927B1 (en) 2022-02-01
US20220283916A1 (en) 2022-09-08
US10680932B1 (en) 2020-06-09
US11422730B1 (en) 2022-08-23
US11379285B1 (en) 2022-07-05
US20200264960A1 (en) 2020-08-20
US10990490B1 (en) 2021-04-27
US11789831B2 (en) 2023-10-17
US11698844B2 (en) 2023-07-11
US20230342271A1 (en) 2023-10-26

Similar Documents

Publication Publication Date Title
US11210219B1 (en) Synchronously replicating a dataset across a plurality of storage systems
US11714718B2 (en) Performing partial redundant array of independent disks (RAID) stripe parity calculations
US20220283935A1 (en) Storage system buffering
US10534677B2 (en) Providing high availability for applications executing on a storage system
US11803492B2 (en) System resource management using time-independent scheduling
US11126381B1 (en) Lightweight copy
US10740294B2 (en) Garbage collection of data blocks in a storage system with direct-mapped storage devices
US10467107B1 (en) Maintaining metadata resiliency among storage device failures
US10141050B1 (en) Page writes for triple level cell flash memory
US10552090B2 (en) Solid state drives with multiple types of addressable memory
US11579790B1 (en) Servicing input/output (‘I/O’) operations during data migration
US11789780B1 (en) Preserving quality-of-service (‘QOS’) to storage system workloads
US20230138462A1 (en) Migrating Similar Data To A Single Data Reduction Pool
US20210263654A1 (en) Mapping LUNs in a storage memory
US11592991B2 (en) Converting raid data between persistent storage types
US10671494B1 (en) Consistent selection of replicated datasets during storage system recovery
US20230088163A1 (en) Similarity data for reduced data usage
US10509581B1 (en) Maintaining write consistency in a multi-threaded storage system
US10776202B1 (en) Drive, blade, or data shard decommission via RAID geometry shrinkage

Legal Events

Date Code Title Description
AS Assignment
Owner name: PURE STORAGE, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GRUNWALD, DAVID;HODGSON, STEVEN;KARR, RONALD;AND OTHERS;SIGNING DATES FROM 20170802 TO 20170808;REEL/FRAME:058242/0276
STPP Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STPP Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION