WO2016033691A1 - Application centric distributed storage system and method - Google Patents

Application centric distributed storage system and method

Info

Publication number
WO2016033691A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage
volume
data
storage devices
network
Prior art date
Application number
PCT/CA2015/050847
Other languages
French (fr)
Inventor
Rayan Zachariassen
Steven Lamb
Original Assignee
Iofabric Inc.
Priority date
Filing date
Publication date
Application filed by Iofabric Inc. filed Critical Iofabric Inc.
Priority to CA2960150A priority Critical patent/CA2960150C/en
Priority to US15/506,334 priority patent/US20170251058A1/en
Priority to EP15837248.2A priority patent/EP3195129A4/en
Priority to CN201580050089.5A priority patent/CN106716385A/en
Publication of WO2016033691A1 publication Critical patent/WO2016033691A1/en

Classifications

    • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network, for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G06F 3/0617 - Interfaces specially adapted for storage systems; improving the reliability of storage systems in relation to availability
    • G06F 3/0607 - Improving or facilitating administration, e.g. storage management, by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G06F 3/061 - Improving I/O performance
    • G06F 3/0647 - Migration mechanisms (horizontal data movement between storage devices or systems)
    • G06F 3/0665 - Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G06F 3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • H04L 41/20 - Network management software packages
    • H04L 41/5009 - Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H04L 49/901 - Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H04L 67/51 - Discovery or management of network services, e.g. service location protocol [SLP] or web services

Definitions

  • This invention relates generally to data storage. More specifically it relates to a system and method of partitioning and storing data on multiple storage resources in a way that enhances latency and protection parameters.
  • SDS - Software defined storage
  • QoS - Quality of Service
  • if a storage or application administrator wants to alter the current QoS setting of an application or virtual machine, it needs to be migrated to another volume.
  • the volume cannot adjust to the needs of the VM.
  • SDS tends to entirely replace the software services that are available on the storage system.
  • SDS, as it currently exists, means that an organization is buying the feature twice: once when it is "included" with the hardware, and again with the SDS solution.
  • the justifications for this "double-buy" are that the IT professional can now manage storage through a single pane of glass and that future storage hardware can be purchased without these services. In reality it is hard to find a storage system without some form of data services.
  • a software defined storage network comprising one or more storage nodes, each storage node including a computer processor and one or more data storage devices; the one or more storage devices including a computer readable medium storing data partitioned into one or more volumes; wherein the one or more volumes are visible to at least a subset of the storage nodes and to non-storage nodes on the network; and a computer system in communication with the network having a computer processor executing instructions stored on a computer readable medium to define a plurality of actors providing a storage service; wherein each actor defines a virtual representation of at least one of the volumes and acts as an exclusive or non-exclusive controller for all or part of each of the at least one data storage devices; wherein each of the plurality of actors places data for each volume on the storage devices according to at least one policy; the at least one policy including maintaining a maximum latency target for each volume.
  • the at least one policy includes one of optimizing for a latency target, input/output operations per second and/or bandwidth.
  • the software service determines latency performance characteristics of each storage device, based in part on the experience of one or more users of a volume accessing each of the storage devices.
  • the storage service implements the placement of data for each volume on the storage devices based on the latency target for each volume and on the determined latency characteristics for each storage device available to the volume.
  • multiple storage services are amalgamated into a single storage service.
  • the storage service permits replicated data to be placed on storage devices violating the maximum latency target determined for each volume, wherein a copy of the replicated data is available to maintain the latency target.
  • the software service provides a name of each volume consistent among each node where the volume is visible to applications.
  • placement information required to access or store data on each of the storage devices is in part available to the storage service and in part determined through a discovery protocol.
  • the software service provides the capability to determine whether the placement information determined through the discovery protocol is accurate, and upon determining the placement information is inaccurate, reinitializing the discovery protocol or otherwise determining correct placement information.
  • a method for storing computer data on a storage network comprising one or more storage nodes, each node including a computer processor and one or more storage device and each storage device including a computer readable medium storing data partitioned into one or more volumes visible to storage and non-storage nodes on the network, the method including implementing via computer executable instructions that when executed by a processor define a plurality of actors providing a storage service; wherein each actor defines a virtual representation of at least one of the volumes and acts as an exclusive or non-exclusive controller for each of the at least one data storage devices; placing, via at least one of the plurality of actors, data for each volume on the storage devices according to at least one policy.
  • the at least one policy includes one of optimizing for a latency target, input/output operations per second and/or bandwidth.
  • the method further comprises determining performance characteristics of each storage device based in part on the experience of one or more users of a volume accessing each of the storage devices.
  • the method further comprises storing data for each volume on the storage devices based on the latency target for each volume and on the determined latency characteristics for each storage device available to the volume.
  • the method further comprises violating the maximum latency target determined for each volume when storing replicated data, provided a copy of the replicated data is available to maintain the latency target.
  • the software service provides a name of each volume consistent among each node where the volume is visible to applications.
  • placement information required to access or store data on each of the storage devices is in part available to the storage service and in part determined through a discovery protocol.
  • the software service provides the capability to determine whether the placement information determined through the discovery protocol is accurate, and upon determining the placement information is inaccurate, reinitializing the discovery protocol or otherwise determining correct placement information.
  • a storage system comprising multiple storage devices on one or more network attached storage nodes where data is partitioned into one or more volumes, with each volume visible [to applications] on a subset of the storage nodes and on non-storage nodes on the network, where data for each volume is placed on storage devices in order to maintain a maximum latency target determined for each volume.
  • the latency characteristics of each storage device that can participate in a volume is determined (measured or derived) in a way that is correlated with the experience of one or more users of the volume.
  • a storage service operates for each visible volume on a network attached node and the storage service decides, or is told, how to place data for a volume on the available storage devices based on the latency target declared for the volume and the known or declared or calculated latency characteristics of each storage device available to the volume.
  • multiple storage services are amalgamated into a single storage service making decisions for multiple visible volumes.
  • replicated data can be placed on storage devices that violate the maximum latency target determined for each volume because other copies of the replicated data are available to maintain the latency target.
  • the name of each visible volume is consistent among the nodes where the volume is visible to applications.
  • the storage devices may themselves be independent storage systems.
  • the placement information required to access or store data is only partially available to a storage service and that information must be determined through a discovery protocol.
  • the placement information determined through a discovery protocol may not be correct at the subsequent time of use, and mechanisms are provided to realize this and to use correct placement information.
  • a storage system comprising multiple storage devices on one or more network attached storage nodes, where data is partitioned into one or more volumes, where each storage device is represented by an actor that provides a storage service for one or more volumes that can have their data stored on [i.e. are eligible to use] said storage device.
  • multiple storage services are amalgamated into a single storage service acting for multiple storage devices.
  • the name of each volume is consistent among the nodes where the volume is visible to applications.
  • each storage device may itself be an independent storage system.
  • Figs. 1 and 2 are schematic system diagrams of the application centric storage system according to one embodiment of the invention.
  • the embodiments of the systems and methods described herein may be implemented in hardware or software, or a combination of both. These embodiments may be implemented in computer programs executing on programmable computers, each computer including at least one processor, a data storage system (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one communication interface.
  • Program code is applied to input data to perform the functions described herein and to generate output information.
  • the output information is applied to one or more output devices, in known fashion.
  • Each program may be implemented in a high level procedural or object oriented programming or scripting language, or both, to communicate with a computer system. However, alternatively the programs may be implemented in assembly or machine language, if desired. The language may be a compiled or interpreted language. Each such computer program may be stored on a storage media or a device (e.g., ROM, magnetic disk, optical disc), readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein.
  • Embodiments of the system may also be considered to be implemented as a non-transitory computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
  • the systems and methods of the described embodiments are capable of being distributed in a computer program product including a physical, non-transitory computer readable medium that bears computer usable instructions for one or more processors.
  • the medium may be provided in various forms, including one or more diskettes, compact disks, tapes, chips, magnetic and electronic storage media, and the like.
  • Non-transitory computer-readable media comprise all computer-readable media, with the exception being a transitory, propagating signal.
  • the term non-transitory is not intended to exclude computer readable media such as a volatile memory or RAM, where the data stored thereon is only temporarily stored.
  • the computer usable instructions may also be in various forms, including compiled and non-compiled code.
  • X and/or Y is intended to mean X or Y or both, for example.
  • X, Y, and/or Z is intended to mean X or Y or Z or any combination thereof.
  • Storage Node - a storage node includes any server or computer system providing access to one or more storage devices in a network.
  • Non-Storage Node - a non-storage node is a network server having as its primary function a task other than data storage.
  • Application Centric - application centric is defined in the context of this description as the ability to make data storage decisions and carry out data storage functions based on the requirements of applications accessing the data, or to otherwise optimize data storage functions from the applications perspective.
  • Actor - an actor is a virtual or software representation of a volume stored on one or more storage devices, which also acts as a software-implemented controller for the storage device. It may or may not be stored or implemented on the storage device itself.
  • the application centric distributed storage system manages data storage on a large number of storage devices. It may be used to amalgamate an existing system of multiple storage devices into one storage service, and can absorb additional storage devices added at a later time.
  • the system automatically responds to user settings to adjust and continuously monitor data storage to satisfy the user's computing and/or data storage requirements. These requirements broadly define a policy for which the storage is optimized in various embodiments of the invention.
  • the policy could be optimized for a target latency, or IOPS (input/output operations per second), or bandwidth.
  • IOPS - input/output operations per second
  • Minimum and maximum limitations are used for device selection and throttling. For example, the system could throttle at max IOPS and prevent placing data on a storage device that cannot deliver a minimum IOPS.
  • the distributed model could be similar to the hyper-converged and web-scale architectures that the compute tier employs. This could be done by deploying agents within physical servers or virtual machines that can scan all the available storage resources. Storage administration should include assigning capacity, performance level and data protection requirements. From this policy information for each volume the invention automatically places data in storage devices expected to provide the required service, and monitors and moves or copies data as necessary to maintain the required service.
  • the architecture of having such monitoring and control decisions made by each actor in their local context allows the invention to scale without architectural limits. This architecture allows storage policies to scale across storage systems, in a shared-nothing model.
  • the application centric distributed storage system is a more efficient storage system that may automatically manage and standardize multiple storage services with varying hardware configurations, in such a way as to meet a user's defined performance targets.
  • the application centric distributed storage system may improve data storage automation by having storage automatically adjust to conditions occurring in the environment (e.g., allocate more flash storage to a data set seeing an increase in read and/or write activity, or increase data protection based on activity - something accessed continuously may be backed up continuously). It may also deliver orchestration: network and storage infrastructure are preprogrammed to deliver an intended service level.
  • FIG. 1 is a schematic diagram of a specific application centric distributed storage system 100 for storing data on a set of distributed storage devices comprising one or more storage devices 102, a computer network 104, storage nodes 106, computer system 108, actors 110.
  • Fig. 2 is a generalized version of Fig. 1 showing a plurality of the aforementioned elements.
  • the system in Fig. 2 is scalable and may include as many of each element as is practically and economically feasible.
  • actors 110 are virtual representations of a volume 112 which present a virtualized volume to a specific application and act as controllers by managing part of each of the storage devices where the underlying volume 112 is stored.
  • Actors 110 could be executed from computer system 108.
  • Computer system 108 may generally be a network server through which user computers access the network.
  • the system 100 uses a distributed metadata model and decentralized decision making, in that each volume 112 is represented by an actor 110 that understands which storage devices 102 participate in the volume 112, and communicates with other actors 110 for those volumes, and makes independent queries and decisions about the state of other actors (110) and the data they are responsible for.
  • computer system 108 (or a plurality of computer systems represented by system 108) contains a set of actors 110, where each individual actor is a virtual representation of a volume 112. These actors are in communication with each other such that each is aware of other actors (and by extension, other storage devices) used for particular volumes of data.
  • Storage device 102 may be any hardware device capable of storing data including hard drives, flash drives, solid state drives, storage class memory and the like. Storage device 102 may also be a cloud-based storage device or any other storage service visible to a particular storage node.
  • System 100 may contain a combination of different types of storage devices 102. Each storage device 102 may have unique technical specifications including memory capacity, read/write speed, lifespan, etc. Each storage device 102 may have unique known latency characteristics, or said latency characteristics may be determined. Additional storage devices 102 may be added to the system 100 at any time and the system 100 may maintain latency targets.
  • Communication network 104 may be substantially any public or private network, wired or wireless, and may be substantially comprised of one or more networks that may be able to facilitate communication between themselves and between the various parts of system 100.
  • Storage node 106 may be any electronic device attached to the communication network 104 capable of receiving or transmitting data.
  • Storage node 106 may be a standard server having at least one storage device behind it.
  • storage node 106 is a physical or virtual Linux server.
  • Computer user system 108 may be a combination of one or more computers running software applications that require accessing stored digital data. Any computer may have a number of physical and logical components such as processors, memory, input/output interfaces, network connections, etc. System 108 may include a central computer that may control the operation of the system 100 through a dashboard interface.
  • One or more computers of user system 108 may run the storage service software.
  • User system 108 may comprise one or more human operators, such as an IT employee, capable of using software to adjust desired storage system requirements as needed. Operators (administrators) may define QoS policies for individual applications or groups of applications through the dashboard. QoS policies may include performance (IOPS, latency, bandwidth), capacity, and data protection (e.g. replication, snapshots) levels.
  • Actor 110 may be a software module, in part representing a storage device 102.
  • the actor 110 may keep track of which volumes the associated storage device 102 participates in. Actor 110 may communicate with other actors for associated volumes. Actor 110 may make queries and decisions about the state of other associated actors 110 and the data they are responsible for.
  • the actor 110 may determine how to place data for a volume on the storage devices.
  • a volume of data as represented by, and known to the actors is a virtualized volume, which may physically exist in one or more of the individual storage devices 102. This virtualization of the volume definitions permits the actors to manipulate where data is physically stored while maintaining the volume definitions at the application level, thus resulting in the application-centric data storage system. Applications see consistent definitions and mappings of volumes, even where data itself may be moved or manipulated between different specific hardware storage devices.
  • the plurality of actors 110 acting together form a storage service, whereby each actor defines a virtual representation within the storage service of its respective volume and acts as a controller for that data storage device.
  • controller is used to refer to the function of the actors managing part or all of each of the storage devices in which the volume they represent has an interest.
  • the software service determines performance characteristics of each storage device based in part on the experience of one or more users of a volume accessing each of the storage devices. This could be accomplished by characterizing idle performance of storage devices and/or by real-time measurements of the storage device performance from the perspective of an application.
  • the actors 110 may be understood as providing the functionality of a volume manager.
  • An access point is where the storage service is exposed.
  • a traditional block device volume might be exposed simultaneously on three nodes, so there would be three actors running for that volume, all in communication with each other.
  • Communication between the actors could be implemented using TCP sessions with a known protocol.
  • the actors all have to talk to each other to ensure consistency of allocations and data migrations/movements.
  • the actors, both internally within a volume and externally between volumes, compete with each other for storage resources.
  • the actors individually manage QoS on behalf of their application (i.e. the application talking to the volume through a local access point) while communicating amongst each other within these confines; this creates the architecture and opportunity to scale the system up, because the complexity does not grow with system size but grows, for each volume, with the number of storage devices that participate in the volume.
  • the storage service implements the placement of data for each volume on the storage devices based on the performance target for each volume and on the determined performance characteristics for each storage device available to the volume.
  • the storage service permits replicated data to be placed on storage devices violating the maximum latency target determined for each volume, provided a copy of the replicated data is available to maintain the latency target. This allows the storage service to deemphasize data replication applications or back-up instructions from other applications so as to optimize latency targets for applications using the data for normal operations.
  • the behavior of the entire system 100 is therefore the aggregated behavior of a number of actors 110 making independent decisions on placement of data based on where the data is accessed from, the nature of the access (reads or writes), the performance policy, and each actor's 110 understanding of the state of its correspondent actors 110.
  • the information used by any actor 110 to make a placement or retrieval decision may not be correct at the time of the decision or its implementation, and the invention is designed to assume this and self-correct.
  • the actors are in constant communication with each other and implement failure handling mechanisms to ensure consistency. In its simplest implementation, if an actor drops out, its data is considered lost. However, it is also contemplated that the data of an actor that has dropped out may be resynchronized.
  • the use of actors 110 results in storage virtualization that is responsive to real-time parameters and characteristics of the physical storage devices in the system, all the while requiring no adaptation by applications accessing the data.
  • Applications view the virtualized storage system as virtual volumes indistinguishable from physical volumes, even though the actual data storage could be spread across multiple storage devices as described above.
  • the system 100 software may have multiple automated processes and abilities: it may place active data on high-performance media for fast access and stale data on inexpensive capacity media; it may generate alerts if QoS levels are violated and automatically make adjustments to attain the permitted levels, where adjustments generally consist of moving data to a storage device that complies with QoS requirements or, in the case of data protection, copying the data; and it may partition data into one or more volumes (named collections of data) and determine the location(s) where each volume may be placed on one or more of the storage devices 102, the determination being made using calculated or preset performance targets (of the volume) and known performance characteristics of each storage device 102.
  • each volume may be placed on storage devices in such a way as to maintain a maximum performance target determined for each volume.
  • each volume may have a name or identifier, such that the name of each visible volume is consistent among the nodes where the volume is visible to applications.
  • the system may use a discovery protocol to determine the data placement information; without such a discovery protocol, the placement information is only partially available to a storage service.
  • the software service provides the capability to determine whether the placement information determined through the discovery protocol is accurate, and upon determining the placement information is inaccurate, reinitializing the discovery protocol or otherwise determining correct placement information. The system may also detect the addition of new storage devices 102 and automatically use them, possibly subject to policy constraints, for existing and new volumes, which may result in volume data being moved to the new storage devices.
  • the system 100 includes a data protection mechanism (nominally replication) that is enforced on every write of data, but because placement decisions are based on fulfilling a performance policy the placement may be asymmetrical in that only one high performance location is required to fulfill a high performance read request, and multiple high performance locations are required to fulfill a high performance write request with full protection (positive write acknowledgements from remote nodes 106) on the data.
  • Performance settings may include placing active data on performance media near compute and stale data on appropriate capacity media.
  • QoS settings may include minimum/maximum, target, and burst values for IOPS, latency, and bandwidth, as well as data protection and data placement policies (real-time setting and enforcement of latency, bandwidth, and performance over various workloads); an illustrative sketch of such a policy structure follows this list.
  • Capacity management may include thick provisioning and elastic storage without a fixed capacity.
  • Embodiments of the invention as herein described provide a deeper granularity than prior art volume definitions or LUN.
  • the solution makes decisions about volume storage definitions based on QoS parameters. QoS-driven data movement decisions are made at an extent size granularity which can be quite small, and the effect of data movement is to change the storage device(s) data is physically placed on, not to move the data to a different volume.
  • Tiers are not limited to flash and hard disks. For example, DRAM could be accessed as another tier of storage that can be allocated to these various types of QoS policies, allowing for even greater storage performance prioritization.
  • QoS is also not limited to performance.
  • Another QoS parameter could be set for data protection levels.
  • a QoS setting could require that data be asynchronously copied to a second, independent storage system creating a real-time backup.
  • a QoS setting could require a synchronous copy of data be made to a second system.
  • Another data protection capability is limiting the storage devices participating in a volume to a number or to a set that has particular relationships to the sets of storage devices used for other volumes, in order to limit the total effect of particular storage devices or computers with storage devices failing.
  • in a distributed hash table based storage system, because all volumes keep data on all nodes, one more failure than the system is designed for will almost certainly destroy data on all volumes in the system, whereas in the invention, even without special policies in this regard, the data destroyed is only that which certain volumes keep on the failed device.
  • the sophistication of this mechanism can be improved over time by coordination between actors that have choices in which storage devices to use for a volume.
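
As a concrete illustration of the QoS settings described above (minimum, maximum, target, and burst values for IOPS, latency, and bandwidth, together with capacity and data protection levels), a per-application policy could be modelled roughly as follows. This is a sketch only; the class and field names (Limit, QoSPolicy, replicas, snapshots) are assumptions and are not part of the disclosure.

    # Hedged sketch of a per-application QoS policy as set through a dashboard.
    # The patent does not prescribe a data model; all names here are invented.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Limit:
        minimum: Optional[float] = None   # used for device selection
        target: Optional[float] = None    # level the system tries to maintain
        maximum: Optional[float] = None   # used for throttling
        burst: Optional[float] = None     # short-term allowance above the target

    @dataclass
    class QoSPolicy:
        iops: Limit
        latency_ms: Limit
        bandwidth_mbps: Limit
        capacity_gb: float
        replicas: int = 2                 # data protection level (e.g. replication)
        snapshots: bool = False

    policy = QoSPolicy(
        iops=Limit(minimum=500, target=5_000, maximum=20_000, burst=40_000),
        latency_ms=Limit(maximum=2.0),
        bandwidth_mbps=Limit(target=200),
        capacity_gb=512,
        replicas=3,
        snapshots=True,
    )

Such a record would be attached to a volume or application and consulted by the actors when selecting devices, throttling I/O, and deciding how many copies of each extent to keep.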

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A software defined storage network comprising one or more storage nodes, each storage node including a computer processor and one or more data storage devices; the one or more storage devices including a computer readable medium storing data partitioned into one or more volumes; wherein the one or more volumes are visible to at least a subset of the storage nodes and to non-storage nodes on the network; and a computer system in communication with the network having a computer processor executing instructions stored on a computer readable medium to define a plurality of actors providing a storage service; wherein each actor defines a virtual representation of at least one of the volumes and acts as a controller for each of the at least one data storage devices; wherein each of the plurality of actors places data for each volume on the storage devices according to at least one policy.

Description

APPLICATION CENTRIC DISTRIBUTED STORAGE SYSTEM AND METHOD RELATED APPLICATIONS
[0001] This application claims priority from US Provisional Patent Application No.
62/045,927, filed September 4, 2014, the contents of which are incorporated herein by reference.
TECHNICAL FIELD
[0002] This invention relates generally to data storage. More specifically it relates to a system and method of partitioning and storing data on multiple storage resources in a way that enhances latency and protection parameters.
BACKGROUND
[0003] Software defined storage (SDS) is a concept of computer data storage where the storage of digital data is managed by software rather than the storage hardware itself. Many operations previously managed by each independent hardware device are virtualized into software. Multiple storage hardware elements can be managed through software, with a central interface.
[0004] Storage and computational demands and workloads change continuously. Data requirements are constantly increasing. Current SDS systems are limited in several ways, exemplified by a lack of a sub-volume level of understanding of the data for most of the software features, so capabilities like snapshots and data tiering have to occur at the volume level not at the application or virtual machine level. This results from their adherence to legacy storage architecture limitations and volumes.
[0005] Quality of Service (QoS), if it is available, is also typically limited to a specific volume. This means that if a storage or application administrator wants to alter the current QoS setting of an application or virtual machine, it needs to be migrated to another volume. The volume cannot adjust to the needs of the VM.
[0006] SDS tends to entirely replace the software services that are available on the storage system. In other words, SDS, as it currently exists, means that an organization is buying the feature twice: once when it is "included" with the hardware, and again with the SDS solution. The justifications for this "double-buy" are that the IT professional can now manage storage through a single pane of glass and that future storage hardware can be purchased without these services. In reality it is hard to find a storage system without some form of data services.
[0007] Finally, most SDS architectures are dependent on a single- or dual-controller architecture. This limits the system's ability to scale and limits availability. These are critical features for the SDS design since it proposes to replace all data services. If these nodes fail all services stop.
[0008] There is accordingly a need in the art for improved software defined storage methods and systems.
SUMMARY OF THE INVENTION
[0009] In an embodiment of the invention, there is provided a software defined storage network comprising one or more storage nodes, each storage node including a computer processor and one or more data storage devices; the one or more storage devices including a computer readable medium storing data partitioned into one or more volumes; wherein the one or more volumes are visible to at least a subset of the storage nodes and to non-storage nodes on the network; and a computer system in communication with the network having a computer processor executing instructions stored on a computer readable medium to define a plurality of actors providing a storage service; wherein each actor defines a virtual representation of at least one of the volumes and acts as an exclusive or non-exclusive controller for all or part of each of the at least one data storage devices; wherein each of the plurality of actors places data for each volume on the storage devices according to at least one policy; the at least one policy including maintaining a maximum latency target for each volume.
[0010] In one aspect of the invention, the at least one policy includes one of optimizing for a latency target, input/output operations per second and/or bandwidth.
[0011] In one aspect of the invention, the software service determines latency
performance characteristics of each storage device based in part on the experience of one or more users of a volume accessing each of the storage devices.
[0012] In another aspect of the invention, the storage service implements the placement of data for each volume on the storage devices based on the latency target for each volume and on the determined latency characteristics for each storage device available to the volume.
[0013] In another aspect of the invention, multiple storage services are amalgamated into a single storage service.
[0014] In another aspect of the invention, the storage service permits replicated data to be placed on storage devices violating the maximum latency target determined for each volume, wherein a copy of the replicated data is available to maintain the latency target.
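As a rough illustration of this aspect, the sketch below chooses devices for one extent so that at least one copy satisfies the volume's maximum latency target, while additional replicas are allowed to land on slower devices. The names (Device, Volume, place_extent) and the selection heuristic are hypothetical; the patent does not specify a placement algorithm.

    # Illustrative only: replicas may violate the latency target as long as one
    # copy that meets the target exists. All names here are invented.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Device:
        name: str
        observed_latency_ms: float        # measured from the application's perspective

    @dataclass
    class Volume:
        name: str
        max_latency_ms: float             # latency target declared for the volume
        replicas: int                     # number of copies to keep per extent

    def place_extent(volume: Volume, devices: List[Device]) -> List[Device]:
        fast = sorted((d for d in devices if d.observed_latency_ms <= volume.max_latency_ms),
                      key=lambda d: d.observed_latency_ms)
        if not fast:
            raise RuntimeError(f"no device can meet {volume.max_latency_ms} ms for {volume.name}")
        slow = sorted((d for d in devices if d.observed_latency_ms > volume.max_latency_ms),
                      key=lambda d: d.observed_latency_ms)
        chosen = [fast[0]]                            # the copy that maintains the target
        for d in fast[1:] + slow:                     # remaining replicas may be slower
            if len(chosen) >= volume.replicas:
                break
            chosen.append(d)
        return chosen

    devices = [Device("nvme-0", 0.2), Device("ssd-1", 1.5), Device("hdd-2", 9.0)]
    print([d.name for d in place_extent(Volume("vol-a", 2.0, 2), devices)])  # ['nvme-0', 'ssd-1']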
[0015] In another aspect of the invention, the software service provides a name of each volume consistent among each node where the volume is visible to applications.
[0016] In another aspect of the invention, placement information required to access or store data on each of the storage devices is in part available to the storage service and in part determined through a discovery protocol.
[0017] In another aspect of the invention, the software service provides the capability to determine whether the placement information determined through the discovery protocol is accurate, and upon determining the placement information is inaccurate, reinitializing the discovery protocol or otherwise determining correct placement information.
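One way such a check could work is sketched below: placement information is cached, validated at the time of use, and rediscovered when it is missing, expired, or found to be inaccurate. The PlacementCache class, its time-to-live, and the caller-supplied discover and validate callables are assumptions made for illustration; the discovery protocol itself is not defined in the disclosure.

    # Minimal sketch: stale placement information is detected at use time and
    # the discovery protocol is reinitialized. All names are hypothetical.
    import time
    from typing import Callable, Dict, Optional

    class PlacementCache:
        def __init__(self, discover: Callable[[str], Dict[str, str]], ttl_s: float = 30.0):
            self._discover = discover               # e.g. asks peer actors for an extent -> device map
            self._ttl_s = ttl_s
            self._map: Dict[str, str] = {}
            self._stamp = 0.0

        def _refresh(self, volume: str) -> None:
            self._map = self._discover(volume)      # reinitialize the discovery protocol
            self._stamp = time.monotonic()

        def locate(self, volume: str, extent: str,
                   validate: Callable[[str], bool]) -> Optional[str]:
            if time.monotonic() - self._stamp > self._ttl_s or extent not in self._map:
                self._refresh(volume)
            device = self._map.get(extent)
            if device is not None and not validate(device):
                self._refresh(volume)               # information was inaccurate at time of use
                device = self._map.get(extent)
            return device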
[0018] In another embodiment of the invention, there is provided a method for storing computer data on a storage network, the storage network comprising one or more storage nodes, each node including a computer processor and one or more storage device and each storage device including a computer readable medium storing data partitioned into one or more volumes visible to storage and non-storage nodes on the network, the method including implementing via computer executable instructions that when executed by a processor define a plurality of actors providing a storage service; wherein each actor defines a virtual representation of at least one of the volumes and acts as an exclusive or non-exclusive controller for each of the at least one data storage devices; placing, via at least one of the plurality of actors, data for each volume on the storage devices according to at least one policy.
[0019] In one aspect of this method, the at least one policy includes one of optimizing for a latency target, input/output operations per second and/or bandwidth.
[0020] In another aspect of the invention, the method further comprises determining performance characteristics of each storage device based in part on the experience of one or more users of a volume accessing each of the storage devices.
[0021] In another aspect of the invention, the method further comprises storing data for each volume on the storage devices based on the latency target for each volume and on the determined latency characteristics for each storage device available to the volume.
[0022] In another aspect of the invention, the method further comprises violating the maximum latency target determined for each volume when storing replicated data, provided a copy of the replicated data is available to maintain the latency target.
[0023] In another aspect of the invention, the software service provides a name of each volume consistent among each node where the volume is visible to applications.
[0024] In another aspect of the invention, placement information required to access or store data on each of the storage devices is in part available to the storage service and in part determined through a discovery protocol.
[0025] In another aspect of the invention the software service provides the capability to determine whether the placement information determined through the discovery protocol is accurate, and upon determining the placement information is inaccurate, reinitializing the discovery protocol or otherwise determining correct placement information.
[0026] In another embodiment of the invention, there is provided a storage system comprising multiple storage devices on one or more network attached storage nodes where data is partitioned into one or more volumes, with each volume visible [to applications] on a subset of the storage nodes and on non-storage nodes on the network, where data for each volume is placed on storage devices in order to maintain a maximum latency target determined for each volume.
[0027] In one aspect of the invention, the latency characteristics of each storage device that can participate in a volume is determined (measured or derived) in a way that is correlated with the experience of one or more users of the volume.
[0028] In another aspect of the invention, a storage service operates for each visible volume on a network attached node and the storage service decides, or is told, how to place data for a volume on the available storage devices based on the latency target declared for the volume and the known or declared or calculated latency characteristics of each storage device available to the volume.
[0029] In another aspect of the invention, multiple storage services are amalgamated into a single storage service making decisions for multiple visible volumes.
[0030] In another aspect of the invention, replicated data can be placed on storage devices that violate the maximum latency target determined for each volume because other copies of the replicated data are available to maintain the latency target.
[0031] In another aspect of the invention, the name of each visible volume is consistent among the nodes where the volume is visible to applications.
[0032] In another aspect of the invention, the storage devices may themselves be independent storage systems.
[0033] In another aspect of the invention, the placement information required to access or store data is only partially available to a storage service and that information must be determined through a discovery protocol.
[0034] In another aspect of the invention, the placement information determined through a discovery protocol may not be correct at the subsequent time of use, and mechanisms are provided to realize this and to use correct placement information.
[0035] According to another embodiment of the invention, there is provided a storage system comprising multiple storage devices on one or more network attached storage nodes, where data is partitioned into one or more volumes, where each storage device is represented by an actor that provides a storage service for one or more volumes that can have their data stored on [i.e. are eligible to use] said storage device.
[0036] In one aspect of the second embodiment, multiple storage services are amalgamated into a single storage service acting for multiple storage devices.
[0037] In another aspect of the second embodiment, the name of each volume is consistent among the nodes where the volume is visible to applications.
[0038] In another aspect of the second embodiment, each storage device may itself be an independent storage system.
[0039] Aspects described with respect to the method are equally applicable to those aspects described with respect to the system, and vice versa.
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] The invention is illustrated in the figures of the accompanying drawings which are meant to be exemplary and not limiting, in which like references are intended to refer to like or corresponding parts, and in which:
Figs. 1 and 2 are schematic system diagrams of the application centric storage system according to one embodiment of the invention.
DETAILED DESCRIPTION
[0041] For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements or steps. In addition, numerous specific details are set forth in order to provide a thorough understanding of the exemplary embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments generally described herein.
[0042] Furthermore, this description is not to be considered as limiting the scope of the embodiments described herein in any way, but rather as merely describing the implementation of various embodiments as described.
[0043] The embodiments of the systems and methods described herein may be implemented in hardware or software, or a combination of both. These embodiments may be implemented in computer programs executing on programmable computers, each computer including at least one processor, a data storage system (including volatile memory or non-volatile memory or other data storage elements or a combination thereof), and at least one
communication interface. Program code is applied to input data to perform the functions described herein and to generate output information. The output information is applied to one or more output devices, in known fashion.
[0044] Each program may be implemented in a high level procedural or object oriented programming or scripting language, or both, to communicate with a computer system. However, alternatively the programs may be implemented in assembly or machine language, if desired. The language may be a compiled or interpreted language. Each such computer program may be stored on a storage media or a device (e.g., ROM, magnetic disk, optical disc), readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage media or device is read by the computer to perform the procedures described herein. Embodiments of the system may also be considered to be implemented as a non-transitory computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
[0045] Furthermore, the systems and methods of the described embodiments are capable of being distributed in a computer program product including a physical, non-transitory computer readable medium that bears computer usable instructions for one or more processors. The medium may be provided in various forms, including one or more diskettes, compact disks, tapes, chips, magnetic and electronic storage media, and the like. Non-transitory computer-readable media comprise all computer-readable media, with the exception being a transitory, propagating signal. The term non-transitory is not intended to exclude computer readable media such as a volatile memory or RAM, where the data stored thereon is only temporarily stored. The computer usable instructions may also be in various forms, including compiled and non-compiled code.
[0046] It should also be noted that, as used herein, the wording and/or is intended to represent an inclusive-or. That is, X and/or Y is intended to mean X or Y or both, for example. As a further example, X, Y, and/or Z is intended to mean X or Y or Z or any combination thereof.
[0047] Definition of Key Terms
[0048] While most terminology used in this description has its plain and common meaning as used in the art of network and/or storage computer systems, certain key terms are defined below for added clarity and understanding of the invention.
[0049] Storage Node - a storage node includes any server or computer system providing access to one or more storage devices in a network.
[0050] Non-Storage Node - a non-storage node is a network server having as its primary function a task other than data storage.
[0051] Application Centric - application centric is defined in the context of this description as the ability to make data storage decisions and carry out data storage functions based on the requirements of applications accessing the data, or to otherwise optimize data storage functions from the applications perspective.
[0052] Actor - an actor is a virtual or software representation of a volume stored on one or more storage devices, which also acts as a software-implemented controller for the storage device. It may or may not be stored or implemented on the storage device itself.
[0053] Preferred Embodiments
[0054] The application centric distributed storage system according to the invention manages data storage on a large number of storage devices. It may be used to amalgamate an existing system of multiple storage devices into one storage service, and can absorb additional storage devices added at a later time. The system automatically responds to user settings to adjust and continuously monitor data storage to satisfy the user's computing and/or data storage requirements. These requirements broadly define a policy for which the storage is optimized in various embodiments of the invention. The policy could be optimized for a target latency, or IOPS (input/output operations per second), or bandwidth. Minimum and maximum limitations are used for device selection and throttling. For example, the system could throttle at max IOPS and prevent placing data on a storage device that cannot deliver a minimum IOPS.
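A skeletal illustration of how minimum and maximum limits might drive device selection and throttling is given below; the field names and the simple fixed-interval throttle are assumptions, not the patent's mechanism.

    # Sketch only: filter out devices below the minimum IOPS, and pace requests
    # so a volume does not exceed the maximum IOPS. Names are hypothetical.
    import time

    def eligible_devices(devices, min_iops):
        """Device selection: exclude devices that cannot deliver the minimum IOPS."""
        return [d for d in devices if d["iops"] >= min_iops]

    class Throttle:
        """Throttling: cap a volume's request rate at the policy's maximum IOPS."""
        def __init__(self, max_iops):
            self.interval = 1.0 / max_iops
            self.next_ok = time.monotonic()

        def admit(self):
            now = time.monotonic()
            if now < self.next_ok:
                time.sleep(self.next_ok - now)
            self.next_ok = max(now, self.next_ok) + self.interval

    devices = [{"name": "ssd", "iops": 50_000}, {"name": "hdd", "iops": 200}]
    print(eligible_devices(devices, min_iops=1_000))   # only the SSD qualifies
    throttle = Throttle(max_iops=20_000)
    throttle.admit()                                   # called once per I/O request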
[0055] The distributed model could be similar to the hyper-converged and web-scale architectures that the compute tier employs. This could be done by deploying agents within physical servers or virtual machines that can scan all the available storage resources. Storage administration should include assigning capacity, performance level and data protection requirements. From this policy information for each volume the invention automatically places data in storage devices expected to provide the required service, and monitors and moves or copies data as necessary to maintain the required service. The architecture of having such monitoring and control decisions made by each actor in their local context allows the invention to scale without architectural limits. This architecture allows storage policies to scale across storage systems, in a shared-nothing model.
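The monitor-and-move behaviour described above might reduce, in skeletal form, to a pass like the one below. The callables observe_latency_ms, candidates_for, and migrate are placeholders for machinery the patent leaves unspecified.

    # Sketch of one monitoring pass; a real agent would run this continuously.
    def monitor_once(volumes, observe_latency_ms, candidates_for, migrate):
        for vol in volumes:                              # vol: {"name": ..., "max_latency_ms": ...}
            observed = observe_latency_ms(vol["name"])   # measured from the application's side
            if observed <= vol["max_latency_ms"]:
                continue                                 # policy satisfied, nothing to do
            for dev in candidates_for(vol["name"]):      # devices expected to meet the target
                if dev["latency_ms"] <= vol["max_latency_ms"]:
                    migrate(vol["name"], dev["name"])    # move (or copy) data to restore service
                    break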
[0056] The application centric distributed storage system is a more efficient storage system that may automatically manage and standardize multiple storage services with varying hardware configurations, in such a way as to meet a user's defined performance targets.
[0057] The application centric distributed storage system may improve data storage automation by having storage automatically adjust to conditions occurring in the environment (e.g., allocating more flash storage to a data set seeing an increase in read and/or write activity, or increasing data protection based on activity: something accessed continuously may be backed up continuously). It may also deliver orchestration: network and storage infrastructure are preprogrammed to deliver an intended service level. QoS (Quality of Service) is the mechanism that drives this; it allows the user to set service levels, and then adjusts to maintain those service levels as the surrounding environment changes.

[0058] Fig. 1 is a schematic diagram of a specific application centric distributed storage system 100 for storing data on a set of distributed storage devices, comprising one or more storage devices 102, a computer network 104, storage nodes 106, a computer system 108, and actors 110. Fig. 2 is a generalized version of Fig. 1 showing a plurality of the aforementioned elements. The system in Fig. 2 is scalable and may include as many of each element as is practically and economically feasible. In implementation, actors 110 are virtual representations of a volume 112 which present a virtualized volume to a specific application and act as controllers by managing part of each of the storage devices where the underlying volume 112 is stored. Actors 110 could be executed from computer system 108. Computer system 108 may generally be a network server through which user computers access the network.
[0059] The system 100 uses a distributed metadata model and decentralized decision making, in that each volume 112 is represented by an actor 110 that understands which storage devices 102 participate in the volume 112, communicates with other actors 110 for those volumes, and makes independent queries and decisions about the state of other actors 110 and the data they are responsible for. Specifically, computer system 108 (or a plurality of computer systems represented by system 108) contains a set of actors 110, where each individual actor is a virtual representation of a volume 112. These actors are in communication with each other such that each is aware of other actors (and by extension, other storage devices) used for particular volumes of data.
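The distributed metadata model can be pictured with a minimal sketch in which each actor records only the devices participating in its own volume and the peer actors for that volume; the class and method names below are illustrative assumptions, not the system's actual interfaces.

```python
# Minimal sketch of the decentralized metadata model: each actor knows only
# the devices participating in its own volume and its peers for that volume.
class Actor:
    def __init__(self, volume_id, device_ids):
        self.volume_id = volume_id
        self.device_ids = set(device_ids)   # devices this volume spans locally
        self.peers = set()                  # other actors for the same volume

    def join(self, other):
        """Make two actors for the same volume aware of each other."""
        self.peers.add(other)
        other.peers.add(self)

    def known_devices(self):
        """Union of devices known locally and reported by peers (may be stale)."""
        devices = set(self.device_ids)
        for peer in self.peers:
            devices |= peer.device_ids
        return devices

a1 = Actor("vol-7", {"ssd0", "hdd3"})
a2 = Actor("vol-7", {"ssd1"})
a1.join(a2)
print(a1.known_devices())   # all three devices are now known to a1
```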
[0060] Storage device 102 may be any hardware device capable of storing data, including hard drives, flash drives, solid state drives, storage class memory and the like. Storage device 102 may also be a cloud-based storage device or any other storage service visible to a particular storage node. System 100 may contain a combination of different types of storage devices 102. Each storage device 102 may have unique technical specifications including memory capacity, read/write speed, lifespan, etc. Each storage device 102 may have unique known latency characteristics, or said latency characteristics may be determined. Additional storage devices 102 may be added to the system 100 at any time and the system 100 may maintain latency targets.

[0061] Communication network 104 may be substantially any public or private network, wired or wireless, and may comprise one or more networks able to facilitate communication among themselves and among the various parts of system 100.
[0062] Storage node 106 may be any electronic device attached to the communication network 104 capable of receiving or transmitting data. Storage node 106 may be a standard server having at least one storage device behind it. In an exemplary embodiment, storage node 106 is a physical or virtual Linux server.
[0063] Computer user system 108 may be a combination of one or more computers running software applications that require accessing stored digital data. Any computer may have a number of physical and logical components such as processors, memory, input/output interfaces, network connections, etc. System 108 may include a central computer that may control the operation of the system 100 through a dashboard interface.
[0064] One or more computers of user system 108 may run the storage service software; this is where the dashboard runs and where the settings are determined.
[0065] User system 108 may comprise one or more human operators, such as an IT employee, capable of using software to adjust desired storage system requirements as needed. Operators (administrators) may define QoS policies for individual applications or groups of applications through the dashboard. QoS policies may include performance (IOPS, latency, bandwidth), capacity, and data protection (e.g. replication, snapshots) levels.
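A hedged example of how such administrator-defined QoS policies might be expressed per application is sketched below; the policy keys and values are assumptions chosen only to mirror the performance, capacity and data protection levels mentioned above.

```python
# Sketch of administrator-facing QoS policies per application group.
# Keys and values are illustrative assumptions, not a defined schema.
qos_policies = {
    "billing-db": {
        "performance": {"min_iops": 5_000, "max_latency_ms": 2, "bandwidth_mbps": 400},
        "capacity_gb": 2_048,
        "protection": {"replicas": 2, "snapshots_per_day": 24},
    },
    "archive": {
        "performance": {"min_iops": 100, "max_latency_ms": 50, "bandwidth_mbps": 50},
        "capacity_gb": 50_000,
        "protection": {"replicas": 1, "snapshots_per_day": 1},
    },
}

def policy_for(application):
    """Look up the QoS policy an actor would enforce for an application."""
    return qos_policies[application]

print(policy_for("billing-db")["protection"]["replicas"])   # 2
```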
[0066] Actor 110 may be a software module, in part representing a storage device 102. The actor 110 may keep track of which volumes the associated storage device 102 participates in. Actor 110 may communicate with other actors for associated volumes. Actor 110 may make queries and decisions about the state of other associated actors 110 and the data they are responsible for.
[0067] The actor 110 may determine how to place data for a volume on storage devices 102 based on latency targets for the volume and latency characteristics of the storage device. This determination is made at the actor level and occurs without specific user applications being aware of the presence or actions of the actors 110.

[0068] Each actor 110 also understands the policies for each volume and promotes or demotes data among the actors 110 for a particular volume, including itself, based on their latency distance from itself and how that relates to the latency policy for the volume. For greater emphasis, a volume of data as represented by, and known to, the actors is a virtualized volume, which may physically exist in one or more of the individual storage devices 102. This virtualization of the volume definitions permits the actors to manipulate where data is physically stored while maintaining the volume definitions at the application level, thus resulting in the application-centric data storage system. Applications see consistent definitions and mappings of volumes, even where the data itself may be moved or manipulated between different specific hardware storage devices.
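One way to picture the promote/demote decision is the sketch below, in which an actor compares the observed latency of the device holding each extent against the volume's latency policy; the data structures and thresholds are illustrative assumptions.

```python
# Sketch of a promote/demote decision: active data on media slower than the
# latency policy is promoted, stale data on fast media is demoted.
def classify_extents(extents, latency_policy_ms, device_latency_ms):
    """Return (promote, demote) lists of extents relative to the latency policy."""
    promote, demote = [], []
    for extent in extents:
        observed = device_latency_ms[extent["device"]]
        if extent["hot"] and observed > latency_policy_ms:
            promote.append(extent)          # active data sitting on too-slow media
        elif not extent["hot"] and observed < latency_policy_ms:
            demote.append(extent)           # stale data occupying fast media
    return promote, demote

extents = [{"id": 1, "device": "hdd3", "hot": True},
           {"id": 2, "device": "ssd0", "hot": False}]
latency = {"hdd3": 9.0, "ssd0": 0.2}
print(classify_extents(extents, latency_policy_ms=1.0, device_latency_ms=latency))
```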
[0069] The plurality of actors 110 acting together form a storage service, whereby each actor defines a virtual representation within the storage service of its respective volume and acts as a controller for that data storage device. The term controller is used to refer to the function of the actors managing part of each of the storage devices in which the volume they represent has an interest. The software service determines performance characteristics of each storage device based in part on the experience of one or more users of a volume accessing each of the storage devices. This could be accomplished by characterizing the idle performance of storage devices and/or by real-time measurements of storage device performance from the perspective of an application.
[0070] The actors 110 may be understood as providing the functionality of a volume manager. In this context, there is an actor, or a volume manager, running at each access point for the volume. An access point is where the storage service is exposed. For example, a traditional block device volume might be exposed simultaneously on three nodes, so there would be three actors running for that volume, all in communication with each other. Communication between the actors could be implemented using TCP sessions with a known protocol. The actors all have to talk to each other to ensure consistency of allocations and data migrations/movements. In addition, the actors, both internally within a volume and externally between volumes, compete with each other for storage resources. The actors individually manage QoS on behalf of their application (i.e., an application talking to the volume through a local access point), but communicating amongst each other within these confines creates the architecture and the opportunity to scale the system up, because the complexity does not grow with system size; it grows, for each volume, with the number of storage devices that participate in that volume.
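The description states only that actors could communicate over TCP sessions with a known protocol; purely as an illustration of the kind of exchange that keeps allocations consistent, the sketch below encodes a hypothetical allocation claim with a generation number so that a peer can ignore stale information. The message format and fields are assumptions, not the protocol referred to above.

```python
# Sketch of a hypothetical peer message for keeping allocations consistent.
# The wire format and fields are assumptions for illustration only.
import json

def encode_allocation_claim(volume_id, extent_id, device_id, generation):
    """Serialize a claim that an extent is being placed on a device."""
    return json.dumps({
        "type": "allocation_claim",
        "volume": volume_id,
        "extent": extent_id,
        "device": device_id,
        "generation": generation,   # lets peers detect stale information
    }).encode("utf-8")

def apply_claim(local_state, raw):
    """Apply a peer's claim only if it is newer than what we already know."""
    msg = json.loads(raw.decode("utf-8"))
    key = (msg["volume"], msg["extent"])
    if msg["generation"] > local_state.get(key, (-1, None))[0]:
        local_state[key] = (msg["generation"], msg["device"])
    return local_state

state = {}
raw = encode_allocation_claim("vol-7", 42, "ssd0", generation=3)
print(apply_claim(state, raw))   # {('vol-7', 42): (3, 'ssd0')}
```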
[0071] The storage service implements the placement of data for each volume on the storage devices based on the performance target for each volume and on the determined performance characteristics for each storage device available to the volume.
[0072] The storage service permits replicated data to be placed on storage devices violating the maximum latency target determined for each volume, provided a copy of the replicated data is available to maintain the latency target. This allows the storage service to deemphasize data replication applications or back-up instructions from other applications so as to optimize latency targets for applications using the data for normal operations.
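A minimal sketch of this placement rule, assuming a simple per-device latency table, is shown below; the function name and data layout are assumptions. A layout is treated as acceptable when at least one replica sits on a device that meets the volume's maximum latency target, even if other replicas land on slower media.

```python
# Sketch of the replica-placement check: at least one copy must be able to
# serve reads within the volume's maximum latency target.
def placement_satisfies_policy(replica_devices, device_latency_ms, max_latency_ms):
    """True if at least one replica can serve reads within the latency target."""
    return any(device_latency_ms[d] <= max_latency_ms for d in replica_devices)

latencies = {"ssd0": 0.2, "hdd3": 9.0, "cloud1": 40.0}
print(placement_satisfies_policy(["ssd0", "cloud1"], latencies, max_latency_ms=1.0))  # True
print(placement_satisfies_policy(["hdd3", "cloud1"], latencies, max_latency_ms=1.0))  # False
```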
[0073] The behavior of the entire system 100 is therefore the aggregated behavior of a number of actors 110 making independent decisions on placement of data based on where the data is accessed from, the nature of the access (reads or writes), the performance policy, and each actor's 110 understanding of the state of its correspondent actors 110. The information used by any actor 110 to make a placement or retrieval decision may not be correct at the time of the decision or its implementation, and the invention is designed to assume this and self-correct. The actors are in constant communication with each other and implement failure handling mechanisms to ensure consistency. In its simplest implementation, if an actor drops out, its data is considered lost. However, it is also contemplated that the data of an actor that has dropped out may be resynchronized.
[0074] The implementation of actors 110 as herein described results in storage virtualization that is responsive to real-time parameters and characteristics of the physical storage devices in the system, all the while requiring no adaptation by applications accessing the data. Applications view the virtualized storage system as virtual volumes indistinguishable from physical volumes, even though the actual data storage could be spread across multiple storage devices as described above.
[0075] The system 100 software may have multiple automated processes and abilities:

- place active data on high-performance media for fast access, and stale data onto inexpensive capacity media;
- generate alerts if QoS levels are violated, and automatically make adjustments to attain the permitted levels. Adjustments generally consist of moving data to a storage device that complies with QoS requirements; alternatively, in the case of data protection, adjustments may include copying the data;
- partition data into one or more volumes (named collections of data) and determine the location(s) where each volume may be placed on one or more of the storage devices 102. The determination may be made using calculated, preset performance targets (of the volume) and known performance characteristics of each storage device 102. Volumes may be placed on storage devices in such a way as to maintain a maximum performance target determined for each volume. Each volume may have a name or identifier, such that each visible volume is consistent among the nodes where the volume is visible to applications;
- use a discovery protocol to determine the data placement information; without such a discovery protocol the placement information is only partially available to a storage service. The software service provides the capability to determine whether the placement information determined through the discovery protocol is accurate and, upon determining the placement information is inaccurate, to reinitialize the discovery protocol or otherwise determine correct placement information;
- detect the addition of new storage devices 102 and automatically use them, possibly subject to policy constraints, for existing and new volumes, which may result in volume data being moved to the new storage devices;
- manage replicated data, including placing replicated data on storage devices 102 in a way that violates performance targets.

[0076] The system 100 includes a data protection mechanism (nominally replication) that is enforced on every write of data, but because placement decisions are based on fulfilling a performance policy, the placement may be asymmetrical in that only one high-performance location is required to fulfill a high-performance read request, while multiple high-performance locations are required to fulfill a high-performance write request with full protection (positive write acknowledgements from remote nodes 106) on the data.
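To illustrate the asymmetric write path described in the preceding paragraph, the sketch below requires a positive acknowledgement from every remote replica before a fully protected write is considered complete; the send callback is a stand-in for real network I/O and is an assumption, not the patented mechanism.

```python
# Sketch of a fully protected write: every remote replica must acknowledge.
def replicate_write(extent, replicas, send):
    """Send the extent to each replica; return True only if all acknowledge."""
    acks = [send(node, extent) for node in replicas]
    return all(acks)

def fake_send(node, extent):
    # Stand-in for a network write; a real system would await remote acks.
    return node != "down-node"

print(replicate_write(b"data", ["node-a", "node-b"], fake_send))      # True
print(replicate_write(b"data", ["node-a", "down-node"], fake_send))   # False
```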
[0077] Even though the conceptual design of the system 100 uses independent actors 110 for each volume, in a practical implementation these may be joined into a single process or into a number of processes smaller than the number of volumes represented, without changing the essence of the invention.
[0078] Performance settings may include placing active data on performance media near compute and stale data on appropriate capacity media.
[0079] QoS settings may include minimum/maximum, target, and burst for IOPS, latency, and bandwidth, as well as data protection and data placement policies. (Real-time setting and enforcement of latency, bandwidth, and performance over various workloads.)
[0080] Capacity management may include thick provisioning and elastic storage without a fixed capacity.
[0081] Embodiments of the invention as herein described provide a deeper granularity than prior art volume definitions or LUNs. The solution makes decisions about volume storage definitions based on QoS parameters. QoS-driven data movement decisions are made at an extent-size granularity which can be quite small, and the effect of data movement is to change the storage device(s) on which data is physically placed, not to move the data to a different volume.
[0082] For example, if a move from a first level to a second level is requested, then the flash allocation to that dataset is transparently increased. Subsequently, if the priority of an application is raised, then the flash allocation may actually be larger than the hard disk allocation, almost eliminating access from non-flash media. Further, if an upgrade of an application's QoS occurs once more, then its dataset is 100% allocated from flash, eliminating any non-flash media access.

[0083] Tiers are not limited to flash and hard disks. For example, DRAM could be accessed as another tier of storage that can be allocated to these various types of QoS policies, allowing for even greater storage performance prioritization.
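The tiering behaviour described above can be sketched as a mapping from QoS level to the fraction of a dataset allocated from flash; the specific levels and fractions below are assumptions for illustration only.

```python
# Sketch of QoS-driven tiering: higher QoS levels allocate a larger fraction
# of the dataset from flash, up to 100%. Levels and fractions are assumptions.
FLASH_FRACTION_BY_LEVEL = {1: 0.10, 2: 0.40, 3: 0.75, 4: 1.00}

def tier_allocation(dataset_gb, qos_level):
    """Split a dataset between flash and hard disk according to its QoS level."""
    flash_fraction = FLASH_FRACTION_BY_LEVEL[qos_level]
    flash_gb = dataset_gb * flash_fraction
    return {"flash_gb": flash_gb, "hdd_gb": dataset_gb - flash_gb}

print(tier_allocation(1_000, qos_level=2))   # {'flash_gb': 400.0, 'hdd_gb': 600.0}
print(tier_allocation(1_000, qos_level=4))   # fully flash: no non-flash access
```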
[0084] QoS is also not limited to performance. Another QoS parameter could be set for data protection levels. For business critical data, a QoS setting could require that data be asynchronously copied to a second, independent storage system creating a real-time backup. For mission critical data, a QoS setting could require a synchronous copy of data be made to a second system.
[0085] Another data protection capability is limiting the storage devices participating in a volume to a number, or to a set that has particular relationships to the sets of storage devices used for other volumes, in order to limit the total effect of failures of particular storage devices or of computers hosting storage devices. For example, in a distributed hash table based storage system, because all volumes keep data on all nodes, one more failure than the system is designed for will almost certainly destroy data on all volumes in the system; in the invention, by contrast, even without special policies in this regard, the only data destroyed is that which certain volumes keep on the failed device. The sophistication of this mechanism can be improved over time by coordination between actors that have choices in which storage devices to use for a volume.
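One possible way to limit the overlap between the device sets of different volumes is a greedy selection that prefers devices used by the fewest existing volumes, as in the sketch below; this heuristic is an assumption for illustration, not the coordination mechanism contemplated by the invention.

```python
# Sketch of a greedy device choice that minimizes overlap with the device
# sets of existing volumes, so one device failure touches fewer volumes.
def choose_devices(candidates, existing_volume_sets, needed):
    def overlap(device):
        # Number of existing volumes already using this device.
        return sum(1 for s in existing_volume_sets if device in s)
    return sorted(candidates, key=overlap)[:needed]

existing = [{"d1", "d2"}, {"d2", "d3"}]
print(choose_devices(["d1", "d2", "d3", "d4"], existing, needed=2))  # ['d4', 'd1']
```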
[0086] This concludes the description of the various preferred embodiments of the invention, which are not to be limited by the specific embodiments described. Rather, the invention is only limited by the claims that now follow.

Claims

1. A software defined storage network comprising:
one or more storage nodes, each storage node including a computer processor and one or more data storage devices;
said one or more storage devices including a computer readable medium storing data partitioned into one or more volumes;
wherein said one or more volumes are visible to at least a subset of said storage nodes and to non-storage nodes on the network;
a computer system in communication with the network having a computer processor executing instructions stored on a computer readable medium to define a plurality of actors providing a storage service;
wherein each actor defines a virtual representation of one volume and acts as an exclusive or non-exclusive controller for each of said at least one data storage devices where part or all of the one volume is stored;
wherein each of said plurality of actors places data for each volume on said storage devices according to at least one policy.
2. The storage network of claim 1, wherein the at least one policy includes one of optimizing for a target, maintaining restrictions on latency, input/output operations per second and/or bandwidth.
3. The storage network of claim 1, wherein said software service determines performance characteristics of each storage device based in part on the experience of one or more users of a volume accessing each of the storage devices.
4. The storage network of claim 3, wherein the storage service implements the placement of data for each volume on said storage devices based on the performance target for each volume and on the determined performance characteristics for each storage device available to the volume.
5. The storage system of claim 4, wherein multiple storage services are amalgamated into a single storage service.
6. The storage system of claim 1, wherein the storage service permits replicated data to be placed on storage devices violating the performance target determined for each volume, wherein a copy of the replicated data is available to maintain the performance target.
7. The storage system of claim 1, wherein the software service provides a name of each volume consistent among each node where the volume is visible to applications.
8. The storage system of claim 1, wherein placement information required to access or store data on each of the storage devices is in part available to the storage service and in part determined through a discovery protocol.
9. The storage system of claim 8, wherein the software service provides the capability to determine whether the placement information determined through the discovery protocol is accurate, and upon determining the placement information is inaccurate, reinitializing the discovery protocol or otherwise determining correct placement information.
10. The storage system of claim 1, wherein more than one actor defines a virtual representation of the same volume.
11. The storage system of claim 1, wherein two or more of said plurality of actors are merged into a single process.
12. A method for storing computer data on a storage network, the storage network comprising one or more storage nodes, each node including a computer processor and one or more storage devices, and each storage device including a computer readable medium storing data partitioned into one or more volumes visible to storage and non-storage nodes on the network, the method comprising:
implementing via computer executable instructions that when executed by a processor define a plurality of actors providing a storage service;
wherein each actor defines a virtual representation of one volume and acts as an exclusive or non-exclusive controller for each of said at least one data storage devices where the one volume is stored;
placing, via at least one of said plurality of actors, data for each volume on said storage devices according to at least one policy.
13. The method of claim 12, wherein the at least one policy includes one of optimizing for a target, maintaining restrictions on latency, input/output operations per second and/or bandwidth.
14. The method of claim 12, further comprising determining performance characteristics of each storage device based in part on the experience of one or more users of a volume accessing each of the storage devices.
15. The method of claim 14, further comprising storing data for each volume on said storage devices based on the performance policy for each volume and on the determined performance characteristics for each storage device available to the volume.
16. The method of claim 12, further comprising violating the performance policy determined for each volume when storing replicated data, provided a copy of the replicated data is available to maintain compliance with the performance policy.
17. The method of claim 12, wherein the software service provides a name of each volume consistent among each node where the volume is visible to applications.
18. The method of claim 12, wherein placement information required to access or store data on each of the storage devices is in part available to the storage service and in part determined through a discovery protocol.
19. The method of claim 18, wherein the software service provides the capability to determine whether the placement information determined through the discovery protocol is accurate, and upon determining the placement information is inaccurate, reinitializing the discovery protocol or otherwise determining correct placement information.
20. The method of claim 12, wherein more than one actor defines a virtual representation of the same volume.
21. The method of claim 12, wherein two or more of said plurality of actors are merged into a single process.
PCT/CA2015/050847 2014-09-04 2015-09-04 Application centric distributed storage system and method WO2016033691A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CA2960150A CA2960150C (en) 2014-09-04 2015-09-04 Application centric distributed storage system and method
US15/506,334 US20170251058A1 (en) 2014-09-04 2015-09-04 Application Centric Distributed Storage System and Method
EP15837248.2A EP3195129A4 (en) 2014-09-04 2015-09-04 Application centric distributed storage system and method
CN201580050089.5A CN106716385A (en) 2014-09-04 2015-09-04 Application centric distributed storage system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462045927P 2014-09-04 2014-09-04
US62/045,927 2014-09-04

Publications (1)

Publication Number Publication Date
WO2016033691A1 true WO2016033691A1 (en) 2016-03-10

Family

ID=55438953

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2015/050847 WO2016033691A1 (en) 2014-09-04 2015-09-04 Application centric distributed storage system and method

Country Status (5)

Country Link
US (1) US20170251058A1 (en)
EP (1) EP3195129A4 (en)
CN (1) CN106716385A (en)
CA (1) CA2960150C (en)
WO (1) WO2016033691A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10496447B2 (en) 2017-06-08 2019-12-03 Western Digital Technologies, Inc. Partitioning nodes in a hyper-converged infrastructure

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10574559B2 (en) 2016-11-10 2020-02-25 Bank Of America Corporation System for defining and implementing performance monitoring requirements for applications and hosted computing environment infrastructure
US10169019B2 (en) 2016-11-22 2019-01-01 International Business Machines Corporation Calculating a deployment risk for a software defined storage solution
US10599559B2 (en) * 2016-11-22 2020-03-24 International Business Machines Corporation Validating a software defined storage solution based on field data
US11163626B2 (en) 2016-11-22 2021-11-02 International Business Machines Corporation Deploying a validated data storage deployment
US10303362B2 (en) * 2017-02-15 2019-05-28 Netapp, Inc. Methods for reducing initialization duration and performance impact during configuration of storage drives
CN107193501B (en) * 2017-05-27 2020-07-07 郑州云海信息技术有限公司 Storage acceleration method, device and storage system
CN107422991B (en) * 2017-07-31 2020-07-07 郑州云海信息技术有限公司 Storage strategy management system
US11240306B2 (en) 2017-11-06 2022-02-01 Vast Data Ltd. Scalable storage system
US10656857B2 (en) 2018-06-07 2020-05-19 Vast Data Ltd. Storage system indexed using persistent metadata structures
US10678461B2 (en) 2018-06-07 2020-06-09 Vast Data Ltd. Distributed scalable storage
US11234157B2 (en) * 2019-04-08 2022-01-25 T-Mobile Usa, Inc. Network latency aware mobile edge computing routing
US11227016B2 (en) 2020-03-12 2022-01-18 Vast Data Ltd. Scalable locking techniques
US20220326992A1 (en) * 2021-03-31 2022-10-13 Netapp, Inc. Automated quality of service management mechanism
US11314436B1 (en) * 2021-04-22 2022-04-26 Dell Products, L.P. Method and apparatus for dynamically adjusting differentiated share prioritization in a storage system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4318902B2 (en) * 2002-10-15 2009-08-26 株式会社日立製作所 Storage device system control method, storage device system, and program
US7526527B1 (en) * 2003-03-31 2009-04-28 Cisco Technology, Inc. Storage area network interconnect server
JP4575059B2 (en) * 2004-07-21 2010-11-04 株式会社日立製作所 Storage device
WO2012042509A1 (en) * 2010-10-01 2012-04-05 Peter Chacko A distributed virtual storage cloud architecture and a method thereof
US8533523B2 (en) * 2010-10-27 2013-09-10 International Business Machines Corporation Data recovery in a cross domain environment
US8676763B2 (en) * 2011-02-08 2014-03-18 International Business Machines Corporation Remote data protection in a networked storage computing environment
US9239786B2 (en) * 2012-01-18 2016-01-19 Samsung Electronics Co., Ltd. Reconfigurable storage device
US20140130055A1 (en) * 2012-02-14 2014-05-08 Aloke Guha Systems and methods for provisioning of storage for virtualized applications
US9747034B2 (en) * 2013-01-15 2017-08-29 Xiotech Corporation Orchestrating management operations among a plurality of intelligent storage elements

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020019907A1 (en) * 1998-07-17 2002-02-14 Mcmurdie Michael Cluster buster
CA2469624A1 (en) * 2001-12-10 2003-06-19 Monosphere Limited Managing storage resources attached to a data network
US20120011500A1 (en) * 2010-07-09 2012-01-12 Paolo Faraboschi Managing a memory segment using a memory virtual appliance
US20120066446A1 (en) * 2010-09-15 2012-03-15 Sabjan Check A Physical to virtual disks creation (p2v) method, by harvesting data from critical sectors
US20130282994A1 (en) * 2012-03-14 2013-10-24 Convergent.Io Technologies Inc. Systems, methods and devices for management of virtual memory systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3195129A4 *

Also Published As

Publication number Publication date
CN106716385A (en) 2017-05-24
EP3195129A4 (en) 2018-05-02
EP3195129A1 (en) 2017-07-26
CA2960150C (en) 2018-01-02
US20170251058A1 (en) 2017-08-31
CA2960150A1 (en) 2016-03-10

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15837248; Country of ref document: EP; Kind code of ref document: A1)
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (PCT application filed from 20040101)
WWE WIPO information: entry into national phase (Ref document number: 15506334; Country of ref document: US)
REEP Request for entry into the European phase (Ref document number: 2015837248; Country of ref document: EP)
WWE WIPO information: entry into national phase (Ref document number: 2015837248; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2960150; Country of ref document: CA)
NENP Non-entry into the national phase (Ref country code: DE)