EP2867763B1 - Data storage with virtual appliances - Google Patents

Data storage with virtual appliances

Info

Publication number
EP2867763B1
Authority
EP
European Patent Office
Prior art keywords
storage
node
universal
nodes
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP13731793.9A
Other languages
German (de)
French (fr)
Other versions
EP2867763A1 (en)
Inventor
William OPPERMANN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MPSTOR Ltd
Original Assignee
MPSTOR Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MPSTOR Ltd filed Critical MPSTOR Ltd
Priority to EP17186335.0A priority Critical patent/EP3279789A1/en
Publication of EP2867763A1 publication Critical patent/EP2867763A1/en
Application granted granted Critical
Publication of EP2867763B1 publication Critical patent/EP2867763B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1479Generic software techniques for error detection or fault masking
    • G06F11/1482Generic software techniques for error detection or fault masking by means of middleware or OS functionality
    • G06F11/1484Generic software techniques for error detection or fault masking by means of middleware or OS functionality involving virtual machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0617Improving the reliability of storage systems in relation to availability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1458Management of the backup or restore process
    • G06F11/1469Backup restoration techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2023Failover techniques
    • G06F11/203Failover techniques using migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2035Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant without idle spare hardware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/202Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F11/2046Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share persistent storage
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2058Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using more than 2 mirrored copies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2069Management of state, configuration or failover
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2087Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring with a common controller
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0635Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0662Virtualisation aspects
    • G06F3/0664Virtualisation aspects at device level, e.g. emulation of a storage device or system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/815Virtual
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/84Using snapshots, i.e. a logical point-in-time copy of the data

Definitions

  • the invention relates to data storage and more particularly to organisation of functional nodes in providing storage to consumers.
  • Virtual Appliances (VAs), also known as Virtual Machines, are created through the use of a Hypervisor application, a Hypervisor, and network, compute and storage resources. They are described for example in US2010/0228903 (Chandrasekaran). Resources for the virtual appliances are provided by software and hardware for network, compute and storage functions.
  • the generally accepted definition of a VA is an aggregation of a guest operating system, using virtualised compute, memory, network and storage resources within a Hypervisor environment.
  • Network resources include networks, virtual LANs (VLANs), tunneled connections, private and public IP addresses and any other networking structure required to move data from the appliance to the user of the appliance.
  • Compute resources include memory and processor resources required to run the appliance guest operating system and its application program.
  • Storage resources consist of storage media mapped to each virtual appliance through an access protocol.
  • the access protocol could be a block storage protocol such as SAS, Fibre Channel, iSCSI or a file access protocol for example CIFS, NFS, and AFT.
  • the cloud may be used to virtualise these resources, in which a Hypervisor Application manages user dashboard requests and creates, launches and manages the VA (virtual appliance) and the resources that the appliance requires.
  • This framework can be best understood as a general purpose cloud but is not limited to a cloud.
  • Example implementations are OpenStack TM , EMC Vsphere TM , and Citrix Cloudstack TM .
  • compute, storage and network nodes are arranged in a rack configuration, cabled together and configured so that virtual machines can be resourced from the datacenter infrastructure, launched and used by the end user.
  • The architectures of Fig. 1 and Fig. 2 share storage between nodes and a storage array, in which a failure of the storage array will result in loss of all the dependent appliances on that storage.
  • Fig. 1 shows an arrangement with compute nodes accessing, through a fabric, integrated HA (high availability) storage systems with a dual redundant controller.
  • Fig. 2 shows an arrangement with compute nodes accessing, through a fabric, an integrated HA storage system, in which each storage system accesses the disk media through a second fabric, improving failure coverage.
  • Resiliency and fault tolerance are provided by the storage node using dual controllers (e.g. Fig. 1, C#1.1 & C#1.2). In the case of controller failure the volume resources that fail will be taken over and managed by the remaining controller.
  • US2010/0228903 (Chandrasekaran et al.) discloses disk operations by a VA from a virtual machine (VM).
  • WO2011/049574 (Hewlett-Packard) describes a method of virtualized migration control, including conditions for blocking a VM from accessing data.
  • WO2011/046813 (Veeam Software) describes a system for verifying VM data files.
  • US2011/0196842 (Veeam Software) describes a system for restoring a file system object from an image level backup.
  • the invention is directed towards providing an improved data storage system with more versatility in its architecture.
  • said CPU, memory, network interface and storage virtualizer resources are connected between buses within each universal node, wherein at least one of said buses links said resources with virtual appliance instances, and wherein each universal node comprises a Hypervisor application for the virtual appliance instances.
  • the storage virtualiser is attached to storage devices through a storage bus organised so that a plurality of universal nodes have the same access to a fabric and drives attached to the fabric.
  • a plurality of storage devices can be discovered by a plurality of universal nodes.
  • each storage virtualiser behaves as if it were a locally attached storage array with coupling between the storage devices and the universal node.
  • system controller is adapted to partition and fit the virtual appliances within each universal node.
  • the universal nodes are configured so that in the case of a system failure each paired universal node will failover resources and workloads to each other.
  • a Hypervisor application manages requesting and allocation of these resources within each universal node.
  • the system further comprises a provisioning engine, and a Hypervisor application is adapted to use an API to request storage from the provisioning engine, which is in turn adapted to request a storage array to create a storage volume and export it to the Hypervisor application through the storage virtualiser.
  • each local storage array is adapted to respond to requests from a storage provisioning requester running on the universal node.
  • the universal nodes are identical.
  • system controller is adapted to dispatch workloads including virtual appliances to the universal nodes interfacing directly with the system controller or with a Hypervisor application.
  • system controller is responsible for dispatching workloads including virtual blocks to the universal nodes interfacing directly with a Hypervisor application of the universal node.
  • the Hypervisor application has an API which allows creation and execution of virtual appliances, and the Hypervisor application requests CPU, memory, and storage resources from the CPU, memory and storage managers, and a storage representation is implemented as if the storage were local, in which the storage virtualization virtual block is a virtualisation of a storage provider resource.
  • system controller is adapted to hold information about the system to allow each node to make decisions regarding optimal distribution of workloads.
  • the system controller is responsible for partitioning and fitting of storage provider resources to each universal node, and in the case of a failure it detects the failure and migrates failed storage virtualizer virtual blocks to available universal nodes, and the system controller maintains a map and dependency list of storage virtualizer resources to every storage provider storage array.
  • Figs. 3, 4 and 5 show a system 1 of the invention with a number of U-nodes 2 linked by a fabric 3 to storage providers 4.
  • the latter include for example JBOD drives.
  • the U-node 2 is shown in Fig. 4, and Fig. 5 shows more detail about how it links with consumers and storage providers (via buses N_IOC and S_IOC).
  • Each U-node 2 has a storage virtualiser 20 along with CPU, memory, and network resources 12, 13, and 14. Each U-node also includes VAs 17, a Hypervisor application 18, a Hypervisor 15 above the resources 12-14 and 20. The N-IOC and the S_IOC interfaces 20 and 19 are linked with the operating system 16.
  • Fig. 4 illustrates a U-Node 1 in more detail. It is used as one of the basic building blocks to build virtual appliances from a pool of identical U-Nodes. Each U-Node provides CPU, memory, storage and network resources for each appliance. CPU managers 12, memory managers 13, and network managers 14 are coupled very tightly within the U-Node across local high speed buses to a Hypervisor layer 15 and an Operating System (OP) layer 16.
  • the storage resources provided by the SV layer 20 appear as if the storage was a local DAS.
  • the U-Node allows Virtual Appliances 17(a) to run within virtual networks 17(b) in a very tightly coupled configuration of compute-storage-networking which is fault tolerant.
  • the U-node via its storage virtualiser (SV), is a universal consumer of storage providers (SP) and a provider of virtual block devices (VB) to a universal set of storage consumers (SC).
  • the storage virtualiser is implemented on each node as an inline storage layer that provides VB storage to a local storage consumer or a consumer across a fabric.
  • Storage virtualiser 20 instances are managed by a separate controller (the "MetaC" controller) 31 which controls a number of U-nodes 2 and holds all the SV context and state. Referring again to Fig. 5, in a system 30 the U-nodes 2 are linked to an N_IOC bus, as is the metaC controller 31. SPs 34 are linked with the S_IOC bus.
  • the storage virtualisers SV 20 are implemented as slave devices without context or state.
  • the SV 20 is composed of storage consumer managers and storage provider managers; however, all context and state are stored in the meta_C component 31. This allows the node 2 to fail without loss of critical metadata, and the metaC controller 31 can reconstitute all the resources provided by the slave SV instance.
  • the SV decouples the mapping between the SPs and the SCs. By introducing the SV link the SP and the SC are now mobile.
  • Fig. 1 the consumer nodes above the fabric maintain mappings to storage in the SP.
  • the SV 20 decouples these mappings and the U-nodes communicate with each other and the MetaC controller 31.
  • Fig. 3 and Fig. 4 if a U-node 2 fails there is no meta data or state information in the failed node. All meta data and state is stored in the metaC controller 31; this allows the resources (VBs) managed by the failed SV to be recreated on any other U-node.
  • the SV 20 has functions for targets, managers, and provider management. These functions communicate via an API to the metaC controller 31.
  • the metaC controller 31 maintains state and context information across all of the U-nodes of the system.
  • the SV is a combination of the SV slave functionality on the U-node and functionality on the metaC 31. There is one metaC per multiple U-nodes.
  • the system manages a storage pool that can scale from simple DAS storage to multiple horizontally-scaled SANS across multiple fabrics and protocols.
  • the system of the invention uses an SV on each node to represent resources on the SPs.
  • the resources created by the SV are virtual block devices (VB).
  • a virtual block device (VB) is a virtualisation of an SP resource.
  • the SV is managed by the metaC controller 31.
  • the U-nodes 2 provide greater flexibility than conventional storage architectures.
  • an array of SPs (e.g. JBOD or storage arrays) is connected to all U-Nodes.
  • any U-node can fail and the resources managed by that node can be managed by any remaining node.
  • This allows N+1 failover operation of any U-node.
  • Each SV instance is attached to a set of provider devices by the MetaC controller; if any U-Node fails, any other U-Node can be reconfigured by the MetaC controller to take over the provider devices and recreate the VB resources for the recreated consumer VAs. Loss of a U-Node does not lead to a system failure, as all context, state and data can be recovered through the MetaC controller.
  • All U-Node SV instances together form a HA cluster, each U-node having a failover buddy.
  • Figs. 6 to 8 illustrate joining the cluster and finding a default failover "buddy". All members of the cluster are logically linked vertically and horizontally so that in the event of a node failure the cluster is aware of the failure and the appropriate failover of resources to another node can occur.
  • The cost of the Fig. 3 arrangement is lower than that of Figs. 1 and 2.
  • the cost for the system of Fig. 3 in terms of rack space required and hardware is the lowest as no dedicated storage array appliances are required. All VA nodes are identical, in the simplest implementation only JBOD storage is required.
  • This equation describes the value of the Rack in terms of its number of CPU Cores, spinning disks and their size, and the number of Virtual Appliances per core.
  • the "U-nodes" 2 each provide compute and storage resources to run the VAs 17.
  • the system 1 increases the Rack Value by providing a U-node which integrates all resources for the VAs in one node. Further integration is possible with network switching, but for clarity the main part of the following description is of integration of the storage and compute nodes to provide the U-node.
  • the SV of the U-nodes 2 accesses the provider disk devices resources via a fabric 3.
  • the U-node 2 is a universal node where compute and storage run on the CPU core resource of the same machine.
  • the storage management "SV" is collapsed to the same node as the compute node.
  • a U-node is not the same as a compute node with DAS storage.
  • a U-node SV manages provider devices that have the same high coupling as DAS storage; however, the SV is tolerant to fault conditions and is physically decoupled from the SP.
  • the fault tolerance is achieved by the ability of the SV resource to migrate from any one U-node to any other U-node. In this way the U-node SV appears as an N+1 failover controller. Under failure conditions, failover is achieved between the N participating U-nodes by moving the resource management, the SV, and its product, the VB, and not by the traditional method of providing multiple failover paths from a storage array to the storage consumer.
  • a user of the system requests a virtual appliance (VA) to be run.
  • the MetaC component 31 is responsible for dispatching workloads (such as VBs) to the U-Nodes 2 interfacing directly with the Hypervisor application 18 of the U-node 2.
  • the MetaC controller 31 is not the manager of the U-Node infrastructure; it is simply the dispatcher of loads to the U-Nodes.
  • Fig. 6 also shows disk resources 34 linked with the U-nodes 2 via a fabric 35.
  • the Hypervisor application 18 has an API which allows creation and execution of virtual machines (VM) 17 within their assigned networks.
  • the Hypervisor application 18 requests CPU, memory, and storage resources from the CPU, memory and storage managers 12-14.
  • the storage representation is implemented as if the storage were local, that is the SV VB is a virtualisation of a storage provider resource.
  • the storage provider 34 is generally understood to be disks or storage arrays attached directly or through a fabric.
  • the SV manages all storage provider devices such as disks, storage arrays or object stores. In this way the SV is a universal consumer of storage from any storage provider and provides VB block devices to any consumer.
  • Fig. 11 shows how the SV and MetaC controller manage storage providers.
  • the MetaC has a provisioning plane which can create storage array volumes (SAVs). These SAVs can be imported over a fabric/protocol to the SV.
  • the SV virtualises the SAVs through its manager functions to virtual block devices (VBs). VBs are then exported to whatever consumer requires them.
  • the local SV is composed of a number of slave managers which implement the tasks of importing SAVs, creating VBs and exporting to storage consumers or storage centric services.
  • the SV does not keep context or state information.
  • the MetaC controller keeps this information. This allows the slave SV layer to fail and no loss of information occurs in the system.
  • the SV 20 of each U-node 2 is attached to storage providers through an S_IOC bus 35.
  • the S_IOC bus 35 is a fabric organised so that all U-Nodes 2 have the same access to the fabric 35 and the attached provider devices of the fabric 35.
  • An example of an S-bus fabric 35 is where all devices can be discovered by all of the U-Nodes 2.
  • Each SV 20 in each U-Node 2 is allocated, by the MetaC controller 31, a number of provider resources (drives or SAVs) that it manages. Once configured, the SV 20 behaves as if it were a locally attached storage array with high coupling (e.g. a SAS bus) between the disks 34 and the U-Node 2.
  • Fig. 5 shows how multiple U-Nodes 2 provide resources to create multiple appliances on a set of U-Nodes.
  • Each node "leadership role" follows the state machine as shown in Fig. 7 and 8 .
  • the leader is elected by all participating nodes in the system.
  • a leader node can fail without causing the system to fail.
  • the elected leader is responsible for logically organising the U-nodes 2 into two teams with vertical and horizontal failure links as shown in Fig. 6.
  • the steady state of the system is "Nodes Paired"; once a leader is elected, the leader's role is to create a configuration of nodes as shown in Fig. 6.
  • Fig. 8 shows a configured system after leadership election and configuration of horizontal and vertical pairing. Any node that fails will have a failover partner. Failover partners are from Team A to Team B. Should two paired nodes fail at the same time, the vertical pairing will detect the failure and initiate failover procedures. Should a leader fail, a leadership election process occurs as nodes return to the Voter state.
  • the MetaC controller 31 is also shown in Fig. 9 . It holds information about the system to allow each node 2 to make decisions regarding the optimal distribution of workloads.
  • the virtual appliances (VA) that use the storage provided by the SV 20 may run locally on the U-Node 2 where the SV 20 has migrated to or can be run on another U-Node 2.
  • the storage resource is provided to the SV as a network volume over the fabric protocol (such as iSCSI over TCP/IP).
  • Fig.6 also illustrates this mechanism in which:
  • the node may be killed (DEAD) depending on the severity of the failure.
  • the node H_Paired device will recover the workload.
  • the vertical V-Pair device will detect the node failure and instantiate a recovery process. Should no H_Paired device exist, the V_Paired device will recover the workload.
  • a compute node with DAS storage is similar to a U-Node except the storage node and compute node are bound together and if one fails the other also fails.
  • the virtual appliances 7 can re-start on an alternative node as discussed in the failure modes above.
  • the U-Node architecture allows one to increase the value RV (Rack Value) by moving the storage array software from a dedicated storage appliance into the same node.
  • This node provides compute, network and storage resources to each VLAN within the node.
  • the metaC software control entity 31 is responsible for the partitioning and fitting of SP resources to each U-Node. In the case of a failure it detects the U-Node failure and migrates failed SV VBs to available U-nodes.
  • the metaC maintains a map and dependency list of SV resources to every SP storage array.
  • the SV provides storage either to its dependent appliances locally through the HyperVisor 15 or if the Virtual Appliance 17 cannot be run locally storage is provided using a network protocol on the N-IOC (network IOC bus).
  • the Hypervisor application 18 manages the requesting and allocation of these resources.
  • the Hypervisor application 18 uses an API (the Storage Provisioning Requester (SPR) API) to request storage from the MetaC provisioning engine; the MetaC creates volumes on the SP disks 34 and exports the storage over a number of conventional protocols (an iSCSI, CIFS or NFS share) to the SV 20.
  • the SV 20 then exports the storage resource to the VA through the Hypervisor 15 or as an operating system 16 block device to a storage centric service.
  • a VA may also use the SPR API directly for self provisioning.
  • U-nodes are identical in the sense that they rank equally with each other and, if required, run the same workloads. However, U-nodes can be built using hardware systems of different capabilities (e.g. number of CPU cores, gigabytes of memory, S_IOC/N_IOC adaptors). This difference in hardware capabilities means that pairing is not arbitrary; pairs are created according to a pairing policy. Pairing policies may be best-with-best, best-with-worst, random, etc.
  • U-Nodes can then in the nominal case be ranked from highest to lowest SLA (Service Level Agreement, e.g. Gold, Silver, Bronze).
  • the average pair SLA of all pairs is approximately equivalent.
  • the MetaC controller manages workload dispatching according to policies set up in the MetaC controller.
  • Fig. 10 shows how the policies are used to dispatch workloads to the paired U-nodes (see the sketch after this list).
  • U-Nodes are associated by capability into various SLA groups.
  • the MetaC controller 31 will dispatch the workload to the appropriate U-node.
  • the MetaC controller 31 is responsible for understanding the existing workloads, the U-node failure coverage and resiliency, and the required SLA, and for dispatching new workloads to the most appropriate U-Node.
  • the workload SLA may require High Availability and therefore only functioning paired nodes are candidates to run the workload.
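The following is a minimal, hypothetical sketch of the SLA-based dispatch described in the bullets above: U-nodes are grouped by SLA capability, HA workloads are restricted to functioning paired nodes, and the dispatcher picks the least capable node that still meets the requested SLA. The class and field names, the SLA tiers and the selection rule are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import List, Optional

SLA_RANK = {"Gold": 3, "Silver": 2, "Bronze": 1}   # assumed tier ordering

@dataclass
class UNode:
    name: str
    sla: str                  # capability group of the node, e.g. Gold/Silver/Bronze
    alive: bool = True
    paired: bool = True       # True if the node has a functioning failover partner

def dispatch(workload_sla: str, requires_ha: bool, nodes: List[UNode]) -> Optional[UNode]:
    """Pick the least capable live node that still satisfies the workload SLA."""
    candidates = [
        n for n in nodes
        if n.alive
        and SLA_RANK[n.sla] >= SLA_RANK[workload_sla]
        and (n.paired or not requires_ha)          # HA workloads need a functioning pair
    ]
    return min(candidates, key=lambda n: SLA_RANK[n.sla], default=None)

nodes = [UNode("u1", "Gold"), UNode("u2", "Silver"), UNode("u3", "Bronze", paired=False)]
print(dispatch("Silver", requires_ha=True, nodes=nodes).name)   # -> u2
```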

Description

    Field of the Invention
  • The invention relates to data storage and more particularly to organisation of functional nodes in providing storage to consumers.
  • Virtual Appliances (VA), also known as Virtual Machines, are created through the use of a Hypervisor application, Hypervisor, network, and compute and storage resources. They are described for example in US2010/0228903 (Chandrasekaran ). Resources for the virtual appliances are provided by software and hardware for network, compute and storage functions. The generally accepted definition of a VA is an aggregation of a guest operating system, using virtualised compute, memory, network and storage resources within a Hypervisor environment.
  • Network resources include networks, virtual LANs (VLANs), tunneled connections, private and public IP addresses and any other networking structure required to move data from the appliance to the user of the appliance.
  • Compute resources include memory and processor resources required to run the appliance guest operating system and its application program.
  • Storage resources consist of storage media mapped to each virtual appliance through an access protocol. The access protocol could be a block storage protocol such as SAS, Fibre Channel, iSCSI or a file access protocol for example CIFS, NFS, and AFT.
  • At present, the cloud may be used to virtualise these resources, in which a Hypervisor Application manages user dashboard requests and creates, launches and manages the VA (virtual appliance) and the resources that the appliance requires.
  • This framework can be best understood as a general purpose cloud but is not limited to a cloud. Example implementations are OpenStack, EMC Vsphere, and Citrix Cloudstack.
  • In many current implementations compute, storage and network nodes are arranged in a rack configuration, cabled together and configured so that virtual machines can be resourced from the datacenter infrastructure, launched and used by the end user.
  • The architectures of Fig. 1 and Fig. 2 share storage between nodes and a storage array, in which a failure of the storage array will result in loss of all the dependent appliances on that storage. Fig. 1 shows an arrangement with compute nodes accessing, through a fabric, integrated HA (high availability) storage systems with a dual redundant controller. Fig. 2 shows an arrangement with compute nodes accessing, through a fabric, an integrated HA storage system, in which each storage system accesses the disk media through a second fabric, improving failure coverage.
  • Resiliency and fault tolerance are provided by the storage node using dual controllers (e.g. Fig. 1, C#1.1 & C#1.2). In the case of controller failure the volume resources that fail will be taken over and managed by the remaining controller.
  • These known architectures suffer from a number of drawbacks, which can be best understood through the FMEA (Failure Mode Effects Analysis) table below.
    FMEA Analysis Table
    Failure | Critical | Remarks
    Single controller failure within a storage node | No | Redundant 2nd controller can manage storage (Fig. 1)
    Dual controller failure within a storage node | Yes | No controller available to manage storage; all attached appliances will fail (Fig. 1)
    Dual controller failure within a storage node | No | Dual-controller storage nodes functioning as a cluster can recover the disk resource (Fig. 2; requires host and disk fabrics)
    All storage nodes fail | Yes | No available storage node to manage storage (Fig. 2; requires host and disk fabrics)
  • US2010/0228903 (Chandrasekaran et al.) discloses disk operations by a VA from a virtual machine (VM).
  • WO2011/049574 (Hewlett-Packard) describes a method of virtualized migration control, including conditions for blocking a VM from accessing data.
  • WO2011/046813 (Veeam Software) describes a system for verifying VM data files.
  • US2011/0196842 (Veeam Software) describes a system for restoring a file system object from an image level backup.
  • US2010/037089 (Krishnan) describes a fault tolerant storage system in a cluster, having nodes with storage virtualization components.
  • The invention is directed towards providing an improved data storage system with more versatility in its architecture.
  • GLOSSARY
    • DAS, direct attached storage
    • FMEA, Failure Mode Effects Analysis
    • HA, high availability
    • QoS, quality of service
    • SAV, storage array volume
    • SC, storage consumers
    • SLA, service level agreement
    • SP, storage providers
    • SPR, storage provisioning requester API
    • SV, storage virtualiser
    • U-node, universal node
    • VM, Virtual machine
    • VA, Virtual appliance
    • VB, virtual block devices
    SUMMARY OF THE INVENTION
  • According to the invention, there is provided a data storage system as set out in claim 1.
  • In one embodiment, said CPU, memory, network interface and storage virtualizer resources are connected between buses within each universal node, wherein at least one of said buses links said resources with virtual appliance instances, and wherein each universal node comprises a Hypervisor application for the virtual appliance instances.
  • In one embodiment, the storage virtualiser is attached to storage devices through a storage bus organised so that a plurality of universal nodes have the same access to a fabric and drives attached to the fabric. Preferably, a plurality of storage devices can be discovered by a plurality of universal nodes. Preferably, each storage virtualiser behaves as if it were a locally attached storage array with coupling between the storage devices and the universal node.
  • In one embodiment, the system controller is adapted to partition and fit the virtual appliances within each universal node.
  • In one embodiment, the universal nodes are configured so that in the case of a system failure each paired universal node will failover resources and workloads to each other.
  • In one embodiment, a Hypervisor application manages requesting and allocation of these resources within each universal node.
  • In one embodiment, the system further comprises a provisioning engine, and a Hypervisor application is adapted to use an API to request storage from the provisioning engine, which is in turn adapted to request a storage array to create a storage volume and export it to the Hypervisor application through the storage virtualiser.
  • In one embodiment, to satisfy storage requirements of virtual appliances in a universal node, each local storage array is adapted to respond to requests from a storage provisioning requester running on the universal node.
  • In one embodiment, the universal nodes are identical.
  • In one embodiment, the system controller is adapted to dispatch workloads including virtual appliances to the universal nodes interfacing directly with the system controller or with a Hypervisor application.
  • In one embodiment, the system controller is responsible for dispatching workloads including virtual blocks to the universal nodes interfacing directly with a Hypervisor application of the universal node.
  • In one embodiment, the Hypervisor application has an API which allows creation and execution of virtual appliances, and the Hypervisor application requests CPU, memory, and storage resources from the CPU, memory and storage managers, and a storage representation is implemented as if the storage were local, in which the storage virtualization virtual block is a virtualisation of a storage provider resource.
  • In one embodiment, the system controller is adapted to hold information about the system to allow each node to make decisions regarding optimal distribution of workloads.
  • In one embodiment, the system controller is responsible for partitioning and fitting of storage provider resources to each universal node, and in the case of a failure it detects the failure and migrates failed storage virtualizer virtual blocks to available universal nodes, and the system controller maintains a map and dependency list of storage virtualizer resources to every storage provider storage array.
  • DETAILED DESCRIPTION OF THE INVENTION
    Brief Description of the Drawings
  • The invention will be more clearly understood from the following description of some embodiments thereof, given by way of example only with reference to the accompanying drawings in which:-
    • Fig. 1 shows a prior art arrangement as discussed above, with compute nodes accessing, through a fabric, integrated HA (High Availability) storage systems with a dual redundant controller;
    • Fig. 2 shows a prior art arrangement as discussed above, with compute nodes accessing, through a fabric, an integrated HA storage system, in which each storage system accesses the disk media through a second fabric, improving failure coverage;
    • Fig. 3 shows the overall architecture of a system of the invention, in which a number of universal nodes (U-nodes) are linked via a fabric with storage resources;
    • Fig. 4 shows an individual U-node broken out into its components;
    • Fig. 5 shows how multiple U-nodes are arranged in a system, in one embodiment;
    • Figs. 6 to 8 show linking of resources;
    • Fig. 9 shows failure recovery scenarios;
    • Fig. 10 shows how policies are used to dispatch workloads to paired U-nodes; and
    • Fig. 11 is a flow diagram illustrating operation of a U-node in one embodiment.
    Description of the Embodiments
  • Figs. 3, 4 and 5 show a system 1 of the invention with a number of U-nodes 2 linked by a fabric 3 to storage providers 4. The latter include for example JBOD drives. The U-node 2 is shown in Fig. 4, and Fig. 5 shows more detail about how it links with consumers and storage providers (via buses N_IOC and S_IOC).
  • Each U-node 2 has a storage virtualiser 20 along with CPU, memory, and network resources 12, 13, and 14. Each U-node also includes VAs 17, a Hypervisor application 18, a Hypervisor 15 above the resources 12-14 and 20. The N-IOC and the S_IOC interfaces 20 and 19 are linked with the operating system 16.
  • Fig. 4 illustrates a U-Node 1 in more detail. It is used as one of the basic building blocks to build virtual appliances from a pool of identical U-Nodes. Each U-Node provides CPU, memory, storage and network resources for each appliance. CPU managers 12, memory managers 13, and network managers 14 are coupled very tightly within the U-Node across local high speed buses to a Hypervisor layer 15 and an Operating System (OP) layer 16.
  • The storage resources provided by the SV layer 20 appear as if the storage was a local DAS. The U-Node allows Virtual Appliances 17(a) to run within virtual networks 17(b) in a very tightly coupled configuration of compute-storage-networking which is fault tolerant.
  • The U-node, via its storage virtualiser (SV), is a universal consumer of storage providers (SP) and a provider of virtual block devices (VB) to a universal set of storage consumers (SC). The storage virtualiser is implemented on each node as an inline storage layer that provides VB storage to a local storage consumer or a consumer across a fabric. Storage virtualiser 20 instances are managed by a separate controller (the "MetaC" controller) 31 which controls a number of U-nodes 2 and holds all the SV context and state. Referring again to Fig. 5, in a system 30 the U-nodes 2 are linked to an N_IOC bus, as is the metaC controller 31. SPs 34 are linked with the S_IOC bus.
  • The storage virtualisers SV 20 are implemented as slave devices without context or state. In one embodiment the SV 20 is composed of storage consumer managers and storage provider managers; however, all context and state are stored in the meta_C component 31. This allows the node 2 to fail without loss of critical metadata, and the metaC controller 31 can reconstitute all the resources provided by the slave SV instance. The SV decouples the mapping between the SPs and the SCs. By introducing the SV link the SP and the SC are now mobile.
  • In the prior art, for example Fig. 1, the consumer nodes above the fabric maintain mappings to storage in the SP. In the invention however, the SV 20 decouples these mappings and the U-nodes communicate with each other and the MetaC controller 31. Referring to Fig. 3 and Fig. 4, if a U-node 2 fails there is no meta data or state information in the failed node. All meta data and state is stored in the metaC controller 31; this allows the resources (VBs) managed by the failed SV to be recreated on any other U-node.
  • The SV 20 has functions for targets, managers, and provider management. These functions communicate via an API to the metaC controller 31. In this embodiment the metaC controller 31 maintains state and context information across all of the U-nodes of the system.
  • In summary, what we term the SV is a combination of the SV slave functionality on the U-node and functionality on the metaC 31. There is one metaC per multiple U-nodes.
  • Referring to Figs. 5 and 11, in the system:
    • The U-nodes 2 have Storage Consumers (SC) such as Virtual Appliances (VAs) or Storage Centric Services (SCS) such as object storage, Hadoop storage, Lustre storage etc
    • There are links with storage providers 34 (SPs) such as disks, storage arrays and Storage Centric Services
    • The SV 20 consumes storage from the SPs in the system and provides virtual block devices (VB) to the SCs in the system.
    • The (out of band) controller metaC 31 manages the creation of storage LUNs on the SP devices, manages the importing of storage from the storage providers SP, and manages the creation of VB devices and the exporting of the VB devices to the SC.
    • The metaC provides a high level API (HL_API) interface to SCs.
  • The system manages a storage pool that can scale from simple DAS storage to multiple horizontally-scaled SANs across multiple fabrics and protocols. Unlike conventional storage systems, the system of the invention uses an SV on each node to represent resources on the SPs. The resources created by the SV are virtual block devices (VBs). A virtual block device (VB) is a virtualisation of an SP resource. The SV is managed by the metaC controller 31.
  • By introducing a stateless storage middleware on each node the following benefits are derived.
    • The stateless SV having no context or state allows the node to fail with only transient impact to the system since the MetaC controller 31 can reconstitute all resources on available nodes from the MetaC context and state.
    • The SV can consume any storage from any provider across any protocol and fabric; knowledge of the fabric is not required in the SV, only in the MetaC controller.
    • The SV as a middleware between the storage consumer and storage provider allows a range of added value functions (two of which are sketched in code after this list) such as:
      • Data protection by mapping and replicating the VB to multiple Storage Array Volumes (SAVs)
      • Data scaling by striping the VB across multiple SAVs
      • Redundant multipathing by mapping the VB to different instances of the SAV on alternate paths
      • Node side SSD caching by introducing an SSD caching layer between the VB and the SAV
      • VB rate limiting, by introducing input/output and bandwidth throttling per VB
      • System fairness by managing the node system resource allocation to the IO subsystem used for storage
      • VB virtualisation from SAV volumes, i.e. many small VBs from one large SAV
      • VB tiering by building a VB across multiple SAV tiers of varying QoS
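Two of the added-value mappings listed above (data protection by replication, and data scaling by striping) can be illustrated with a short sketch. The SAV, MirroredVB and StripedVB classes below are invented names used purely to show how a VB could map block reads and writes onto multiple SAVs; the patent does not define this API.

```python
class SAV:
    """A storage array volume imported from a storage provider (in-memory stand-in)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}                 # block index -> data

    def write(self, block, data):
        self.blocks[block] = data

    def read(self, block):
        return self.blocks.get(block)


class MirroredVB:
    """Data protection: every write to the VB is replicated to all backing SAVs."""
    def __init__(self, savs):
        self.savs = savs

    def write(self, block, data):
        for sav in self.savs:            # replicate to every mirror copy
            sav.write(block, data)

    def read(self, block):
        for sav in self.savs:            # serve the read from the first copy that has it
            data = sav.read(block)
            if data is not None:
                return data
        return None


class StripedVB:
    """Data scaling: VB blocks are striped round-robin across the backing SAVs."""
    def __init__(self, savs):
        self.savs = savs

    def _target(self, block):
        return self.savs[block % len(self.savs)], block // len(self.savs)

    def write(self, block, data):
        sav, local = self._target(block)
        sav.write(local, data)

    def read(self, block):
        sav, local = self._target(block)
        return sav.read(local)


vb = MirroredVB([SAV("sav-a"), SAV("sav-b")])
vb.write(0, b"hello")
assert vb.read(0) == b"hello"
```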
  • The U-nodes 2 provide greater flexibility than conventional storage architectures. To illustrate one such use case, consider Fig. 9: an array of SPs (e.g. JBOD or storage arrays) is connected to all U-Nodes. In this configuration, since no U-Node holds any specific storage context, state or physically attached storage, any U-node can fail and the resources managed by that node can be managed by any remaining node. This allows N+1 failover operation of any U-node. Each SV instance is attached to a set of provider devices by the MetaC controller; if any U-Node fails, any other U-Node can be reconfigured by the MetaC controller to take over the provider devices and recreate the VB resources for the recreated consumer VAs. Loss of a U-Node does not lead to a system failure, as all context, state and data can be recovered through the MetaC controller.
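The N+1 failover step described above can be sketched as follows. The MetaC, UNode and vb_map names are illustrative assumptions; the point is that all context lives in the controller, so a surviving node can take over the failed node's provider devices and have its VBs recreated without recovering anything from the failed node itself.

```python
class UNode:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.provider_devices = set()    # drives/SAVs currently managed by this node's SV
        self.vbs = {}                    # VB name -> definition, recreated from MetaC state


class MetaC:
    """Out-of-band controller holding all SV context and state."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}
        self.vb_map = {}                 # VB name -> (owning node name, definition)

    def assign(self, node_name, device):
        self.nodes[node_name].provider_devices.add(device)

    def create_vb(self, node_name, vb_name, definition):
        self.vb_map[vb_name] = (node_name, definition)
        self.nodes[node_name].vbs[vb_name] = definition

    def fail_over(self, failed_name):
        """Move the failed node's provider devices and VBs to a surviving node."""
        failed = self.nodes[failed_name]
        failed.alive = False
        survivor = next((n for n in self.nodes.values() if n.alive), None)
        if survivor is None:
            return None                  # nothing left to fail over to
        survivor.provider_devices |= failed.provider_devices
        failed.provider_devices = set()
        for vb_name, (owner, definition) in list(self.vb_map.items()):
            if owner == failed_name:
                self.vb_map[vb_name] = (survivor.name, definition)
                survivor.vbs[vb_name] = definition   # recreated from controller state only
        return survivor


metac = MetaC([UNode("u1"), UNode("u2")])
metac.assign("u1", "jbod-disk-7")
metac.create_vb("u1", "vb-a", {"size_gb": 100})
survivor = metac.fail_over("u1")         # vb-a and jbod-disk-7 are now managed by u2
```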
  • All U-Node SV instances together form a HA cluster, each U-node having a failover buddy. Figs. 6 to 8 illustrate joining the cluster and finding a default failover "buddy". All members of the cluster are logically linked vertically and horizontally so that in the event of a node failure the cluster is aware of the failure and the appropriate failover of resources to another node can occur.
  • Referring again to the prior art architectures of Figs. 1 and 2, we provide the following analysis. The cost of the Fig. 3 arrangement is lower than that of Figs. 1 and 2. The cost for the system of Fig. 3 in terms of rack space required and hardware is the lowest, as no dedicated storage array appliances are required. All VA nodes are identical; in the simplest implementation only JBOD storage is required. We can define a Rack Value (RV) by an equation which calculates the number of software appliances that can run within a rack, as follows:
    • RV (Rack Value) = (V*(C*Uc)*S*(D*Ud)*Kc)/(k*l); Uc + Ud = 42, where 42 is the height of an industrial rack in U units.
    • V is the number of Virtual Appliances per Core (C) in the rack
    • C is the number of Cores per U of rack space
    • Uc is the number of U of space allocated to Cores
    • D is the number of Disks per U of rack space
    • S is the average size of the disks
    • Ud is the number of U of space allocated to Disks
    • Kc is the coupling constant between virtual appliances and storage; a larger Kc implies faster coupling between the storage media and the virtual appliance
    • k is a function k = f(C/D)
    • l is a function l = f(C/BladeMemoryGigs)
  • This equation describes the value of the Rack in terms of its number of CPU Cores, spinning disks and their size, and the number of Virtual Appliances per core.
  • To increase the Rack Value this equation needs to be maximised. This invention increases the Rack Value for any given appliance type by:
    A) increasing the coupling constant Kc;
    B) maximizing the amount of U space available for storage and compute nodes.
    The invention described maximises Rack Value.
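As a worked illustration, the RV equation can be transcribed directly into code. The numeric values and the placeholder forms chosen for k and l below are made up purely to exercise the formula; the patent only states that k = f(C/D) and l = f(C/BladeMemoryGigs).

```python
def rack_value(V, C, Uc, S, D, Ud, Kc, k, l, blade_memory_gigs):
    """RV = (V*(C*Uc)*S*(D*Ud)*Kc) / (k*l), with Uc + Ud = 42."""
    assert Uc + Ud == 42, "a standard industrial rack is 42U"
    return (V * (C * Uc) * S * (D * Ud) * Kc) / (k(C / D) * l(C / blade_memory_gigs))

# Example with illustrative numbers: 16 cores per U over 20U of compute,
# 12 disks of 4 TB per U over 22U of disks, 10 VAs per core.
rv = rack_value(
    V=10, C=16, Uc=20, S=4, D=12, Ud=22, Kc=1.5,
    k=lambda ratio: 1.0 + ratio,          # assumed placeholder for k = f(C/D)
    l=lambda ratio: 1.0 + ratio,          # assumed placeholder for l = f(C/BladeMemoryGigs)
    blade_memory_gigs=256,
)
print(f"Rack Value: {rv:,.0f}")
```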
  • The "U-nodes" 2 each provide compute and storage resources to run the VAs 17.. The system 1 increases the Rack Value by a U-node which integrates all resources for the VAs in 1 node. Further integration is possible with network switching but for clarity the main part of the following description is of integration of the storage and compute nodes to provide the U-node. The SV of the U-nodes 2 accesses the provider disk devices resources via a fabric 3.
  • The U-node 2 is a universal node where compute and storage run on the CPU core resource of the same machine. In the U-node configuration the storage management "SV" is collapsed to the same node as the compute node. A U-node is not the same as a compute node with DAS storage. A U-node SV manages provider devices that that have the same high coupling as DAS storage, however the SV is tolerant to fault conditions and is physically decoupled fron the SP. The fault tolerance is achieved by the ability of the SV resource to migrate from any one U-node to any other U-node. In this way the U-node SV appears as an N+1 failover controller. Under failure conditions, failover is achieved between the N particpating U-nodes by moving the resource management, the SV and the its product the VB and not by the traditional method of providing multiple failover paths from a storage array to the storage consumer.
  • Again referring to Fig. 5, in a storage system 30 a user of the system ("Tenant") requests a virtual appliance (VA) to be run. The MetaC component 31 is responsible for dispatching workloads (such as VBs) to the U-Nodes 2, interfacing directly with the Hypervisor application 18 of the U-node 2. The MetaC controller 31 is not the manager of the U-Node infrastructure; it is simply the dispatcher of loads to the U-Nodes. Fig. 6 also shows disk resources 34 linked with the U-nodes 2 via a fabric 35.
  • The Hypervisor application 18 has an API which allows creation and execution of virtual machines (VM) 17 within their assigned networks. The Hypervisor application 18 requests CPU, memory, and storage resources from the CPU, memory and storage managers 12-14. The storage representation is implemented as if the storage were local, that is the SV VB is a virtualisation of a storage provider resource.
  • The storage provider 34 is generally understood to be disks or storage arrays attached directly or through a fabric. The SV manages all storage provider devices such as disks, storage arrays or object stores. In this way the SV is a universal consumer of storage from any storage provider and provides VB block devices to any consumer. Fig. 11 shows how the SV and MetaC controller manage storage providers. The MetaC has a provisioning plane which can create storage array volumes (SAVs). These SAVs can be imported over a fabric/protocol to the SV. The SV virtualises the SAVs through its manager functions to virtual block devices (VBs). VBs are then exported to whatever consumer requires them. The local SV is composed of a number of slave managers which implement the tasks of importing SAVs, creating VBs and exporting to storage consumers or storage centric services. The SV does not keep context or state information; the MetaC controller keeps this information. This allows the slave SV layer to fail without any loss of information in the system.
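A hedged end-to-end sketch of this provisioning flow is given below: the MetaC provisioning plane creates a SAV on a storage provider and records all state, while the stateless slave SV on the U-node merely imports the SAV and exports it as a VB to a consumer. The class and method names (ProvisioningPlane, SlaveSV, create_volume and so on) are invented for illustration; the patent does not define a concrete API.

```python
class StorageProvider:
    """Stand-in for a disk shelf or storage array reachable over the fabric."""
    def __init__(self, name):
        self.name = name

    def create_volume(self, name, size_gb):
        # In practice this would create a LUN/volume and export it (e.g. as an iSCSI target).
        return f"{self.name}:{name}:{size_gb}GB"


class ProvisioningPlane:
    """MetaC side: creates SAVs on storage providers and records all state."""
    def __init__(self):
        self.state = {}                        # SAV name -> metadata (held only in MetaC)

    def create_sav(self, sp, name, size_gb, protocol="iscsi"):
        target = sp.create_volume(name, size_gb)
        self.state[name] = {"sp": sp.name, "target": target, "protocol": protocol}
        return self.state[name]


class SlaveSV:
    """U-node side: stateless slave managers that import SAVs and export VBs."""
    def __init__(self):
        self.imported = {}                     # transient only; safely lost if the node fails

    def import_sav(self, sav_meta):
        self.imported[sav_meta["target"]] = sav_meta
        return sav_meta["target"]

    def export_vb(self, target, consumer):
        # Present the imported SAV to the consumer as a local virtual block device.
        return {"vb": f"vb-{target}", "consumer": consumer}


metac = ProvisioningPlane()
sv = SlaveSV()
sav = metac.create_sav(StorageProvider("jbod-1"), "vol-001", size_gb=100)
vb = sv.export_vb(sv.import_sav(sav), consumer="VA-17")
```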
  • The SV 20 of each U-node 2 is attached to storage providers through an S_IOC bus 35. The S_IOC bus 35 is a fabric organised so that all U-Nodes 2 have the same access to the fabric 35 and the attached provider devices of the fabric 35. An example of an S-bus fabric 35 is one where all devices can be discovered by all of the U-Nodes 2. Each SV 20 in each U-Node 2 is allocated, by the MetaC controller 31, a number of provider resources (drives or SAVs) that it manages. Once configured, the SV 20 behaves as if it were a locally attached storage array with high coupling (e.g. a SAS bus) between the disks 34 and the U-Node 2. Fig. 5 shows how multiple U-Nodes 2 provide resources to create multiple appliances on a set of U-Nodes.
  • It is advantageous if all nodes are logically identical, and therefore the configuration of the U-nodes 2 for failover operation requires algorithms for leadership election between peers. Each node's "leadership role" follows the state machine shown in Figs. 7 and 8. The leader is elected by all participating nodes in the system. A leader node can fail without causing the system to fail. The elected leader is responsible for logically organising the U-nodes 2 into two teams with vertical and horizontal failure links as shown in Fig. 6. The steady state of the system is "Nodes Paired": once a leader is elected, the leader's role is to create a configuration of nodes as shown in Fig. 6. In the case of a U-node failure, the remaining configured nodes use their knowledge of pairing to recover from the U-node failure. Failover and failback of resources occur between horizontally paired nodes. The leader is responsible for creating pairs, and all nodes 2 are responsible for making sure their vertical and horizontal pairs are present and functioning. Each node's pairing state follows the state machine shown in Fig. 8. Fig. 6 shows a configured system after leadership election and configuration of horizontal and vertical pairing. Any node that fails will have a failover partner. Failover partners pair Team A with Team B. Should two paired nodes fail at the same time, the vertical pairing will detect the failure and initiate failover procedures. Should a leader fail, a leadership election process occurs as nodes return to the Voter state.
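A hedged sketch of the election-and-pairing flow follows. The "lowest node id wins" rule and all names are assumptions made purely for illustration; the description above only requires that some leadership election algorithm runs between peers and that the leader then arranges the nodes into two teams of horizontal failover pairs.

```python
def elect_leader(node_ids):
    """Every participating node votes; here the lowest id simply wins."""
    return min(node_ids)


def pair_nodes(node_ids):
    """The leader splits nodes into Team A / Team B and pairs them
    horizontally (Team A member <-> Team B member) for failover."""
    ordered = sorted(node_ids)
    team_a, team_b = ordered[0::2], ordered[1::2]
    return list(zip(team_a, team_b))


def on_leader_failure(node_ids, failed_leader):
    """Remaining nodes return to the Voter state, re-elect, then re-pair."""
    survivors = [n for n in node_ids if n != failed_leader]
    return elect_leader(survivors), pair_nodes(survivors)


nodes = [1, 2, 3, 4]
leader, pairs = elect_leader(nodes), pair_nodes(nodes)   # steady state: "Nodes Paired"
print(leader, pairs)
# If the leader fails, the survivors vote again; an odd node out simply
# waits until a new partner is configured for it.
print(on_leader_failure(nodes, leader))
```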
  • System Failure.
  • Rack systems are in general very sensitive to component failures. In the case of a U-Node 2, since all components are identical, any failure of a node requires that the paired controller runs the failed U-node's workload.
  • In the case of a system failure, as shown in Fig. 9, since all U-Nodes are identical any node failure will cause the workload to start on a remaining paired controller. Should a pair fail, the team is responsible for creating a new pair of controllers and distributing the workload.
  • The MetaC controller 31 is also shown in Fig. 9. It holds information about the system to allow each node 2 to make decisions regarding the optimal distribution of workloads.
  • The virtual appliances (VA) that use the storage provided by the SV 20 may run locally on the U-Node 2 to which the SV 20 has migrated, or may run on another U-Node 2. In the case of a VA 17 running on a remote U-Node, the storage resource is provided to the VA by the SV as a network volume over the fabric protocol (such as iSCSI over TCP/IP).
  • System Recovery.
  • In the event of a U-Node 2 recovering from a system failure, it will negotiate with its pair to fail back its workload.
  • Fig. 6 also illustrates this mechanism in which:
    • U-node 2 and U-node 3 are horizontally paired, and
    • U-node 1 and U-node 2 are vertically paired.
    Failure F1.
  • In this failure mode the CPU no longer functions and the node 2 is detected as DEAD. The node's H_Paired device will recover the workload.
  • Failure F2.
  • In this failure mode the memory no longer functions and the node is detected as DEAD. The node's H_Paired device will recover the workload.
  • Failure F3/F4.
  • In these failure modes the network no longer functions and the node is detected as alive but not communicating (for example, a network cable or switch has failed). In this mode the node may be killed (DEAD) depending on the severity of the failure.
    The node's H_Paired device will recover the workload.
  • Failure F5.
  • In this failure mode access to the disk bus no longer functions and the node 2 is detected as alive but its storage is not available. In this mode the node will fail over its s-Array function (SV 20) to its H_Paired device, which will recover the storage function and export the storage devices to the U-node through the N-IOC bus.
  • Failure F6 (U-Node 2 and U-Node 4 Failure).
  • In this failure mode the vertical V_Pair device will detect the node failure and initiate a recovery process. Should no H_Paired device exist, the V_Paired device will recover the workload. (The failure modes F1-F6 and the partner responsible for recovery are summarised in the sketch below.)
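The sketch below summarises, under assumed names, which partner recovers what for each of the failure modes F1-F6 described above. Detection and recovery are in practice distributed state machines; the table only captures the outcomes stated in the text.

```python
# Illustrative mapping of each failure mode to the recovering partner and action.
RECOVERY = {
    "F1_cpu_dead":        ("H_pair", "recover full workload"),
    "F2_memory_dead":     ("H_pair", "recover full workload"),
    "F3_network_failed":  ("H_pair", "recover workload (node may be killed)"),
    "F4_network_failed":  ("H_pair", "recover workload (node may be killed)"),
    "F5_disk_bus_failed": ("H_pair", "take over SV and re-export storage over N_IOC"),
    "F6_pair_failed":     ("V_pair", "detect the failure and start recovery"),
}


def recover(failure_mode: str) -> str:
    """Return which paired device acts, and what it does, for a failure mode."""
    partner, action = RECOVERY[failure_mode]
    return f"{partner}: {action}"


for mode in RECOVERY:
    print(mode, "->", recover(mode))
```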
  • U-Node vs. Compute with DAS
  • A compute node with DAS storage is similar to a U-Node except that the storage node and compute node are bound together, and if one fails the other also fails. In the U-node configuration, if the U-node fails the virtual appliances 17 can restart on an alternative node as discussed in the failure modes above.
  • The U-Node architecture allows one to increase the value RV (Rack Value) by moving the storage array software from a dedicated storage appliance into the same node. This node (U-Node) provides compute, network and storage resources to each VLAN within the node.
  • The increase in Rack Value comes from:
    A) less wasted space on storage appliances, and
    B) higher coupling speed between compute and storage.
    Controller 31 Operation.
  • The MetaC software control entity 31 is responsible for the partitioning and fitting of SP resources to each U-Node. In the case of a failure it detects the U-Node failure and migrates the failed SV's VBs to available U-nodes. The MetaC maintains a map and dependency list of SV resources to every SP storage array. The SV provides storage either to its dependent appliances locally through the Hypervisor 15 or, if the Virtual Appliance 17 cannot be run locally, using a network protocol on the N-IOC (network IOC) bus.
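The map and dependency list attributed to the MetaC controller might be modelled as in the following sketch; the data layout and names are assumptions for illustration. It records which U-node's SV owns which VBs and which SP storage array backs each VB, and reassigns the VBs of a failed U-node to the surviving nodes.

```python
from collections import defaultdict


class MetaCMap:
    """Assumed data structure: VB ownership per U-node and SP backing per VB."""

    def __init__(self):
        self.vb_to_sp = {}                       # VB -> backing SP storage array
        self.node_to_vbs = defaultdict(list)     # U-node -> VBs its SV manages

    def assign(self, node: str, vb: str, sp_array: str) -> None:
        self.vb_to_sp[vb] = sp_array
        self.node_to_vbs[node].append(vb)

    def migrate_failed_node(self, failed: str, survivors: list) -> None:
        """Spread the failed node's VBs over the available U-nodes."""
        for i, vb in enumerate(self.node_to_vbs.pop(failed, [])):
            self.node_to_vbs[survivors[i % len(survivors)]].append(vb)


m = MetaCMap()
m.assign("u-node-1", "vb-1", "sp-array-A")
m.assign("u-node-1", "vb-2", "sp-array-B")
m.migrate_failed_node("u-node-1", ["u-node-2", "u-node-3"])
print(dict(m.node_to_vbs))      # vb-1 moves to u-node-2, vb-2 to u-node-3
```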
  • To satisfy the resource requirements of the Virtual Appliances (VA) in each VLAN, local CPU, memory and networking resources are consumed from the available CPU, memory, and networking resources. The Hypervisor application 18 manages the requesting and allocation of these resources. The Hypervisor application 18 uses an API (the Storage Provisioning Requester (SPR) API) to request storage from the MetaC provisioning engine; the MetaC creates volumes on the SP disks 34 and exports the storage over a number of conventional protocols (an iSCSI, CIFS or NFS share) to the SV 20. The SV 20 then exports the storage resource to the VA through the Hypervisor 15 or as an operating system 16 block device to a storage-centric service. A VA may also use the SPR API directly for self-provisioning.
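A minimal sketch of this provisioning chain is given below. The SPR API is only named in the text, so its signature, and every helper in the sketch, is an assumption for illustration: a requester asks the MetaC provisioning engine for storage, MetaC creates a volume on the SP disks and exports it over a conventional protocol, and the SV re-exports it to the consumer.

```python
def metac_provision(size_gb: int, protocol: str = "iscsi") -> dict:
    """MetaC creates a storage array volume (SAV) and exports it to the SV."""
    assert protocol in ("iscsi", "nfs", "cifs")
    return {"sav": f"sav-{size_gb}g", "protocol": protocol}


def sv_export(sav: dict, consumer: str) -> str:
    """The SV virtualises the imported SAV into a VB and exports it."""
    return f"vb({sav['sav']}) -> {consumer} via {sav['protocol']}"


def spr_request(consumer: str, size_gb: int) -> str:
    """Storage Provisioning Requester: used by the Hypervisor, or directly by
    a virtual appliance for self-provisioning."""
    return sv_export(metac_provision(size_gb), consumer)


print(spr_request("virtual-appliance-17", size_gb=100))
```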
  • In the case of a failure mode occurring, a paired node will recover the workload of the failed device. In the case of a failed pair of nodes, the MetaC controller 31 will distribute the workloads over the remaining nodes. U-nodes are identical in the sense that they rank equally between each other and, if required, run the same workloads. However, U-nodes can be built using hardware systems of different capabilities (i.e. number of CPU cores, gigabytes of memory, S_IOC/N_IOC adaptors). This difference in hardware capabilities means that pairing is not arbitrary; pairs are created according to a pairing policy. Pairing policies may be best-with-best, best-with-worst, random, etc. In a best-with-best pairing policy, U-Nodes can in the nominal case be ranked from highest to lowest SLA (Service Level Agreement, e.g. Gold, Silver, Bronze). In a best-with-worst pairing policy, the average pair SLAs of all pairs are approximately equivalent. The MetaC controller manages workload dispatching according to policies set up in the MetaC controller.
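The pairing policies named above could be expressed as in the following sketch; the capability score and function names are assumptions, and only the policies themselves (best-with-best, best-with-worst, random) come from the text.

```python
import random


def pair(nodes_by_capability: dict, policy: str = "best-with-best"):
    """nodes_by_capability: node name -> capability score (higher is better)."""
    ranked = sorted(nodes_by_capability, key=nodes_by_capability.get, reverse=True)
    if policy == "best-with-best":
        return list(zip(ranked[0::2], ranked[1::2]))
    if policy == "best-with-worst":
        half = len(ranked) // 2
        return list(zip(ranked[:half], reversed(ranked[half:])))
    shuffled = random.sample(ranked, len(ranked))     # "random" policy
    return list(zip(shuffled[0::2], shuffled[1::2]))


caps = {"u1": 64, "u2": 48, "u3": 32, "u4": 16}       # e.g. CPU core counts
print(pair(caps, "best-with-best"))    # [('u1', 'u2'), ('u3', 'u4')]: Gold and Bronze pairs
print(pair(caps, "best-with-worst"))   # [('u1', 'u4'), ('u2', 'u3')]: similar average pair SLAs
```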
  • Fig. 10 shows how the policies are used to dispatch workloads to the paired U-nodes. In this example U-Nodes are grouped by capability into various SLA groups. Depending on the workload, the required SLA and the resource availability on the existing U-Nodes, the MetaC controller 31 will dispatch the workload to the appropriate U-node. For any workload the MetaC controller 31 is responsible for understanding the existing workloads, the U-node failure coverage and resiliency, and the required SLA, and for dispatching new workloads to the most appropriate U-Node. For example, the workload SLA may require High Availability, and therefore only functioning paired nodes are candidates to run the workload.
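A hedged sketch of such policy-driven dispatching is shown below; the selection rules and field names are assumptions chosen to mirror the description, in which a workload's SLA group, its High Availability requirement and the spare capacity of the candidate U-nodes determine placement.

```python
def dispatch(workload, u_nodes):
    """workload: {'sla': 'gold', 'needs_ha': True, 'cpu': 4}
    u_nodes: list of {'name', 'sla', 'paired', 'free_cpu'} dicts."""
    candidates = [n for n in u_nodes
                  if n["sla"] == workload["sla"]
                  and n["free_cpu"] >= workload["cpu"]
                  and (n["paired"] or not workload["needs_ha"])]
    if not candidates:
        raise RuntimeError("no U-node satisfies the SLA / HA / capacity policy")
    # Prefer the node with the most headroom (a simple placement heuristic).
    return max(candidates, key=lambda n: n["free_cpu"])["name"]


u_nodes = [
    {"name": "u1", "sla": "gold",   "paired": True,  "free_cpu": 8},
    {"name": "u2", "sla": "gold",   "paired": False, "free_cpu": 16},
    {"name": "u3", "sla": "silver", "paired": True,  "free_cpu": 32},
]
print(dispatch({"sla": "gold", "needs_ha": True, "cpu": 4}, u_nodes))   # -> u1
```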
  • The invention is not limited to the embodiments described, but may be varied in construction and detail.

Claims (14)

  1. A data storage system (30) comprising:
    at least two universal nodes each comprising:
    CPU resources (12),
    memory resources (13),
    network interface resources (14), and
    a storage virtualiser (20); and
    a system controller (31),
    each storage virtualizer (20) in each universal node is allocated by the system controller a number of storage provider resources (34) that it manages, the system controller (31) maintaining a map for dependency of virtual appliances (17) to storage provider resources, and storing the context and state of each storage virtualizer, wherein each storage virtualizer is a slave device,
    each storage virtualiser provides storage (34) to dependent virtual appliances (17), said storage being provided from said storage providers through a network protocol (35, N_IOC, S_IOC),
    each storage virtualizer (20) is adapted to manage storage providers (34) and is tolerant to fault conditions and the fault tolerance is achieved by an ability of the storage virtualiser (20) to migrate from one universal node to any other universal node, in which, if any universal node fails, any other universal node can be reconfigured by the system controller to take over the storage providers by recovering storage virtualizer context and state held by the system controller (31); virtual appliances (17(a)) may run locally on a universal node where the storage virtualizer has migrated to or can be run on another universal node, and in that the system controller (31) is adapted to execute an algorithm for leadership election between peer universal nodes for failover protection, wherein the system controller (31) is adapted to allow each universal node to participate in a leadership election, and wherein each universal node is adapted to execute a leadership role which follows a state machine, and in which an elected leader is responsible for logically organising the universal nodes into teams with failure links, and is adapted to create a configuration of nodes, and in the case of a node failure, the remaining configured nodes use their knowledge of pairing to recover from the failure, and wherein failover and/or failback of resources occurs between paired nodes, the leader is responsible for creating pairs, and all nodes are responsible for ensuring that their pairs are present and functioning.
  2. A storage system as claimed in claim 1, wherein said CPU, memory, network interface and storage virtualizer resources (12-14, 20) are connected between buses within each universal node (2), wherein at least one of said buses links said resources with virtual appliance instances (17), and wherein each universal node comprises a Hypervisor application (15) for the virtual appliance instances (17(a)).
  3. A storage system as claimed in any preceding claim, wherein the storage virtualiser (20) is attached to said storage providers (34) through a storage bus (S_IOC) organised so that a plurality of universal nodes (2) have the same access to a fabric (35) and storage providers (34) attached to the fabric, and wherein a plurality of storage providers (34) can be discovered by a plurality of universal nodes (2), and wherein each storage virtualiser (20) behaves as if it were a locally attached storage array with coupling between the storage devices and the universal node.
  4. A storage system as claimed in any preceding claim, wherein the system controller (31) is adapted to partition and fit the virtual appliances (17(a)) within each universal node.
  5. A storage system as claimed in any preceding claim, wherein the universal nodes (2) are configured so that in the case of a system failure each paired universal node (2) will failover resources and workloads to each other.
  6. A storage system as claimed in any preceding claim, wherein a Hypervisor application (18) manages requesting and allocation of these resources within each universal node.
  7. A storage system as claimed in any of claims 2 to 6, wherein the system further comprises a provisioning engine (12, 13, 14), and a Hypervisor application (15) is adapted to use an API to request storage from the provisioning engine, which is in turn adapted to request a storage array as a virtualization of a storage provider resource to create a storage volume and export it to the Hypervisor application (15) through the storage virtualiser.
  8. A storage system as claimed in claim 7 wherein, to satisfy storage requirements of virtual appliances (17(a)) in a universal node, each local storage array is adapted to respond to requests from a storage provisioning requester running on the universal node.
  9. A storage system as claimed in any preceding claim, wherein the universal nodes are identical.
  10. A storage system as claimed in any preceding claim, wherein the system controller (31) is adapted to dispatch workloads including virtual appliances to the universal nodes interfacing directly with the system controller or with a Hypervisor application.
  11. A storage system as claimed in any of claims 2 to 10, wherein the system controller is responsible for dispatching workloads including virtual blocks to the universal nodes interfacing directly with a Hypervisor application of the universal node.
  12. A storage system as claimed in any of claims 2 to 11, wherein the Hypervisor application (18) has an API which allows creation and execution of virtual appliances (17(a)), and the Hypervisor application requests CPU, memory, and storage resources (12-14) from the CPU, memory and storage managers, and a storage representation is implemented as if the storage were local, in which the storage virtualization virtual block is a virtualisation of a storage provider resource.
  13. A storage system as claimed in any preceding claim, wherein the system controller (31) is adapted to hold information about the system (30) to allow each node (2) to make decisions regarding optimal distribution of workloads.
  14. A storage system as claimed in any preceding claim, wherein the system controller (31) is responsible for the partitioning and fitting of storage provider resources (34) to each universal node (2), and in the case of a failure it detects the failure and migrates failed storage virtualizer virtual blocks to available universal nodes, and the system controller maintains a map and dependency list of storage virtualizer resources to every storage provider storage array.
EP13731793.9A 2012-06-29 2013-06-26 Data storage with virtual appliances Active EP2867763B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP17186335.0A EP3279789A1 (en) 2012-06-29 2013-06-26 Data storage with virtual appliances

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IE20120300 2012-06-29
PCT/EP2013/063437 WO2014009160A1 (en) 2012-06-29 2013-06-26 Data storage with virtual appliances

Related Child Applications (2)

Application Number Title Priority Date Filing Date
EP17186335.0A Division-Into EP3279789A1 (en) 2012-06-29 2013-06-26 Data storage with virtual appliances
EP17186335.0A Division EP3279789A1 (en) 2012-06-29 2013-06-26 Data storage with virtual appliances

Publications (2)

Publication Number Publication Date
EP2867763A1 EP2867763A1 (en) 2015-05-06
EP2867763B1 true EP2867763B1 (en) 2017-09-27

Family

ID=48699811

Family Applications (2)

Application Number Title Priority Date Filing Date
EP17186335.0A Withdrawn EP3279789A1 (en) 2012-06-29 2013-06-26 Data storage with virtual appliances
EP13731793.9A Active EP2867763B1 (en) 2012-06-29 2013-06-26 Data storage with virtual appliances

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP17186335.0A Withdrawn EP3279789A1 (en) 2012-06-29 2013-06-26 Data storage with virtual appliances

Country Status (3)

Country Link
US (2) US9747176B2 (en)
EP (2) EP3279789A1 (en)
WO (1) WO2014009160A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107209642B (en) * 2015-01-15 2021-02-12 瑞典爱立信有限公司 Method and entity for controlling resources in a cloud environment
US10437506B2 (en) 2015-08-17 2019-10-08 Microsoft Technology Licensing Llc Optimal storage and workload placement, and high resiliency, in geo-distributed cluster systems
WO2017066940A1 (en) * 2015-10-21 2017-04-27 华为技术有限公司 Monitoring method and monitoring device under network virtualization environment, and network node
CN108599979B (en) * 2018-03-05 2021-05-28 京信通信系统(中国)有限公司 Method and device for converting non-HA mode into HA mode
US10963356B2 (en) 2018-04-18 2021-03-30 Nutanix, Inc. Dynamic allocation of compute resources at a recovery site
US10846079B2 (en) 2018-11-14 2020-11-24 Nutanix, Inc. System and method for the dynamic expansion of a cluster with co nodes before upgrade
US11513690B2 (en) * 2020-07-10 2022-11-29 EMC IP Holding Company LLC Multi-dimensional I/O service levels

Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6708175B2 (en) * 2001-06-06 2004-03-16 International Business Machines Corporation Program support for disk fencing in a shared disk parallel file system across storage area network
US6950855B2 (en) * 2002-01-18 2005-09-27 International Business Machines Corporation Master node selection in clustered node configurations
US7191357B2 (en) * 2002-03-29 2007-03-13 Panasas, Inc. Hybrid quorum/primary-backup fault-tolerance model
US7290260B2 (en) * 2003-02-20 2007-10-30 International Business Machines Corporation Dynamic processor redistribution between partitions in a computing system
US7366166B2 (en) * 2003-04-25 2008-04-29 Alcatel Usa Sourcing, L.P. Data switching using soft configuration
US7519008B2 (en) * 2003-06-05 2009-04-14 International Business Machines Corporation Ineligible group member status
US20050108593A1 (en) * 2003-11-14 2005-05-19 Dell Products L.P. Cluster failover from physical node to virtual node
US7669032B2 (en) * 2003-11-26 2010-02-23 Symantec Operating Corporation Host-based virtualization optimizations in storage environments employing off-host storage virtualization
US7751327B2 (en) * 2004-02-25 2010-07-06 Nec Corporation Communication processing system, packet processing load balancing device and packet processing load balancing method therefor
US7770059B1 (en) * 2004-03-26 2010-08-03 Emc Corporation Failure protection in an environment including virtualization of networked storage resources
US7529097B2 (en) * 2004-05-07 2009-05-05 Rackable Systems, Inc. Rack mounted computer system
US20060155912A1 (en) * 2005-01-12 2006-07-13 Dell Products L.P. Server cluster having a virtual server
JP4733399B2 (en) * 2005-01-28 2011-07-27 株式会社日立製作所 Computer system, computer, storage device and management terminal
US7953457B2 (en) * 2006-04-28 2011-05-31 Research In Motion Limited Methods and apparatus for reducing power consumption for mobile devices using broadcast-to-unicast message conversion
US8225134B2 (en) * 2007-04-06 2012-07-17 Cisco Technology, Inc. Logical partitioning of a physical device
US8341623B2 (en) * 2007-05-22 2012-12-25 International Business Machines Corporation Integrated placement planning for heterogenous storage area network data centers
US8019965B2 (en) * 2007-05-31 2011-09-13 International Business Machines Corporation Data migration
US8015432B1 (en) * 2007-09-28 2011-09-06 Symantec Corporation Method and apparatus for providing computer failover to a virtualized environment
US8307239B1 (en) * 2007-10-26 2012-11-06 Maxsp Corporation Disaster recovery appliance
US8352950B2 (en) * 2008-01-11 2013-01-08 International Business Machines Corporation Algorithm to share physical processors to maximize processor cache usage and topologies
US8230256B1 (en) * 2008-06-06 2012-07-24 Symantec Corporation Method and apparatus for achieving high availability for an application in a computer cluster
US7886183B2 (en) * 2008-08-07 2011-02-08 Symantec Operating Corporation Providing fault tolerant storage system to a cluster
EP2350837A4 (en) * 2008-09-15 2012-10-17 Virsto Software Corp Storage management system for virtual machines
US8260925B2 (en) * 2008-11-07 2012-09-04 International Business Machines Corporation Finding workable virtual I/O mappings for HMC mobile partitions
US8549516B2 (en) * 2008-12-23 2013-10-01 Citrix Systems, Inc. Systems and methods for controlling, by a hypervisor, access to physical resources
US10203993B2 (en) * 2009-02-18 2019-02-12 International Business Machines Corporation Method and system for continuous optimization of data centers by combining server and storage virtualization
US8578083B2 (en) 2009-03-03 2013-11-05 Vmware, Inc. Block map based I/O optimization for storage virtual appliances
US8307116B2 (en) * 2009-06-19 2012-11-06 Board Of Regents Of The University Of Texas System Scalable bus-based on-chip interconnection networks
US9031081B2 (en) * 2009-08-06 2015-05-12 Broadcom Corporation Method and system for switching in a virtualized platform
WO2011046813A2 (en) 2009-10-12 2011-04-21 Veeam Software International Ltd. Item-level restoration and verification of image level backups
WO2011049574A1 (en) 2009-10-22 2011-04-28 Hewlett-Packard Development Company, L.P. Virtualized migration control
EP2510436A1 (en) * 2009-12-11 2012-10-17 Deutsche Telekom AG Computer cluster and method for providing a disaster recovery functionality for a computer cluster
US9015129B2 (en) 2010-02-09 2015-04-21 Veeam Software Ag Cross-platform object level restoration from image level backups
US8510590B2 (en) * 2010-03-17 2013-08-13 Vmware, Inc. Method and system for cluster resource management in a virtualized computing environment
US8751857B2 (en) * 2010-04-13 2014-06-10 Red Hat Israel, Ltd. Monitoring of highly available virtual machines
US8966020B2 (en) * 2010-11-02 2015-02-24 International Business Machines Corporation Integration of heterogeneous computing systems into a hybrid computing system
US8984330B2 (en) * 2011-03-28 2015-03-17 Siemens Corporation Fault-tolerant replication architecture
US8984121B1 (en) * 2011-04-21 2015-03-17 Intuit Inc. Dependency visualization and fault diagnosis using multidimensional models for software offerings
US8635493B2 (en) * 2011-05-17 2014-01-21 Vmware, Inc. High availability system allowing conditionally reserved computing resource use and reclamation upon a failover
US8751675B2 (en) * 2011-06-21 2014-06-10 Cisco Technology, Inc. Rack server management
WO2013042271A1 (en) * 2011-09-22 2013-03-28 富士通株式会社 Electronic computer system and virtual machine deployment method
WO2013160933A1 (en) * 2012-04-23 2013-10-31 Hitachi, Ltd. Computer system and virtual server migration control method for computer system
US10831728B2 (en) * 2012-05-29 2020-11-10 International Business Machines Corporation Application-controlled sub-LUN level data migration
US9703482B2 (en) * 2012-06-29 2017-07-11 Vmware, Inc. Filter appliance for object-based storage system
US9052828B2 (en) * 2013-05-31 2015-06-09 International Business Machines Corporation Optimal volume placement across remote replication relationships
TW201530423A (en) * 2014-01-22 2015-08-01 Kung-Lan Wang Touch method and touch system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
US20170315883A1 (en) 2017-11-02
US20150186226A1 (en) 2015-07-02
WO2014009160A1 (en) 2014-01-16
EP2867763A1 (en) 2015-05-06
EP3279789A1 (en) 2018-02-07
US9747176B2 (en) 2017-08-29

Similar Documents

Publication Publication Date Title
EP2867763B1 (en) Data storage with virtual appliances
JP6199514B2 (en) Scheduling fabric distributed resources
US10091295B1 (en) Converged infrastructure implemented with distributed compute elements
US8914546B2 (en) Control method for virtual machine and management computer
US9871851B2 (en) Migrating private infrastructure services to a cloud
US8874749B1 (en) Network fragmentation and virtual machine migration in a scalable cloud computing environment
US9569242B2 (en) Implementing dynamic adjustment of I/O bandwidth for virtual machines using a single root I/O virtualization (SRIOV) adapter
US9400664B2 (en) Method and apparatus for offloading storage workload
US9274817B1 (en) Storage quality-of-service control in distributed virtual infrastructure
US10942759B2 (en) Seamless virtual standard switch to virtual distributed switch migration for hyper-converged infrastructure
US9134915B2 (en) Computer system to migrate virtual computers or logical paritions
US10162681B2 (en) Reducing redundant validations for live operating system migration
US20160170773A1 (en) Data processing
US9602341B1 (en) Secure multi-tenant virtual control server operation in a cloud environment using API provider
US11481356B2 (en) Techniques for providing client interfaces
US10007673B1 (en) Cluster file system comprising data mover module arranged between front-end and back-end file systems
US11012510B2 (en) Host device with multi-path layer configured for detecting target failure status and updating path availability
US20150052535A1 (en) Integrated computer system and its control method
US11550613B2 (en) Computer system
US20220174021A1 (en) Endpoint notification of storage area network congestion
US11119801B2 (en) Migrating virtual machines across commonly connected storage providers
Zhu et al. Building High Performance Storage for Hyper-V Cluster on Scale-Out File Servers using Violin Windows Flash Arrays
Yen et al. Roystonea: A cloud computing system with pluggable component architecture
US9740527B2 (en) Load distribution of logical switch routers in a distributed system
Signoretti Down to earth report: HP 3PAR StoreServ storage
