WO2016068982A1 - Providing file services over a storage area network - Google Patents

Providing file services over a storage area network

Info

Publication number: WO2016068982A1
Authority: WO (WIPO PCT)
Prior art keywords: file, storage, file services, virtual machine, given
Application number: PCT/US2014/063333
Other languages: English (en)
Inventors: Ronald John LUMAN II, Matthew David BONDURANT, Hrishikesh Talgery, Patrick Moore, Sumit Sarkar
Original Assignee: Hewlett Packard Enterprise Development LP
Priority date: 2014-10-31 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2014-10-31
Publication date: 2016-05-06
Application filed by: Hewlett Packard Enterprise Development LP
Priority to: PCT/US2014/063333
Publication of: WO2016068982A1

Classifications

    • All classifications fall under G (Physics); G06 (Computing; Calculating or Counting); G06F (Electric Digital Data Processing):
    • G06F 9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45558: Hypervisor-specific management and integration aspects
    • G06F 2009/45579: I/O management, e.g. providing access to device drivers or storage
    • G06F 11/1438: Restarting or rejuvenating
    • G06F 11/1441: Resetting or repowering
    • G06F 11/1484: Generic software techniques for error detection or fault masking by means of middleware or OS functionality involving virtual machines
    • G06F 11/2015: Redundant power supplies
    • G06F 11/2094: Redundant storage or storage space

Definitions

  • a computer may access a storage area network (SAN) for purposes of storing and retrieving large amounts of data.
  • the typical SAN includes a consolidated pool of mass storage devices (magnetic tape drives, hard drives, optical drives, and so forth); and the SAN typically provides relatively high speed block level storage, which may be advantageous for backup applications, archival applications, database applications and other such purposes.
  • FIG. 1 is a schematic diagram of a computer system having a storage area network (SAN) appliance that provides block level storage and file services according to an example implementation.
  • FIG. 2 is an illustration of data layers used by the SAN appliance of Fig. 1 according to an example implementation.
  • Fig. 3A is an illustration of the use of configuration data to transfer file services from one file services virtual machine to another virtual machine according to an example implementation.
  • FIGs. 3B, 3C and 3D are flow diagrams depicting techniques to take over file services for a file services virtual machine that has an associated failure, according to example implementations.
  • FIG. 4A is an illustration of a file persona used by the SAN appliance of Fig. 1 according to an example implementation.
  • FIG. 4B is an illustration of a file share creation process used by the SAN appliance of Fig. 1 according to an example implementation.
  • FIG. 5 is a flow diagram depicting a technique to provision block storage for a file share according to an example implementation.
  • FIG. 6 is a schematic diagram of a physical machine of a SAN appliance according to an example implementation.
  • a computer system 100 provides hosted services, such as (as examples) datacenter or cloud services for clients (not shown).
  • the clients of the system 100 may include thin clients, tablets, portable computers, smartphones, desktop computers, servers and so forth and may access (via a network fabric not shown in Fig. 1 ) the computer system 100 for purpose of using the hosted services.
  • the computer system 100 may be used to host one or multiple cloud services, such as Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).
  • the computer system 100 includes hosts 180 (application servers, for example) that communicate with the clients and storage that is provided by a storage area network (SAN) appliance 110 of the computer system 100.
  • Although the computer system 100 is schematically depicted in Fig. 1 as being contained in a box, the components of the computer system 100 may be locally disposed at a given site or may be geographically distributed at multiple locations, depending on the particular implementation.
  • the SAN appliance 110 provides block level storage and file services for the computer system 100. More specifically, the SAN appliance 110 includes actual, or physical, storage nodes 120 (eight storage nodes 120-1 to 120-8 being depicted in Fig. 1, as examples), which are intercoupled by internal network fabric 140.
  • the storage nodes 120 are, in general, constructed to perform block level storage services for the SAN appliance 110; and at least some of the storage nodes 120 serve as physical platforms that host guest virtual machines (described herein) that provide file services for the SAN appliance 110.
  • the hosts 180 communicate with the storage nodes 120 via network fabric 178, which may include cabling, switches, gateways, and so forth.
  • a given storage node 120 is a physical machine that has associated compute resources and storage.
  • the compute resources may include one or multiple blade servers; and the storage for the node 120 may be provided by physical mass storage devices, such as magnetic tape drives, hard drives, optical drives, solid state drives, and so forth, which are disposed onboard the node 120 and/or coupled to the node 120.
  • one or more mass storage devices may be coupled to a given storage node 120 using a Serial Attached Small Computer System Interface (SCSI), or "SAS," connection, as an example.
  • the internal network fabric 140 of the SAN 110 allows block storage to be shared by multiple storage nodes 120.
  • This shared storage, along with the writing of data to persistent storage in a consistent state, allows the storage nodes 120 to be grouped into failover arrays, where a given failover array includes two to eight storage nodes 120, depending on the particular implementation.
  • should a given storage node 120 fail, another storage node 120 of its failover array may take over.
  • the failed storage node 120 may then be serviced or replaced for purposes of restoring the node 120 to allow the node 120 to once again provide reliable storage services.
  • the SAN appliance 110 provides file services, in accordance with example implementations.
  • the file services that are provided by the SAN appliance 110 permit the hosts 180 to use a file protocol, such as a Network File System (NFS) or Server Message Block (SMB) protocol, to create, modify and retrieve files that are stored on the SAN appliance 110.
  • the SAN appliance 110 includes file service virtual machines (VMs) 130 (four file service VMs 130-1, 130-2, 130-3 and 130-4 being depicted as examples in Fig. 1).
  • the file service VMs are guest VMs that execute on respective storage nodes 120, which provide the physical platforms for the VMs.
  • file service VMs 130-1, 130-2, 130-3 and 130-4 are hosted by the storage nodes 120-1, 120-2, 120-5 and 120-6, respectively.
  • systems and techniques are disclosed herein that leverage the common block storage among the storage nodes 120 to allow file services for a given file services VM 130 that executes on a given storage node 120 to be transferred, or pushed, to another file services VM 130.
  • a given file services VM 130 may be adversely affected by a software, hardware or connectivity failure, which, in turn, impacts the VM's ability to provide its file services.
  • these file services may be at least temporarily pushed to a file services VM 130 on another storage node 120 until the problem causing the failure has been cured.
  • the SAN appliance 110 may include a service processor 172, which may be an actual physical machine or a VM that is hosted on a physical machine, depending on the particular implementation.
  • the service processor 172 is in communication with the storage nodes 120 via management network fabric 174 to perform such functions as remote error detection and reporting and to support diagnostic and maintenance activities.
  • the SAN appliance 110 may also include a management processor 176, which may be another actual physical machine.
  • the management processor 176 is coupled to the file services VMs 130 via the network fabric 178 to perform file services management, including generating such commands as commands to create, show, set and remove file shares.
  • Fig. 2 illustrates data layers 200 of the SAN appliance 110, in accordance with example implementations.
  • the data layers 200 include a base, physical storage layer 210 that is formed from physical mass storage devices, or physical drives 211.
  • the physical drives 211 are drives that are built into or closely coupled to the storage nodes 120 (drives 211 coupled to the storage nodes 120 via Serial Attached SCSI connections, for example), in accordance with example implementations.
  • the data layers 200 include the following layers in order of increasing abstraction: chunklets 220; logical disks 230; common provisioning groups 240; and virtual volumes 251.
  • each data layer is created from the elements of the preceding, less abstract layer.
  • the chunklets 220 are created from the physical storage layer 210; the logical disks 230 are created from groups of chunklets 220; the common provisioning groups 240 are groups of logical disks 230; and the virtual volumes 251 use storage space provided by the common provisioning groups 240 and are exported to the hosts 180.
  • each chunklet 220 occupies a contiguous space on a physical drive 211, and a given chunklet 220 is assigned to a particular logical disk 230.
  • the logical disk 230, in accordance with example implementations, is a collection of chunklets 220 arranged as rows of RAID sets.
  • each RAID set may be formed from chunklets 220 from different physical drives 21 1 .
  • logical disk 230-1 may be a RAID 5 logical disk and may be formed from chunklets 220-1, 220-2, 220-5 and 220-6.
  • the logical disks 230 are pooled together to form the common provisioning groups 240, which allocate space to the virtual volumes 251 .
  • a given common provisioning group 240 is a virtual pool of logical disks 230, which allocates space to virtual volumes 251 on demand.
  • the virtual volumes 251, in general, are exported as Logical Unit Numbers (LUNs) to the hosts 180.
  • the virtual volumes 251 include thinly-provisioned virtual volumes 250 and fully-provisioned virtual volumes 252.
  • the fully-provisioned virtual volume 252 is a volume that uses logical disks 230 that belong to a common provisioning group 240 and which has a set amount of user space that is allocated for user data.
  • the fully-provisioned volume size is fixed.
  • the thinly-provisioned virtual volume 250 also belongs to a common provisioning group 240 and draws space from the common provisioning group pool as needed, allocating space on demand in small increments.
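
To make the layering just described concrete, the short Python sketch below models the chain from chunklets up to a thinly-provisioned virtual volume. It is an illustration only: the class names, the fixed chunklet size and the simplified RAID 5 capacity rule are assumptions, not details taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chunklet:
    """Contiguous region carved out of one physical drive (220)."""
    drive_id: str
    offset_mb: int
    size_mb: int = 1024  # illustrative fixed chunklet size

@dataclass
class LogicalDisk:
    """Rows of RAID sets built from chunklets on different drives (230)."""
    raid_level: int
    chunklets: List[Chunklet]

    def usable_mb(self) -> int:
        # Simplified: treat the chunklets as one RAID 5 row and reserve one chunklet for parity.
        total = sum(c.size_mb for c in self.chunklets)
        return total - self.chunklets[0].size_mb if self.raid_level == 5 else total

@dataclass
class CommonProvisioningGroup:
    """Virtual pool of logical disks that allocates space on demand (240)."""
    logical_disks: List[LogicalDisk] = field(default_factory=list)
    allocated_mb: int = 0

    def free_mb(self) -> int:
        return sum(ld.usable_mb() for ld in self.logical_disks) - self.allocated_mb

    def allocate(self, size_mb: int) -> int:
        if size_mb > self.free_mb():
            raise RuntimeError("common provisioning group exhausted")
        self.allocated_mb += size_mb
        return size_mb

@dataclass
class ThinVirtualVolume:
    """Thinly-provisioned virtual volume (250): draws pool space in small increments."""
    cpg: CommonProvisioningGroup
    increment_mb: int = 256
    backed_mb: int = 0

    def write(self, needed_mb: int) -> None:
        while self.backed_mb < needed_mb:
            self.backed_mb += self.cpg.allocate(self.increment_mb)
```

Under these assumptions, a fully-provisioned virtual volume 252 would differ only in calling the pool's allocate method once, up front, for its entire fixed size.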
  • the SAN appliance 110 manages objects pursuant to a file persona 400 that is depicted in Fig. 4A.
  • a file services interface 410 of the SAN appliance 110 is used to create and manage file shares 420 and corresponding file stores 430.
  • a file share 420 is a managed group of files for a given customer, and the file stores 430 set forth policies and settings for the managed file objects.
  • the file persona 400 further includes file provisioning groups 440, where each file provisioning group 440 is associated with a file services VM 130.
  • the file services VM 130 parallels an actual appliance and has an associated set of virtual interfaces/Internet protocol (IP) addresses for connections. Moreover, the file services VM 130 has an associated set of properties, such as quotas and snapshots. There may be multiple file provisioning groups 440 per storage node 120, which are serviced by the associated file services VM 130.
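
As a rough data model for the file persona objects above (shares 420, stores 430, file provisioning groups 440 and the owning file services VM), the following sketch may help; every field name shown is an assumption made for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FileStore:
    """Policies and settings governing a group of managed file objects (430)."""
    name: str
    quota_gb: int = 0              # 0 means "no quota" in this sketch
    snapshots_enabled: bool = False

@dataclass
class FileShare:
    """A managed group of files for a given customer (420)."""
    name: str
    store: FileStore

@dataclass
class FileProvisioningGroup:
    """Pool of block storage backing one or more shares; served by one file services VM (440)."""
    fpg_id: str
    owner_vm: str                  # e.g. "fs-vm-130-1" (illustrative identifier)
    shares: List[FileShare] = field(default_factory=list)

@dataclass
class FileServicesVM:
    """Parallels an appliance: virtual IP interfaces plus per-VM properties."""
    vm_id: str
    virtual_ips: List[str] = field(default_factory=list)
    provisioning_groups: Dict[str, FileProvisioningGroup] = field(default_factory=dict)
```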
  • the file services VMs 130 store configuration data in persistent storage, which, in turn, allows the file services of a given VM 130 to be taken over by a file services VM 130 on another storage node 120.
  • a given file services VM 130 may at least temporarily push its file services to another storage node 120 should the file services VM 130 be affected by a failure that impacts the ability of the VM 130 to reliably provide its file services. In this manner, a given file services VM 130 may be unable to reliably provide file services due to a hardware, software or connectivity failure.
  • when the file services VM 130 directly or indirectly learns of such a failure, the VM 130 pushes its file services to another storage node 120 so that a file services VM 130 on that other storage node 120 may take over providing the file services while corrective action is being performed to address the failure.
  • the file services VM 130 stores data in persistent storage, which describes the configuration of its file services and allows takeover of these file services.
  • Fig. 3A depicts configuration data 330 that an example file services VM 130-1 stores in persistent storage 320.
  • the configuration data 330 may include data 331 representing the virtual ports that are being used by the file services VM 130 to communicate with the hosts 180 (see Fig. 1); and the configuration data 330 may include data 333 that represents the storage that is attached to the VM 130.
  • the configuration data 330 may include data 335 that represents access control information that regulates whether the file services VM 130-1 may access given attached storage.
  • access control may be used to prevent the file services VM 130-1 from accessing certain attached storage in the event that volumes are being checked after a failure occurs, as access to these volumes may be prevented until the volumes are verified as being correct or reliable.
  • the data 335 may be used to prevent storage undergoing power failure recovery from being attached.
  • the file services VM 130-1 pushes its file services to a file services VM on another storage node 120, such as file services VM 130-2 for the example of Fig. 3A; and the file services VM 130-2 uses the configuration data 330 to take over file services for the file services VM 130-1.
  • a file services management entity, or engine 131, of the file services VM 130-1 monitors events in the SAN appliance 110 to either directly detect a failure that impacts the VM 130-1 or be notified of such a failure.
  • the file services management engine 131 may push ownership of the corresponding file services to another storage node 120.
  • the file services management engine 131 of the file services VM 130-1, in response to having detected an associated failure, pushes its file services to the VM 130-2; and the VM 130-2 accesses the configuration data 330 to configure itself to take over the file services.
  • the file services management engine 131 of the file services VM 130 may initiate migration of the file services VM 130 to another storage node, where this migration occurs using configuration data 330.
  • storage that is commonly accessible across the SAN appliance 110 may be used to store the configuration data 330.
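
Putting the pieces above together, a takeover driven by the configuration data 330 might be sketched as follows. The record fields loosely mirror the virtual-port data 331, attached-storage data 333 and access-control data 335; the function names (save_config, take_over_file_services and the attach/port stubs) are invented for this sketch and are not an API of the SAN appliance.

```python
import json
from dataclasses import dataclass, field, asdict
from typing import Dict, List

@dataclass
class FileServicesConfig:
    """Persistent configuration (330) that lets another VM take over file services."""
    virtual_ports: List[str]                 # data 331: virtual ports / IPs in use
    attached_volumes: List[str]              # data 333: storage attached to the VM
    access_control: Dict[str, bool] = field(default_factory=dict)  # data 335: volume -> attach allowed?

def save_config(cfg: FileServicesConfig, path: str) -> None:
    """Write the configuration to commonly accessible persistent storage."""
    with open(path, "w") as f:
        json.dump(asdict(cfg), f)

def take_over_file_services(path: str) -> FileServicesConfig:
    """Sketch of a surviving VM configuring itself from the failed VM's configuration."""
    with open(path) as f:
        cfg = FileServicesConfig(**json.load(f))
    # Only attach volumes that access control currently permits; volumes still
    # undergoing post-power-failure checks stay detached for now.
    for vol in cfg.attached_volumes:
        if cfg.access_control.get(vol, True):
            attach_volume(vol)               # placeholder for the real attach step
    for port in cfg.virtual_ports:
        bring_up_virtual_port(port)          # placeholder for the real port bring-up
    return cfg

def attach_volume(vol: str) -> None:           # illustrative stub
    print(f"attaching {vol}")

def bring_up_virtual_port(port: str) -> None:  # illustrative stub
    print(f"enabling {port}")
```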
  • the file services may be upgraded one storage node 120 at a time, in parallel with the SAN upgrade of the hosting hardware node 120, in accordance with example implementations.
  • the above-described failover capability maintains data availability.
  • a technique 350 includes determining (decision block 354) whether a failure associated with a given file services virtual machine (VM) has been detected and, if so, pushing the file services that are provided by that VM to another storage node 120.
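
In code form, technique 350 reduces to a small monitor-and-push loop, sketched below with hypothetical detect_failure and push_file_services hooks standing in for the engine 131's real failure detection and ownership transfer.

```python
import time
from typing import List

def monitor_and_push(vm_id: str, candidate_nodes: List[str], poll_seconds: float = 5.0) -> None:
    """Technique 350, roughly: watch for a failure, then push file services away."""
    while True:
        if detect_failure(vm_id):                    # decision block 354 (hypothetical hook)
            target = candidate_nodes[0]              # pick another storage node
            push_file_services(vm_id, target)        # hand ownership to that node
            break
        time.sleep(poll_seconds)

def detect_failure(vm_id: str) -> bool:              # illustrative stub
    return False

def push_file_services(vm_id: str, target_node: str) -> None:  # illustrative stub
    print(f"pushing file services of {vm_id} to {target_node}")
```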
  • the file services management engine 131 may perform a technique 364 for purposes of recovering VM file services in the event of a power failure.
  • the file services management engine 131 begins a VM pre-start validation by waiting (block 365) for an array system manager to start and host nodes to form a cluster.
  • the file services management engine 131 spawns per node tasks to validate and start the VM.
  • the file services management engine validates (block 366) the boot volume and determines (decision block 367) whether the boot volume has been corrupted. If so, the file services management engine 131 raises an alert that corrective action is needed, pursuant to block 374. Otherwise, the file services management engine 131 proceeds to validate (block 368) the cluster quorum volume and determine (decision block 369) whether the quorum volume has been corrupted. If corruption of the quorum volume is determined, then the file services management engine 131 proceeds to raise an alert that corrective action is needed, pursuant to block 374. Otherwise, the file services management engine 131 checks and detaches any failed provisioning group virtual volumes, pursuant to block 370, and validates the VM data network interfaces and bonds, pursuant to block 371.
  • the file services management engine 131 attempts to start (block 372) the VM and determines (decision block 373) whether the VM has started. If not, the file services management engine 131 raises an alert that corrective action is needed, pursuant to block 374.
  • the file services management engine 131 begins (block 375) VM post-start validation.
  • the file services management engine 131 reattaches the file provisioning group virtual volumes, which were detached as failed in the pre-start phase but have now transitioned to the normal state, pursuant to block 376.
  • the file services management engine 131 then issues (block 377) device discovery to make the virtual volumes visible on the VMs, and the engine 131 activates and mounts file provisioning groups, pursuant to block 378.
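
The pre-start and post-start flow of technique 364 can be summarized as the ordered pipeline below. The block numbers in the comments refer to the blocks above; every validate_*, attach_* and alert helper is a placeholder assumed for the sketch rather than an actual appliance interface.

```python
def recover_after_power_failure() -> bool:
    """Sketch of technique 364: pre-start validation, VM start, post-start validation."""
    wait_for_cluster()                         # block 365

    if not validate_boot_volume():             # blocks 366-367
        raise_alert("boot volume corrupted")   # block 374
        return False
    if not validate_quorum_volume():           # blocks 368-369
        raise_alert("quorum volume corrupted") # block 374
        return False

    detach_failed_fpg_volumes()                # block 370
    validate_network_interfaces()              # block 371

    if not start_vm():                         # blocks 372-373
        raise_alert("VM failed to start")      # block 374
        return False

    # Post-start validation (blocks 375-378)
    reattach_recovered_fpg_volumes()           # block 376
    rediscover_devices()                       # block 377
    activate_and_mount_fpgs()                  # block 378
    return True

# Illustrative stubs so the sketch is self-contained.
def wait_for_cluster(): pass
def validate_boot_volume() -> bool: return True
def validate_quorum_volume() -> bool: return True
def detach_failed_fpg_volumes(): pass
def validate_network_interfaces(): pass
def start_vm() -> bool: return True
def reattach_recovered_fpg_volumes(): pass
def rediscover_devices(): pass
def activate_and_mount_fpgs(): pass
def raise_alert(msg: str): print("ALERT:", msg)
```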
  • the file services management engine 131 performs a technique 380 for purposes of performing a recovery to rescue a node and a VM.
  • the file services management engine 131 fetches (block 381) VM configuration from persistent storage and recreates the VM.
  • the file services engine 131 next attempts to start (block 382) the VM.
  • the file services management engine 131 waits for the VM to begin running and join the cluster.
  • the file services management engine 131 waits for the cluster to have an active VM.
  • the file services management engine attaches (block 387) the quorum device to the VMs; attaches (block 388) the file provisioning group virtual volumes to the VMs; issues (block 389) device discovery to make the virtual volumes visible on the VMs; and activates and mounts (block 390) the file provisioning groups.
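
Technique 380 follows a similar shape. A compressed sketch, again using placeholder helpers only, is:

```python
def rescue_node_and_vm(config_path: str) -> None:
    """Sketch of technique 380: recreate a VM from saved configuration, then restore its storage."""
    cfg = fetch_vm_config(config_path)     # block 381: read configuration from persistent storage
    recreate_vm(cfg)
    start_vm_and_wait_for_cluster(cfg)     # block 382 onward: VM runs and joins the cluster
    attach_quorum_device(cfg)              # block 387
    attach_fpg_volumes(cfg)                # block 388
    rediscover_devices()                   # block 389
    activate_and_mount_fpgs()              # block 390

# Placeholder helpers for the sketch.
def fetch_vm_config(path: str) -> dict: return {}
def recreate_vm(cfg: dict): pass
def start_vm_and_wait_for_cluster(cfg: dict): pass
def attach_quorum_device(cfg: dict): pass
def attach_fpg_volumes(cfg: dict): pass
def rediscover_devices(): pass
def activate_and_mount_fpgs(): pass
```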
  • the storage node 120 uses a battery backup system to temporarily power the node 120 to allow the contents of a write cache of the node 120 to be written out to persistent storage.
  • This mechanism ensures that the block storage underlying the file shares may be recreated in a consistent state.
  • This solution is supported by dynamically updating the access control part 335 (see Fig. 3A) of the configuration data 330 so that the VM taking over may boot without the data volumes that are undergoing checks and recovery after power failure.
  • the file services management entity 131 running on the VM taking over may then periodically attempt to re-attach any missing storage so that the entity 131 may activate missing storage over time as the storage becomes available and update the configuration data 330 accordingly.
  • the configuration data 330 is cached in a location common across the entire SAN appliance 110 to accommodate the potential replacement of a storage node 120.
  • the SAN appliance may use a set of software processes to support the recreation of the VM state from the cached configuration, in accordance with example implementations.
  • the SAN appliance 110 is constructed to use a top-down approach to allocate and manage underlying block storage for file shares in a manner that is transparent to the human SAN administrator.
  • a file share is created using a top-down process 460 in which a user submits a file share creation request 462, which assigns a number, or identifier, to the file share (or multiple file shares to be created) along with an estimated capacity.
  • a human SAN administrator 464 responds to the request 462 by generating a command 466 for the SAN appliance 110.
  • the command 466 may be generated using a graphical user interface (GUI) of the management processor 176, in accordance with example implementations.
  • the management processor 176 or any other processor may serve as a command generator that generates the command 466.
  • the command 466 indicates logical volume unit(s) for the file share(s) to be created, along with a capacity for the file share(s), in accordance with example implementations.
  • a provisioning engine 177 (a provisioning engine 177 of the management processor 176 (Fig. 1), for example), in response to the command 466, allocates (470) one or multiple logical units of block storage and assigns them to the use of the specified set of file share(s), i.e., allocates block storage for one or multiple file provisioning groups associated with the set of file share(s).
  • the provisioning engine 177 also, in response to the command 466, exposes the allocated logical units of block storage to the file services virtual machine 130. Specifically, the provisioning engine 177 registers the desire for the newly-attached storage to be discovered and allocated for the file provisioning group(s) associated with the identified file share(s). As part of the provisioning process, the provisioning engine 177 may attempt to place the associated file provisioning group(s) such that the resulting allocation is evenly or near evenly distributed across the file service nodes.
  • should additional capacity be requested for a given file share, the SAN administrator 464 may respond to this request by submitting another command to cause the provisioning engine 177 to increase the storage allocation for the file share, following a procedure similar to the process 460. It is noted that the process 460 separates the allocation and management of the underlying block storage from the abstracted storage pool, which is used to support share storage. This approach may be particularly advantageous, as compared to a tightly-coupled storage approach that proceeds in a bottom-up fashion for purposes of allocating and provisioning storage for a file share.
  • a technique 500 includes receiving (block 504) a command to create a file share, where the command identifies logical volume unit(s) and a capacity for the file share.
  • pursuant to the technique 500, one or multiple block storage units are provisioned for the file share in response to the command, pursuant to block 508.
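
A minimal sketch of technique 500 and of the provisioning engine 177's role in the top-down process 460 is given below. The command fields, the helper names and the least-loaded placement rule are assumptions made for illustration; the patent does not prescribe them.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CreateFileShareCommand:
    """Command 466: identifies logical volume unit(s) and a capacity for the file share(s)."""
    share_name: str
    logical_units: List[str]
    capacity_gb: int

def create_file_share(cmd: CreateFileShareCommand, node_load: Dict[str, int]) -> str:
    """Technique 500, roughly: provision block storage for the share in response to the command."""
    # Block 508: allocate block storage and dedicate it to the share's file provisioning group.
    per_lun_gb = cmd.capacity_gb // max(len(cmd.logical_units), 1)
    for lun in cmd.logical_units:
        allocate_block_storage(lun, per_lun_gb)

    # Place the file provisioning group on the least-loaded file services node so that
    # the resulting allocation stays roughly even across the file service nodes.
    target_node = min(node_load, key=node_load.get)
    node_load[target_node] += cmd.capacity_gb
    expose_to_file_services_vm(cmd.logical_units, target_node)
    return target_node

# Placeholder helpers for the sketch.
def allocate_block_storage(lun: str, size_gb: int): print(f"allocating {size_gb} GB on {lun}")
def expose_to_file_services_vm(luns: List[str], node: str): print(f"exposing {luns} to VM on {node}")
```

For example, create_file_share(CreateFileShareCommand("share1", ["lun0"], 100), {"node-1": 0, "node-2": 50}) would, under these assumptions, place the new share's file provisioning group on node-1.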
  • the storage node 120 is an actual physical machine that is made up of actual hardware 610 and actual machine executable instructions 650, or "software."
  • the hardware 610 may include, as examples, one or multiple central processing units (CPUs) 614, memory 616 (non-volatile and volatile memory, as examples) and the physical drives 211 (solid state drives, optical drives, magnetic media drives, and so forth).
  • the hardware 610 may include one or multiple network interfaces 624.
  • the machine executable instructions 650 may include, as examples, instructions that, when executed by the CPU(s) 614, cause the CPU(s) 614 to provide a hypervisor, or virtual machine monitor (VMM) 664, as well as various VM guests, such as one or multiple file services VMs 130.
  • the instructions 650 when executed by the CPU(s) 614 may further cause the CPU(s) 614 to form an operating system 654, a block storage engine 656, and so forth.
  • the service processor 172 and management processor 176 of Fig. 1 may have similar architectures to perform functions described herein (such as file share creation, for example), in accordance with example implementations.

Abstract

A technique includes providing file services over a storage area network using a plurality of virtual machines that are disposed on a plurality of physical nodes of the storage area network. The technique includes, in response to detection of a failure associated with a given virtual machine executing on a given storage node, pushing file services provided by the given virtual machine to another storage node.
PCT/US2014/063333 2014-10-31 2014-10-31 Fourniture de services de fichier sur réseau de zone de stockage WO2016068982A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2014/063333 WO2016068982A1 (fr) 2014-10-31 2014-10-31 Fourniture de services de fichier sur réseau de zone de stockage


Publications (1)

Publication Number Publication Date
WO2016068982A1 (fr) 2016-05-06

Family

ID=55858083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/063333 WO2016068982A1 (fr) 2014-10-31 2014-10-31 Fourniture de services de fichier sur réseau de zone de stockage

Country Status (1)

Country Link
WO (1) WO2016068982A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7213246B1 (en) * 2002-03-28 2007-05-01 Veritas Operating Corporation Failing over a virtual machine
US20110246627A1 (en) * 2010-04-01 2011-10-06 International Business Machines Corporation Data Center Affinity Of Virtual Machines In A Cloud Computing Environment
US20110252271A1 (en) * 2010-04-13 2011-10-13 Red Hat Israel, Ltd. Monitoring of Highly Available Virtual Machines
US8839031B2 (en) * 2012-04-24 2014-09-16 Microsoft Corporation Data consistency between virtual machines
WO2014160479A1 (fr) * 2013-03-13 2014-10-02 Arizona Board Of Regents, A Body Corporate Of The State Of Arizona, Acting For And On Behalf Of Arizone State University Systèmes et appareils pour cadre d'applications mobiles sécurisées en nuage pour calcul et communication mobiles


Similar Documents

Publication Publication Date Title
US11314543B2 (en) Architecture for implementing a virtualization environment and appliance
US11922157B2 (en) Virtualized file server
US9753761B1 (en) Distributed dynamic federation between multi-connected virtual platform clusters
US9575894B1 (en) Application aware cache coherency
US10853121B2 (en) Virtual machine recovery in shared memory architecture
US9733958B2 (en) Mechanism for performing rolling updates with data unavailability check in a networked virtualization environment for storage management
US10379759B2 (en) Method and system for maintaining consistency for I/O operations on metadata distributed amongst nodes in a ring structure
US9851906B2 (en) Virtual machine data placement in a virtualized computing environment
US9286344B1 (en) Method and system for maintaining consistency for I/O operations on metadata distributed amongst nodes in a ring structure
EP2778919A2 (fr) Système, procédé et support lisible par ordinateur pour le partage de mémoire cache dynamique dans une solution de mise en antémémoire à mémoire flash supportant des machines virtuelles
US20140115579A1 (en) Datacenter storage system
US10061669B2 (en) Mechanism for providing real time replication status information in a networked virtualization environment for storage management
US9602341B1 (en) Secure multi-tenant virtual control server operation in a cloud environment using API provider
US20230418716A1 (en) Anti-entropy-based metadata recovery in a strongly consistent distributed data storage system
WO2016068982A1 (fr) Fourniture de services de fichier sur réseau de zone de stockage
Tate et al. Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum Virtualize V8.2.1
US20240143462A1 (en) Monitoring input/output and persistent reservation activity patterns to detect degraded performance of a high availability and fault tolerant application
US20230176884A1 (en) Techniques for switching device implementations for virtual devices
US10620845B1 (en) Out of band I/O transfers

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14904656

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14904656

Country of ref document: EP

Kind code of ref document: A1