WO2016122608A1 - Virtual machines and file services - Google Patents

Virtual machines and file services

Info

Publication number
WO2016122608A1
WO2016122608A1 (PCT/US2015/013813)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual machines
san
file
services
file services
Prior art date
Application number
PCT/US2015/013813
Other languages
French (fr)
Inventor
Ronald John LUMAN
Matthew David BONDURANT
Hrishikesh Talgery
Craig M. HADA
Original Assignee
Hewlett Packard Enterprise Development LP
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development LP
Priority to PCT/US2015/013813
Publication of WO2016122608A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1097Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]


Abstract

In one example, a method of providing file services in a storage area network (SAN) is disclosed. The method includes invoking, via a processor, a command to activate file services on at least one node of the SAN, where the SAN comprises a plurality of nodes. The method includes installing, via the processor, a virtual machine infrastructure on each of the plurality of nodes, where the virtual machine infrastructure includes a plurality of virtual machines. The method includes enabling, via the processor, the file services on each of the virtual machines of the plurality of nodes.

Description

VIRTUAL MACHINES AND FILE SERVICES
BACKGROUND
[0001] When a physical computer and a host operating system are configured appropriately, a virtual machine can access resources located on a storage area network (SAN). A SAN is a set of interconnected devices and servers that are connected to a common communication and data transfer infrastructure, such as Fibre Channel. The purpose of the SAN is to allow multiple servers access to a pool of block storage in which any server can potentially access any storage unit within the block storage.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Certain exemplary examples are described in the following detailed description and in reference to the figures, in which:
[0003] Fig. 1A is an illustration of an example of a computer system including a storage area network (SAN) with virtual machines located in each node of the SAN;
[0004] Fig. 1B is an illustration of an example of a computer system including a storage area network (SAN) with a pair of virtual machines located in a pair of nodes of the SAN;
[0005] Fig. 2A is a process flow diagram of an example of a method of enabling file services within a SAN;
[0006] Fig. 2B is a process flow diagram of an example of a method of generating and clustering virtual machines (VMs) on a SAN; and
[0007] Fig. 3 is a block diagram of a tangible, non-transitory, computer-readable medium that holds code to direct a processor to enable file services on a virtual machine of a storage area network (SAN).
DETAILED DESCRIPTION OF SPECIFIC EXAMPLES
[0008] Examples disclosed herein provide techniques for enabling file services using a virtual management system located in a storage area network (SAN). As the information technology (IT) field is faced with challenges associated with data growth, the ability to merge block-level storage and file-level storage into a streamlined management system becomes increasingly important.
[0009] Generally, a SAN provides block-level storage, e.g., blocks of data, that can be accessed by applications running on network servers connected to the SAN. However, a SAN does not provide file-level storage, e.g., storage of files/folders using a file directory protocol. Instead, a file system may be built as a separate entity, on top of a SAN, to perform file services. The combination of the file system and the SAN, which includes the block storage, may be considered a SAN file system or shared disk file system.
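To make the block/file distinction concrete, here is a minimal Python sketch contrasting block-level access (fixed-size blocks addressed by number) with file-level access (paths resolved through a directory hierarchy, e.g., over NFS or SMB). The device path and share path are hypothetical placeholders, not details from the patent.

```python
# Illustrative only: block storage exposes raw, fixed-size blocks;
# file services resolve named paths on behalf of clients.
BLOCK_SIZE = 4096

def read_block(device_path: str, block_number: int) -> bytes:
    """Block-level read: addressed by block number; any higher-level
    structure (file system, database pages) is the server's concern."""
    with open(device_path, "rb") as dev:
        dev.seek(block_number * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE)

def read_file(share_path: str) -> bytes:
    """File-level read: a file service resolves the path through a
    directory hierarchy and returns the file's contents."""
    with open(share_path, "rb") as f:
        return f.read()

# Hypothetical usage; both paths are placeholders.
# data = read_block("/dev/sdb", 42)               # block storage view
# doc = read_file("/mnt/nfs_share/report.txt")    # file services view
```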
[0010] However, the use of separate, dedicated hardware for both block-level storage and the file services often involves duplicate server hardware and may slow system performance for larger applications. Other techniques to provide both block-level storage and file services may converge the two storage techniques to create an integrated service system under a single server and operating system. However, such integrated solutions may lead to a reduction in security, system performance, and fault isolation.
[0011] As disclosed herein, an infrastructure that includes file services isolated from block-level storage services, where both services are included in an existing SAN appliance, may be used as a storage management solution. The SAN may be embedded with dormant file services software that may be enabled on nodes within the SAN. Once the file services are invoked for use, virtual machines may be generated and located within the nodes of the SAN using a template approach.
Each virtual machine, as a file services server, may have the ability to substantially maintain the activities of the file services so as to isolate the file services from the block services of the SAN.
[0012] Fig. 1A is an illustration of an example of a computer system 100 including a storage area network (SAN) with virtual machines located in each node of the SAN. The computer system 100 may be used to provide hosted services, for example, cloud services for clients. In particular, the computer system 100 may be used by one or more host cloud services, such as Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). The computer system 100 may include one or more servers, for example, application host servers 102, that communicate with one or more clients 104 and with storage, for example, a SAN 106. The SAN 106 is a network of multiple storage devices that connects the storage devices with the application host servers 102 using an interconnection technology, such as SCSI switches, or, in the case of the present examples, both a SAN fabric and a file services network fabric 108. In operation, applications running on the application host servers 102 may access the storage devices of the SAN 106 to retrieve data. The network of storage devices within the SAN 106 can include hard drives, tape libraries, and, more commonly, disk-based devices, for example, RAID hardware.
[0013] In the present examples, the SAN 106 provides both block-level storage and file services for the computer system 100. Specifically, the SAN 106 includes physical nodes 110 used for the storage of data. The number of nodes 110 may vary depending on the particular implementation. In the example depicted in Fig. 1A, the SAN 106 includes six nodes 110-1 through 110-6 that are intercoupled by a network communications channel 112 to provide secure and efficient management of data among the nodes 110. In some examples, the network communications channel 112 may include point-to-point connections in a mesh network, among others. The nodes 110 are physical machines that include computing resources and storage resources. For example, the computing resources may provide management and end-to-end control of input/output from the application host servers 102 to storage. The storage resources of the nodes 110 may include a physical mass storage device, such as a magnetic tape drive, a hard drive, or an optical drive, among others. In an example, the storage resources may be coupled to the nodes 110. For example, one or more mass storage devices may be coupled to a given node 110 using a Serial Attached Small Computer System Interface, i.e., a "SAS" interface.
[0014] The nodes 110 of the present examples may be configured to provide dual services. In some examples, the nodes 110 are configured to perform block-level storage services and at least one node 110 is configured to provide both file and block services within the SAN 106. The node 110 providing file services may be redundantly provisioned, e.g., with two or more nodes 110 providing identical or overlapping file services within the SAN 106. Thus, the host application servers 102 may communicate with the nodes 110 using the file services network fabric 108 to retrieve either block-level storage or file-level storage.
[0015] As previously discussed, the network communications channel 112 of the SAN 106 provides storage to be shared amongst the nodes 110. This shared storage, along with the writing of data to persistent storage in a consistent state, allows the nodes 110 to be grouped into failover arrays as part of a redundancy approach, where a given failover array includes two or more nodes 110, depending on the implementation. For example, the failover arrays may include nodes 110 grouped in pairs. Accordingly, if one of the nodes 110 within a failover pair array enters a fail state, the other storage node 110 within the failover pair array may take over. The fail state of the example may include a system shutdown or unreliable system performance due to a power failure, hardware failure, or software failure, among others. A failed storage node may be serviced or replaced for purposes of providing reliable storage services.
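The failover-pair behavior just described can be sketched in a few lines of Python; the node objects, health flags, and volume takeover below are illustrative assumptions, not the patent's implementation.

```python
class StorageNode:
    """A physical node with the storage volumes it currently owns."""
    def __init__(self, name: str):
        self.name = name
        self.healthy = True
        self.owned_volumes: list[str] = []

class FailoverPair:
    """Two nodes sharing persistent storage; if one enters a fail
    state, its peer takes over the failed node's volumes."""
    def __init__(self, primary: StorageNode, secondary: StorageNode):
        self.primary, self.secondary = primary, secondary

    def check_and_failover(self) -> None:
        for node, peer in ((self.primary, self.secondary),
                           (self.secondary, self.primary)):
            if not node.healthy and peer.healthy:
                # The takeover is safe because writes were persisted
                # in a consistent state on the shared storage.
                peer.owned_volumes.extend(node.owned_volumes)
                node.owned_volumes.clear()

pair = FailoverPair(StorageNode("node-1"), StorageNode("node-2"))
pair.primary.owned_volumes = ["vol-a", "vol-b"]
pair.primary.healthy = False          # e.g., power or hardware failure
pair.check_and_failover()
assert pair.secondary.owned_volumes == ["vol-a", "vol-b"]
```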
[0016] As previously stated, in addition to block-level storage, the SAN 106 of the present example also provides file-level storage, e.g., the file services. In general, file services enable servers of the SAN 106 to index and share data files, e.g., word processing documents and spreadsheets. In this manner, the data files may be stored on file services servers within the SAN 106 and accessed by the host application servers 102 of the computer system 100. The file services servers of the SAN 106 may run basic services, such as data storage, folder sharing, and share permissions. In particular, the file services servers of the present example may permit the host application servers 102 to use a file protocol, such as the Network File System (NFS) or Server Message Block (SMB) protocol, to create, share, and retrieve data files that are stored on the SAN 106.
[0017] To create the file services servers, the present examples may create virtual machines (VMs) 114, e.g., 114-1 to 114-6, within the nodes 110 of the SAN 106 using a virtualization infrastructure. As discussed herein, the VMs 114 may be automatically created using a configuration template. The newly created VMs 114 may claim file services resources that have been reserved by the SAN 106 and attach the resources to the VMs 114. Accordingly, each of the VMs 114 may be considered a file services VM 114 that is executed on a respective node 110 within the SAN 106 to provide file-level storage. In this manner, the SAN 106 provides a physical platform on which the file services VMs 114 can be created to enable and provide file services for the storage of files and folders that can be directly accessed and managed. Such file services may include file-sharing capabilities such as Server Message Block (SMB) file sharing or Network File System (NFS) file sharing, among other types of file sharing techniques. The file sharing systems may run as a virtualization guest. In the present techniques, the file services may be substantially isolated from the block-level storage services since the file services VMs 114 are located in an isolated environment within the SAN 106.
[0018] As previously described, the network communications channel 112 may be used to provide communication between the nodes 110 using hardware of the SAN 106, such as interconnect hardware. In this case, communications between each of the file services VMs 114 may also be established through the network communications channel 112. In examples, the channel 112 may act as a standard network interface for the file services VMs 114. The network communications channel 112 may be under VM guest control, with minimal diagnostic information provided by the SAN 106.
[0019] The SAN 106 may include a service processor 116, which may be an actual physical machine or a virtual machine. As shown in Fig. 1A, the service processor 116 is in communication with the nodes 110 via a network fabric 118. The service processor 116 may perform such functions as reporting and supporting diagnostic and maintenance activities. Further, each file services VM 114 may be coupled to a management processor 120 via the network fabric 118 to perform file services management, including generating commands such as creating, showing, setting, and removing file shares.
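As an illustration of that command surface (creating, showing, setting, and removing file shares), the following Python sketch models a minimal share manager; the class, its fields, and its defaults are assumptions for illustration, not the actual interface of the management processor 120.

```python
class FileShareManager:
    """Toy model of create/show/set/remove operations on file shares."""
    def __init__(self) -> None:
        self._shares: dict[str, dict] = {}

    def create_share(self, name: str, path: str, protocol: str = "NFS") -> None:
        self._shares[name] = {"path": path, "protocol": protocol,
                              "permissions": "read-only"}

    def show_share(self, name: str) -> dict:
        return dict(self._shares[name])   # copy so callers cannot mutate

    def set_share(self, name: str, **options) -> None:
        self._shares[name].update(options)

    def remove_share(self, name: str) -> None:
        del self._shares[name]

mgr = FileShareManager()
mgr.create_share("projects", "/exports/projects", protocol="SMB")
mgr.set_share("projects", permissions="read-write")
print(mgr.show_share("projects"))
mgr.remove_share("projects")
```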
[0020] Fig. 1B is an illustration of an example of a computer system 122 including a storage area network (SAN) with a pair of virtual machines located in a pair of nodes of the SAN. Like numbers are as described with respect to Fig. 1A. As shown in Fig. 1B, the number of file services VMs 114 within the nodes 110 may be limited. In examples, the VMs 114 may be provided in pairs to automatically enable file services within the nodes 110. The pairing of file services VMs 114 provides a redundancy approach in the case of power failure or disruption. For example, a failed VM may be replaced by the second VM in the pair to continue the file services.
[0021] The examples described provide for the management of reserved resources and the creation and management of virtual machines that are capable of enabling and providing file services capabilities on a SAN that also includes block storage. The virtual machines of the SAN may provide an isolated and separate environment in which file services are enabled for data storage and file sharing.
[0022] Although the computer systems 100 and 122 of Fig. 1A and Fig. 1B are depicted as self-contained entities within a box, the components of the systems 100 and 122 may be locally disposed at a given site or may be distributed at multiple locations, depending on the particular implementation. Further, it is to be understood that the illustrations of Fig. 1A and Fig. 1B are not intended to indicate that the computer systems 100 and 122 are to include all of the components shown in the figures in every example. Further, any number of additional components can be included within the computer systems 100 and 122 depending on the details of the specific implementations, including SAN discovery collectors, switches, and servers, among others.
[0023] Fig. 2A is a process flow diagram of an example of a method of enabling file services within a SAN. The SAN may be configured to include server-computing capabilities on each of its nodes. For example, the SAN may include hardware for block-level storage services, hardware for providing a file sharing data path, and a physical infrastructure to implement interconnections between the nodes. Further, the software of the SAN may be capable of acting as a virtualization host and providing file services that are capable of running as a virtualization guest.
[0024] When a user of a computer system identifies a need to use file services, at block 202, a command may be invoked to activate the dormant file services. Based on the command, at block 204, the computer system may automatically create and install a virtual machine (VM) infrastructure in which a VM may be located on each of the nodes of the SAN. In some examples, the VMs may be created on fewer than all of the nodes. Instead, the VMs may be created on at least two of the nodes of the SAN so as to be created in pairs for a redundancy approach. In some examples, the VMs may take control of the file services and thus may be considered file services VMs. At block 206, an enable command starts each file services VM and enables a file services system on each of the file services VMs for use. In the present examples, the file services system may be substantially maintained within the file services VMs and isolated from block services. In some examples, after using file services, a user may decide to disable file services, at which point the file services VMs are destroyed.
[0025] In some cases, file services software of the SAN may be dormant when delivered to a user. For example, dormant resources for file services may be identified in advance and reserved for later use. The use of such pre-allocated file resources may ensure that file services can be enabled within the SAN on an as-needed basis. This may conserve energy when file services are not in use, as opposed to running file services on a continuous basis. Further, the pre-allocation of resources for file services ahead of their use may allow large contiguous blocks of memory to be reserved. In cases where the setting aside of resources is delayed until deployment, the resources may be fragmented so that only smaller blocks of memory may exist.
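A minimal sketch of the flow through blocks 202-206 (and the tear-down path) follows; the placeholder objects and function names are assumptions, and real SAN software would replace each step with its own operations.

```python
class FileServicesVM:
    """Placeholder for a file services VM installed on a SAN node."""
    def __init__(self, node: str):
        self.node, self.running = node, False

def invoke_file_services(san_nodes: list, pair_only: bool = True) -> list:
    """Blocks 202/204: activate the dormant file services and install
    a VM per target node -- at least a pair, for redundancy."""
    targets = san_nodes[:2] if pair_only else san_nodes
    return [FileServicesVM(node) for node in targets]

def enable_file_services(vms: list) -> None:
    """Block 206: start each file services VM, bringing the file
    services system online, isolated from block services."""
    for vm in vms:
        vm.running = True

def disable_file_services(vms: list) -> None:
    """Disabling file services destroys the file services VMs."""
    vms.clear()

vms = invoke_file_services(["node-1", "node-2", "node-3"])
enable_file_services(vms)
disable_file_services(vms)
```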
[0026] The reserved file resources may include central processing unit (CPU) cores, memory, and network interfaces, among other resources. In some cases, the reservation for the resources may be removed and the file services resources may be claimed by the file services VMs for attachment prior to activating the file services VMs.
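The reservation-then-claim step might be sketched as below; the resource quantities and field names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ReservedResources:
    """File services resources set aside in advance by the SAN."""
    cpu_cores: int = 4
    memory_gb: int = 16
    nics: list = field(default_factory=lambda: ["eth2", "eth3"])
    reserved: bool = True            # pre-allocated, not yet claimed

def claim_resources(pool: ReservedResources, vm_config: dict) -> dict:
    """Remove the reservation and attach the resources to the VM
    prior to activating the file services VM."""
    pool.reserved = False
    vm_config.update(cpus=pool.cpu_cores,
                     memory_gb=pool.memory_gb,
                     network_interfaces=pool.nics)
    return vm_config

vm = claim_resources(ReservedResources(), {"name": "fs-vm-1"})
```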
[0027] Based on the present examples, the SAN can provide both block-level storage and file-level storage using a virtualized infrastructure created directly within the nodes of the SAN. Accordingly, file access and sharing of data can be handled directly within the SAN, since file services software may be enabled on virtual machines located within the SAN. The SAN of the present examples thus provides mass file storage, as well as the ability to connect to storage via a file-level protocol.
[0028] It is to be understood that the process flow diagram of Fig. 2A is not intended to indicate that the method is to include all of the blocks shown in Fig. 2A in every case. Further, any number of additional blocks can be included within the method, depending on the details of the specific implementation. In addition, it is to be understood that the process flow diagram of Fig. 2A is not intended to indicate that the method is only to proceed in the order indicated by the blocks shown in Fig. 2A in every case.
[0029] Fig. 2B is a process flow diagram of an example of a method of generating and clustering virtual machines (VMs) on a SAN. At block 208, a number of VMs may be generated in the SAN using a configuration template. The configuration template is the configuration of a single virtual machine that may be shared among other users to create new virtual machines (VMs) and to compile common configuration details and software tools to be delivered to each new VM. In some examples, the new VMs may inherit the same contents and configuration from the configuration template. For example, the configuration template may contain basic configuration, such as the number of virtual CPUs, the size of memory, virtual disks, virtual network interfaces, and so on. Various techniques may be used to create a configuration template, including a Clone to Template method or a Convert to Template method. The Clone to Template method duplicates a virtual machine and converts the copy to a template format, leaving the original virtual machine intact for further use. The Convert to Template method changes the original virtual machine itself into a template format rather than creating a duplicate. In the present examples, each generated VM has the capacity to attach the file services resources reserved by the SAN and thus become a file services VM in order to store and share file-level data.
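A sketch of the template approach is shown below, with template fields mirroring those listed above (virtual CPUs, memory, virtual disks, virtual network interfaces); the concrete values and names are assumptions.

```python
import copy

VM_TEMPLATE = {
    "vcpus": 4,
    "memory_gb": 16,
    "virtual_disks": ["boot-disk"],
    "virtual_nics": ["fs-net"],
    "software": ["file-services-stack"],
}

def create_vm_from_template(name: str, template: dict = VM_TEMPLATE) -> dict:
    """Each new VM inherits the template's contents and configuration."""
    vm = copy.deepcopy(template)   # copy, leaving the template intact
    vm["name"] = name
    return vm

# Stamp out a redundant pair of file services VMs from one template.
fleet = [create_vm_from_template(f"fs-vm-{i}") for i in (1, 2)]
```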
[0030] A network communications channel connects the nodes of the SAN in order to provide a pathway for the nodes to communicate with one another. The network communications channel may also establish communications between the file services VMs. In addition to the network communications channel, a clustering method may be used to establish communications and maintain common configurations across the file services VMs. In examples, the clustering method may group the file services VMs together into clusters. Accordingly, at block 210, the file services VMs may be grouped into cluster formations. In operation, after the file services VMs are installed within the nodes of the SAN, the file services VMs may identify a network interface using a media access control (MAC) address, and an internet protocol (IP) address may be assigned to each file services VM. The IP addresses may be used by the file services VMs to locate one another and to form groups of file services VMs into cluster formations.
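Block 210 might look like the following sketch, assuming a simple MAC-to-IP assignment scheme and fixed-size cluster groups (e.g., pairs for the redundancy approach); the subnet and group size are illustrative.

```python
def assign_addresses(mac_addresses: list, subnet: str = "10.0.0") -> dict:
    """Map each file services VM's MAC address to an IP address."""
    return {mac: f"{subnet}.{i}"
            for i, mac in enumerate(mac_addresses, start=10)}

def form_clusters(ip_by_mac: dict, size: int = 2) -> list:
    """Group the VMs, located by IP address, into cluster formations."""
    ips = sorted(ip_by_mac.values())
    return [ips[i:i + size] for i in range(0, len(ips), size)]

addrs = assign_addresses(["02:42:ac:11:00:01", "02:42:ac:11:00:02",
                          "02:42:ac:11:00:03", "02:42:ac:11:00:04"])
clusters = form_clusters(addrs)    # two clustered pairs
```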
[0031] It is to be understood that the process flow diagram of Fig. 2B is not intended to indicate that the method is to include all of the blocks shown in Fig. 2B in every case. Further, any number of additional blocks can be included within the method, depending on the details of the specific implementation. In addition, it is to be understood that the process flow diagram of Fig. 2B is not intended to indicate that the method is only to proceed in the order indicated by the blocks shown in Fig. 2B in every case.
[0032] Fig. 3 is a block diagram of a tangible, non-transitory, computer-readable medium that holds code to direct a processor to enable file services on a virtual machine of a storage area network (SAN). The computer-readable medium 300 can be accessed by a processor 302 over a system bus 304. In some examples, the code may direct the processor 302 to perform the steps of the current method as described with respect to Figs. 2A and 2B. In examples, the SAN may implement a virtualized management system to provide block-level storage services and file-level storage services, where the file-level storage services operate in an isolated environment separate from the block-level services.
[0033] The computer-readable medium 300 can include an invoke module 306 to activate file services on at least one node of the SAN. The computer-readable medium 300 can include an install module 308 to install a virtual machine infrastructure on each of the nodes of the SAN. In some examples, the virtual machine infrastructure includes a plurality of virtual machines. Accordingly, the computer-readable medium 300 may include a generate virtual machines module 310. For example, after a user has invoked a command to use file services and to install a virtual machine infrastructure on the nodes of the SAN, the SAN may generate multiple virtual machines via a template approach using the virtual machine infrastructure. In some cases, a virtual machine may be installed on each of the nodes of the SAN.
[0034] The computer-readable medium 300 may include an enable module 312 within the SAN to enable file services, on an as-needed basis, on each of the virtual machines. In some examples, the enabling of the file services on the virtual machines may create file services virtual machines (VMs), which may provide an isolated environment to process a file services request while providing separation from block services. A virtualized SAN may provide file services that run independently in terms of management, availability, and migration of resources between file services virtual machines. Overall, by implementing a storage array with both block storage and file services, a computer system may streamline its storage allocation and improve data protection and storage management.
[0035] The computer-readable medium 300 may include an activate module 314 where the file services may be activated to run on each of the virtual machines. In some cases, file services resources may be reserved in advance on a node of the SAN. For example, the SAN may reserve resources in anticipation of performing a service, such as file-level storage services, file sharing services, and the like. In the present examples, the operating system (OS) software of the SAN may reserve file services resources on several nodes of the SAN in anticipation of enabling a file services system within the SAN. In some cases, the OS software may be modified to specifically support the reservation. The reserved file services resources may be claimed by the virtual machines via a claim resources module 316 at the time of activating the file services.
[0036] The computer-readable medium 300 may include a cluster virtual machines module 318 to cluster the virtual machines, for example, into pairs or groups to provide communication among the virtual machines and to provide a redundancy approach. As a result, a two-node cluster, a three-node cluster, a four-node cluster, a five-node cluster, and so on may be established, where the number of virtual machines is based on the specific implementation.
[0037] The block diagram of Fig. 3 is not intended to indicate that the computer-readable medium 300 is to include all of the components or modules shown in Fig. 3. Further, any number of additional components may be included within the computer-readable medium 300, depending on the details of the specific implementations as described herein.
[0038] While the present techniques may be susceptible to various modifications and alternative forms, the examples discussed above have been shown only by way of example. However, it should again be understood that the techniques are not intended to be limited to the particular examples disclosed herein. Indeed, the present techniques include all alternatives, modifications, and equivalents falling within the scope of the appended claims.

Claims

CLAIMS

What is claimed is:
1. A method, comprising:
invoking, via a processor, a command to activate file services on at least one node of a storage area network (SAN), wherein the SAN comprises a plurality of nodes;
installing, via the processor, a virtual machine infrastructure on each of the plurality of nodes, wherein the virtual machine infrastructure comprises a plurality of virtual machines; and enabling, via the processor, the file services on each of the virtual machines.
2. The method of claim 1, comprising:
activating, via the processor, the file services to run on each of the virtual machines, wherein file services resources are reserved in advance on at least one node of the SAN; and
claiming, via the processor, the file services resources for attachment to the plurality of virtual machines at the time of activating the file services.
3. The method of claim 1, comprising:
generating, via the processor, the plurality of virtual machines via a template, wherein at least two or more virtual machines include file service capabilities; and
clustering, via the processor, the plurality of virtual machines to enable
communications between the plurality of virtual machines.
4. The method of claim 1, comprising:
establishing, via the processor, a network communications channel between each node of the SAN, wherein the network communications channel is exposed to the plurality of virtual machines as a network interface.
5. The method of claim 1, comprising:
running, via the processor, block services on the SAN, wherein the block services are isolated from the file services via a virtual machine infrastructure.
6. The method of claim 1, comprising:
substantially maintaining, via the processor, the file services within each of the virtual machines.
7. The method of claim 1, comprising grouping, via the processor, the plurality of virtual machines in pairs to provide a redundancy approach, wherein the file services are attached to a pair of virtual machines.
8. The method of claim 4, wherein the network communications channel is utilized by the processor to detect and notify each node of the SAN in the event of a failure.
9. The method of claim 3, wherein the clustering comprises automatically identifying and assigning an Internet Protocol (IP) address to each of the virtual machines.
10. A system, comprising:
a storage array comprising reserved file service resources;
a plurality of nodes located within the storage array; and
a plurality of virtual machines located within the storage array, wherein at least two of the virtual machines comprise file service capabilities.
11. The system of claim 10, wherein the file service capabilities are isolated from block services of the storage array.
12. The system of claim 10, wherein the plurality of virtual machines are created based on a configuration template.
13. The system of claim 10, comprising an interconnect channel to provide network communications between the plurality of virtual machines.
14. A tangible, non-transitory, computer-readable medium comprising code to direct a processor to:
invoke a command to activate file services on at least one node of a storage area network (SAN), wherein the SAN comprises a plurality of nodes;
install a virtual machine infrastructure on each of the plurality of nodes, wherein the virtual machine infrastructure comprises a plurality of virtual machines; and
enable the file services on each of the virtual machines.
15. The tangible, non-transitory, computer-readable medium of claim 14, comprising code to direct a processor to:
activate the file services to run on each of the virtual machines,
wherein file services resources are reserved in advance on each node of the SAN; and
claim the file services resources for attachment to the plurality of virtual machines at the time of activating the file services.
PCT/US2015/013813 2015-01-30 2015-01-30 Virtual machines and file services WO2016122608A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/013813 WO2016122608A1 (en) 2015-01-30 2015-01-30 Virtual machines and file services


Publications (1)

Publication Number Publication Date
WO2016122608A1 true WO2016122608A1 (en) 2016-08-04

Family ID: 56544026

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/013813 WO2016122608A1 (en) 2015-01-30 2015-01-30 Virtual machines and file services

Country Status (1)

Country Link
WO (1) WO2016122608A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120278450A1 (en) * 2000-12-22 2012-11-01 Dataplow, Inc. Storage area network file system
US20100154054A1 (en) * 2001-06-05 2010-06-17 Silicon Graphics, Inc. Clustered File System for Mix of Trusted and Untrusted Nodes
WO2009145764A1 (en) * 2008-05-28 2009-12-03 Hewlett-Packard Development Company, L.P. Providing object-level input/output requests between virtual machines to access a storage subsystem
US20140165062A1 (en) * 2009-07-23 2014-06-12 Brocade Communications Systems, Inc. Method and Apparatus for Providing Virtual Machine Information to a Network Interface
US8910156B1 (en) * 2011-04-29 2014-12-09 Netapp, Inc. Virtual machine dependency

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878457A (en) * 2017-03-24 2017-06-20 网宿科技股份有限公司 The attached storage method of distributed network and system
CN106878457B (en) * 2017-03-24 2019-11-29 网宿科技股份有限公司 The attached storage method of distributed network and system
US11150810B2 (en) 2018-01-26 2021-10-19 International Business Machines Corporation I/O data transmission in a hyper-converged storage system


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15880498

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15880498

Country of ref document: EP

Kind code of ref document: A1