WO2014158326A1 - Capacity accounting for heterogeneous storage systems - Google Patents

Capacity accounting for heterogeneous storage systems

Info

Publication number
WO2014158326A1
WO2014158326A1 (PCT/US2014/013025)
Authority
WO
WIPO (PCT)
Prior art keywords
storage
accounting
capacity
consumer
objects
Prior art date
Application number
PCT/US2014/013025
Other languages
French (fr)
Inventor
Yarom Gabay
Nagananda Sriramaiah Anur
Alexander Vinnik
Original Assignee
Netapp, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netapp, Inc. filed Critical Netapp, Inc.
Priority to EP14775322.2A priority Critical patent/EP2973064A4/en
Publication of WO2014158326A1 publication Critical patent/WO2014158326A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0605Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0653Monitoring storage devices or systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0685Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/10Office automation; Time management

Definitions

  • FIG. 1 is a block diagram illustrating a system environment for a capacity accountability system
  • FIG. 2A is a block diagram illustrating a network storage system which may provide a portion of the managed storage space of the capacity accountability system in one embodiment
  • FIG. 2B is a block diagram illustrating a distributed or clustered network storage system which may provide a portion of the managed storage space of the capacity accountability system in one embodiment
  • FIG. 3 is a block diagram illustrating an embodiment of a storage server
  • FIG. 4 is a block diagram illustrating a control flow of a capacity accountability system
  • FIG. 5 is a block diagram illustrating an example of a mechanism to avoid duplication of capacity accounting for storage objects in different storage object hierarchy levels
  • FIG. 6 is a flow diagram illustrating an example of a flow chart of a method of operating the capacity accountability system
  • FIG. 7 is a flow diagram illustrating another example of a flow chart of a method of operating the capacity accountability system.
  • FIG. 8 is a user interface diagram illustrating an example of a user interface of the capacity accountability system.
  • Storage capacity consumers can be, for example, applications, business entities, or physical or virtual hosts.
  • the storage infrastructure across the data centers can be based on multiple storage device vendors utilizing multiple storage architectures.
  • the storage infrastructure can maintain different storage tiers differing in terms of storage access capability and storage service capability.
  • the storage infrastructure can also include multiple protocol access mechanisms allowing block access, file access, or both.
  • the disclosed capacity accountability system tracks the relationships amongst multiple storage capacity consumers and heterogeneous storage objects. The tracked relationship data structure is then used to normalize the storage object hierarchy/containment levels of the heterogeneous storage objects when accounting for storage capacity.
  • the normalization technique introduced here allows for transparent addition of new storage technologies into the managed storage space of the capacity accountability system, requiring almost no development time for the addition. Having multiple technologies in a single capacity accounting datamart allows storage administrators to quickly determine how new storage space is utilized.
  • the capacity accounting datamart here refers to an accessible data store capable of returning specific capacity accounting data for specific storage consumer(s).
  • the disclosed capacity accountability system further provides an on-the-fly generation of capacity accounting reports. Because of the normalization technique, users of the system can quickly retrieve the necessary data regarding storage costs without technical knowledge of the storage architecture implementations in the managed storage space.
  • a capacity trending mechanism that provides valuable business analytics for both a storage provider and a capacity consumer.
  • the capacity trending mechanism enables the storage provider to accurately allocate storage devices and storage capacity tailor-fitted for various storage capacity consumers based on the trending information.
  • the capacity consumer can efficiently select a cost-effective capacity usage plan from the storage providers based on the trending information and generated capacity provision modification from the capacity trending mechanisms.
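  • As an illustration of the capacity accounting datamart idea above, the following Python sketch shows a minimal keyed store of normalized accounting records that can answer per-consumer capacity queries. The class, field names, and example figures are assumptions for illustration only; they are not part of the disclosure.

```python
# Minimal sketch of a capacity accounting datamart: a keyed store of
# normalized accounting records that can answer per-consumer queries.
# Field names, units, and the query helper are illustrative assumptions.
from collections import defaultdict

class CapacityDatamart:
    def __init__(self):
        # consumer id -> list of normalized accounting records
        self._records = defaultdict(list)

    def add_record(self, consumer_id, storage_object_id, tier, allocated_gb, consumed_gb):
        self._records[consumer_id].append({
            "object": storage_object_id,
            "tier": tier,
            "allocated_gb": allocated_gb,
            "consumed_gb": consumed_gb,
        })

    def query_consumer(self, consumer_id):
        """Return total allocated and consumed capacity for one consumer."""
        records = self._records[consumer_id]
        return {
            "allocated_gb": sum(r["allocated_gb"] for r in records),
            "consumed_gb": sum(r["consumed_gb"] for r in records),
        }

# Example: one consumer charged for two heterogeneous storage objects.
dm = CapacityDatamart()
dm.add_record("BU1", "lun-v2", tier="gold", allocated_gb=500, consumed_gb=320)
dm.add_record("BU1", "qtree-qt2", tier="silver", allocated_gb=200, consumed_gb=40)
print(dm.query_consumer("BU1"))  # {'allocated_gb': 700, 'consumed_gb': 360}
```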
  • FIG. 1 is a block diagram illustrating a system environment 100 for a capacity accountability system 102.
  • the capacity accountability system 102 can be connected via a network channel 104 to a managed storage space 106.
  • the capacity accountability system 102 can be a general or special purpose computer system.
  • the capacity accountability system 102 includes one or more devices with computer functionalities, each device including a computer-readable storage medium (e.g., a non-transitory storage medium) storing executable instructions and a processor for executing the executable instructions.
  • the managed storage space 106 includes a plurality of storage devices.
  • the managed storage space 106 can include at least one data center 108.
  • the network channel 104 can be any form of communication network that is capable of providing access to a data storage system.
  • the network channel 104 can be wired, wireless, or a combination of both.
  • the network channel 104 can include Ethernet networks, cellular networks, storage networks, or any combination thereof.
  • the network channel 104 may be, for example, a local area network (LAN), wide area network (WAN), metropolitan area network (MAN), global area network (GAN) such as the Internet, a Fiber Channel fabric, or any combination of such interconnects.
  • the network channel 104 may include multiple network storage protocols including a media access layer of network drivers (e.g., gigabit Ethernet drivers) that interface with network protocol layers, such as the Internet Protocol (IP) layer and its supporting transport mechanisms, the Transmission Control Protocol (TCP) layer and the User Datagram Protocol (UDP) layer.
  • the network channel 104 may include a file system protocol layer providing multi-protocol file access and, to that end, includes support for one or more file access protocols such as the Direct Access File System (DAFS) protocol.
  • a VI layer can be implemented together with the network channel 104 to provide direct access transport (DAT) capabilities, such as Remote Direct Memory Access (RDMA), as required by the DAFS protocol.
  • An Internet Small Computer System Interface (iSCSI) driver layer can be implemented with the network channel 104 to provide block protocol access over the TCP/IP network protocol layers, while a Fibre Channel (FC) driver layer receives and transmits block access requests and responses to and from the storage server.
  • a Fibre Channel over Ethernet layer may also be operative in the network channel 104 to receive and transmit requests and responses to and from the storage server.
  • the FC and iSCSI drivers provide respective FC- and iSCSI-specific access control to the blocks and, thus, manage exports of logical unit numbers (LUNs) to either iSCSI or FCP or, alternatively, to both iSCSI and FCP when accessing data blocks on the storage server.
  • Each datacenter can include at least a filesystem 110 that accounts for the hosts and storage objects within the filesystem 110.
  • the filesystem 110 can be an interactive store that is capable of providing access to a set of storage objects, such as files, Logical Unit Numbers (LUNs), partitions, qtrees, and volumes.
  • a qtree is a subset of a volume to which a quota can be applied to limit its size.
  • the filesystem 110 can include multiple hierarchical levels of storage objects.
  • a storage object hierarchical level refers to an enumerated level of containment for a storage object.
  • a LUN can be at a higher storage object hierarchical level than a Q-tree and a Q-tree can be at a higher storage object hierarchical level than a volume.
  • a storage object is a form of data container.
  • the highest storage object hierarchical level can denote the largest accessible data container, capable of storing smaller containers, all the way down to the smallest accessible data container denoted by the lowest storage object hierarchical level.
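  • To make the enumerated hierarchy levels and the containment relationship concrete, the sketch below models storage objects with a parent (containing) object and checks whether two objects lie on the same containment branch, the case in which capacity should be accounted for only once. The level numbering and class names are assumptions, following the LUN/qtree/volume example above.

```python
# Illustrative sketch of enumerated storage object hierarchy levels and
# containment. Level numbers and names are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional

HIERARCHY_LEVEL = {"lun": 3, "qtree": 2, "volume": 1}  # per the example ordering above

@dataclass
class StorageObject:
    object_id: str
    kind: str                                   # "lun", "qtree", or "volume"
    parent: Optional["StorageObject"] = None    # the containing storage object

    @property
    def level(self) -> int:
        return HIERARCHY_LEVEL[self.kind]

def same_containment_branch(a: StorageObject, b: StorageObject) -> bool:
    """True when one object is contained, directly or indirectly, by the other."""
    def ancestors(obj):
        while obj is not None:
            yield obj.object_id
            obj = obj.parent
    return b.object_id in ancestors(a) or a.object_id in ancestors(b)

volume = StorageObject("iv2", "volume")
qtree = StorageObject("qt2", "qtree", parent=volume)
lun = StorageObject("v2", "lun", parent=qtree)
print(same_containment_branch(lun, volume))  # True: charge this branch only once
```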
  • the filesystem 110 can be hosted by a cluster 112 of storage hosts 114.
  • the storage hosts 114 can be storage servers, such as the storage servers described in FIGs. 2A, 2B, and 3 discussed below.
  • the capacity accountability system 102 is for keeping an accurate capacity accounting of the managed storage space 106.
  • the capacity accounting can include accounting for storage object consumption of storage consumers in the managed storage space 106 across heterogeneous storage objects.
  • the capacity accounting can further include accounting for storage capacity allocation of the storage consumers in the managed storage space 106 across the heterogeneous storage objects.
  • the capacity accounting can also include generating reporting of other metadata relating to the storage usage by each of the storage consumers, including idle capacity and storage usage trends. The storage usage trends can be used to calculate storage usage estimations and to recommend changes to the storage capacity provisions.
  • a storage consumer in this disclosure is defined as an account on the capacity accountability system 102 associated with an entity having control over the use of certain storage spaces on the managed storage space 106.
  • the storage consumer can be a business entity, a service application of a business entity, a division of a business entity, a physical host, or a virtual host.
  • "Heterogeneous" storage objects in this disclosure are defined as storage objects, virtual or physical, that have at least two different manners of storing data.
  • heterogeneous storage objects can be accessible by at least two different access protocols.
  • heterogeneous storage objects can be stored under at least two different storage architectures.
  • heterogeneous storage objects can be stored on at least two different storage devices.
  • the storage objects can be LUNs, fixed partitions or flexible partitions, virtual volumes, or physical volumes across different types of filesystem architectures.
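  • The definition of "heterogeneous" above can be expressed as a simple comparison of the manners in which two storage objects store data. In the sketch below the profile attributes (access protocol, architecture, device type) are illustrative assumptions drawn from the examples in this disclosure.

```python
# Sketch of the "heterogeneous" test: two storage objects are heterogeneous
# when they differ in at least one manner of storing data. Attribute names
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageProfile:
    access_protocol: str   # e.g. "NFS", "CIFS", "iSCSI", "FC"
    architecture: str      # e.g. "NAS", "SAN"
    device_type: str       # e.g. "SSD", "7200RPM-HDD"

def heterogeneous(a: StorageProfile, b: StorageProfile) -> bool:
    return (a.access_protocol != b.access_protocol
            or a.architecture != b.architecture
            or a.device_type != b.device_type)

nas_file_share = StorageProfile("NFS", "NAS", "7200RPM-HDD")
san_lun = StorageProfile("iSCSI", "SAN", "SSD")
print(heterogeneous(nas_file_share, san_lun))  # True
```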
  • a client device 116 can access the capacity accountability system 102 across the network channel 104.
  • the client device 116 can be any electronic device with a processor capable of data communication through the network channel 104.
  • the client device 116 can access the capacity accounting reports generated by the capacity accountability system 102.
  • the client device 116 can be a computer operated by a storage network administrator or a computer operated by one of the storage consumer accounts.
  • FIG. 2A is a block diagram illustrating a network storage system 200 which may provide a portion of the managed storage space 106 of the capacity accountability system 102.
  • Each of storage servers 210 (storage servers 210A, 210B) manages multiple storage units 270 (storage 270A, 270B) that include mass storage devices.
  • the storage servers 210 provide data storage services to one or more clients 202 through a network 230.
  • Network 230 may be, for example, LAN, WAN, MAN, GAN such as the Internet, a Fiber Channel fabric, or any combination of such interconnects.
  • Each of clients 202 may be, for example, a conventional personal computer (PC), server-class computer, workstation, handheld computing or communication device, a virtual machine, or other special or general purpose computer.
  • Storage of data in storage units 270 is managed by storage servers 210 which receive and respond to various I/O requests from clients 202, directed to data stored in or to be stored in storage units 270.
  • Data is accessed (e.g., in response to the I/O requests) in units of blocks, which in the present embodiment are 4KB in size, although other block sizes (e.g., 512 bytes, 2KB, 8KB, etc.) may also be used.
  • 4KB as used herein refers to 4,096 bytes; in other embodiments, 4KB may refer to 4,000 bytes.
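  • As a small arithmetic illustration of block-based access, the sketch below converts a byte count into whole blocks under the 4,096-byte block size of the embodiment above; the helper name is an assumption.

```python
# Converting a requested byte count to whole blocks (ceiling division).
BLOCK_SIZE = 4096  # bytes; other embodiments may use 512, 2048, 8192, ...

def blocks_needed(num_bytes: int, block_size: int = BLOCK_SIZE) -> int:
    return -(-num_bytes // block_size)

print(blocks_needed(10_000))  # 3 blocks, i.e. 12,288 bytes consumed on disk
```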
  • the storage units 270 constitute mass storage devices which can include, for example, flash memory, magnetic or optical disks, or tape drives, illustrated as disks 271 (271A, 271B).
  • a storage server 210 and storage unit 270 may be a part of/housed within a single device.
  • Storage servers 210 can provide file-level service such as used in a network-attached storage (NAS) environment, block-level service such as used in a storage area network (SAN) environment, or both file-level and block-level service, or other data access services.
  • While storage servers 210 are each illustrated as single units in FIG. 2A, a storage server can, in other embodiments, be a distributed entity; for example, a storage server may include a separate network element or module (an "N-module") and disk element or module (a "D-module").
  • the D-module includes storage access components configured to service client requests.
  • the N-module includes functionality that enables client access to storage access components (e.g., the D-module) and may include protocol components, such as CIFS, NFS, or an IP module, for facilitating such connectivity. Details of a distributed architecture environment involving D-modules and N-modules are described further below with respect to FIG. 2B.
  • storage servers 210 are referred to as network storage subsystems.
  • a network storage subsystem provides networked storage services for a specific application or purpose. Examples of such applications include database applications, web applications, Enterprise Resource Planning (ERP) applications, etc., e.g., which may be at least partially implemented in a client. Examples of such purposes include file archiving, backup, mirroring, etc., provided, for example, on an archive, backup, or secondary storage server connected to a primary storage server.
  • a network storage subsystem can also be implemented with a collection of networked resources provided across multiple storage servers and/or storage units.
  • In one embodiment, one of the storage servers (e.g., storage server 210A) functions as a primary provider of data storage services to client 202, while a secondary storage server (e.g., storage server 210B) takes a standby role in a mirror relationship with the primary storage server, replicating storage objects from the primary storage server to storage objects organized on storage devices of the secondary storage server (e.g., storage 270B).
  • the secondary storage server does not service requests from client 202 until data in the primary storage object becomes inaccessible, such as in a disaster at the primary storage server; such an event is considered a failure at the primary storage server.
  • In that case, requests from client 202 intended for the primary storage object are serviced using replicated data (i.e., the secondary storage object) at the secondary storage server.
  • network storage system 200 may include more than two storage servers.
  • protection relationships may be operative between various storage servers in system 200 such that one or more primary storage objects from storage server 210A may be replicated to a storage server other than storage server 210B (not shown in this figure).
  • Secondary storage objects may further implement protection relationships with other storage objects such that the secondary storage objects are replicated, e.g., to tertiary storage objects, to protect against failures of secondary storage objects. Accordingly, the description of a single-tier protection relationship should be taken as illustrative only.
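  • The protection relationships described above (primary objects mirrored to secondary objects, which may in turn be replicated to tertiary objects) can be pictured as a simple replica map; capacity accounting can use such a map to tell primary data apart from replicated copies. The structure and names below are assumptions for illustration only.

```python
# Illustrative map of protection relationships: primary -> replicas.
protection = {
    "vol_primary_A": ["vol_secondary_B"],    # mirrored on the secondary storage server
    "vol_secondary_B": ["vol_tertiary_C"],   # secondary protected by a tertiary copy
}

def replica_chain(primary: str) -> list:
    """Return every replica reachable from a primary storage object."""
    chain, frontier = [], [primary]
    while frontier:
        current = frontier.pop()
        for replica in protection.get(current, []):
            chain.append(replica)
            frontier.append(replica)
    return chain

print(replica_chain("vol_primary_A"))  # ['vol_secondary_B', 'vol_tertiary_C']
```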
  • FIG. 2B is a block diagram illustrating a distributed or clustered network storage system 220 which may provide a portion of the managed storage space 106 of the capacity accountability system 102 in one embodiment.
  • System 220 may include storage servers implemented as nodes 210 (nodes 210A, 210B) which are each configured to provide access to storage devices 271.
  • nodes 210 are interconnected by a cluster switching fabric 225, which may be embodied as an Ethernet switch.
  • Nodes 210 may be operative as multiple functional components that cooperate to provide a distributed architecture of system 220.
  • each node 210 may be organized as a network element or module (N-module 221A, 221B), a disk element or module (D-module 222A, 222B), and a management element or module (M-host 223A, 223B).
  • each module includes a processor and memory for carrying out respective module operations.
  • N-module 221 may include functionality that enables node 210 to connect to client 202 via network 230 and may include protocol components such as a media access layer, IP layer, TCP layer, UDP layer, and other protocols known in the art.
  • N-module 221 can be the client module 102 of FIG. 1.
  • D-module 222 may connect to one or more storage devices 271 via cluster switching fabric 225 and may be operative to service access requests on devices 270.
  • the D-module 222 includes storage access components such as a storage abstraction layer (e.g., a file system) supporting multi-protocol data access (e.g., the CIFS protocol, the NFS protocol, and HTTP), a storage layer implementing storage protocols (e.g., RAID protocol), and a driver layer implementing storage device protocols (e.g., SCSI protocol) for carrying out operations in support of storage access operations.
  • Requests received by node 210 may thus include storage object identifiers to indicate a storage object on which to carry out the request.
  • each node 210 also includes an M-host 223, which provides cluster services for node 210 by performing operations in support of a distributed storage system image, for instance, across system 220.
  • M-host 223 provides cluster services by managing a data structure such as a replicated database (RDB) 224 (RDB 224A, RDB 224B) which contains information used by N-module 221 to determine which D-module 222 "owns" (services) each storage object.
  • the various instances of RDB 224 across respective nodes 210 may be updated regularly by M-host 223 using conventional protocols operative between each of the M-hosts (e.g., across network 230) to bring them into synchronization with each other.
  • a client request received by N-module 221 may then be routed to the appropriate D- module 222 for servicing to provide a distributed storage system image.
  • While FIG. 2B shows an equal number of N-modules and D-modules making up a node in the illustrative system, a different number of N-modules and D-modules can make up a node in accordance with various embodiments; the description of a node comprising one N-module and one D-module for each node should therefore be taken as illustrative only.
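  • The ownership lookup described above, in which an N-module consults the replicated database (RDB) to find the D-module that services a storage object, can be sketched as a simple table lookup. The dictionary RDB and routing helper below are assumptions for illustration, not the actual cluster protocol.

```python
# Sketch of request routing via an RDB-style ownership table.
rdb = {
    "volume-17": "D-module-222A",
    "volume-42": "D-module-222B",
}

def route_request(storage_object_id: str, request: dict) -> str:
    owner = rdb.get(storage_object_id)
    if owner is None:
        raise LookupError(f"no D-module owns {storage_object_id}")
    # In the real system the request is forwarded over the cluster switching
    # fabric; here we only report the routing decision.
    return f"forward {request['op']} on {storage_object_id} to {owner}"

print(route_request("volume-42", {"op": "read"}))
```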
  • FIG. 3 is a block diagram illustrating an embodiment of a storage server 300, such as storage servers 210A and 210B of FIG. 2A, embodied as a general or special purpose computer including a processor 302, a memory 310, a network adapter 320, a user console 312 and a storage adapter 340 interconnected by a system bus 350, such as a conventional Peripheral Component Interconnect (PCI) bus.
  • the processor 302 is the central processing unit (CPU) of the storage server 210 and, thus, controls its overall operation. The processor 302 accomplishes this by executing software stored in memory 310. In one embodiment, multiple processors 302 or one or more processors 302 with multiple cores are included in the storage server 210.
  • individual adapters (e.g., network adapter 320 and storage adapter 340) each include a processor and memory for carrying out respective module operations.
  • Memory 310 includes storage locations, addressable by processor 302, network adapter 320, and storage adapter 340, configured to store processor-executable instructions and data structures associated with implementation of a storage architecture.
  • Storage operating system 314, portions of which are typically resident in memory 310 and executed by processor 302, functionally organizes the storage server 210 by invoking operations in support of the storage services provided by the storage server 210.
  • other processing means may be used for executing instructions and other memory means, including various computer-readable media, may be used for storing program instructions pertaining to the inventive techniques described herein.
  • some or all of the functionality of the processor 302 and executable software can be implemented by hardware, such as integrated circuits configured as programmable logic arrays, ASICs, and the like.
  • Network adapter 320 comprises one or more ports to couple the storage server to one or more clients over point-to-point links or a network.
  • network adapter 320 includes the mechanical, electrical and signaling circuitry needed to couple the storage server to one or more clients over a network.
  • the network adapter 320 may include protocol components such as a Media Access Control (MAC) layer, CIFS, NFS, IP layer, TCP layer, UDP layer, and other protocols known in the art for facilitating such connectivity.
  • Each client may communicate with the storage server over the network by exchanging discrete frames or packets of data according to pre-defined protocols, such as TCP/IP.
  • Storage adapter 340 includes one or more ports having input/output (I/O) interface circuitry to couple the storage devices (e.g., disks) to bus 321 over an I/O interconnect arrangement, such as a conventional high-performance, FC or SAS link topology.
  • Storage adapter 340 typically includes a device controller (not illustrated) comprising a processor and a memory, the device controller configured to control the overall operation of the storage units in accordance with read and write commands received from storage operating system 314.
  • data written by (or to be written by) a device controller in response to a write command is referred to as "write data," whereas data read by (or to be read by) the device controller responsive to a read command is referred to as "read data."
  • User console 312 enables an administrator to interface with the storage server to invoke operations and provide inputs to the storage server using a command line interface (CLI) or a graphical user interface (GUI).
  • CLI command line interface
  • GUI graphical user interface
  • user console 312 is implemented using a monitor and keyboard.
  • When implemented as a node of a cluster, such as cluster 220 of FIG. 2B, the storage server further includes a cluster access adapter 330 (shown in phantom/broken lines) having one or more ports to couple the node to other nodes in a cluster.
  • Ethernet is used as the clustering protocol and interconnect media, although it will be apparent to one of skill in the art that other types of protocols and interconnects can be utilized within the cluster architecture.
  • FIG. 4 is a block diagram illustrating a control flow of a capacity accountability system 400.
  • the capacity accountability system 400 can be the capacity accountability system 102 of FIG. 1.
  • the capacity accountability system 400 can include one or more methods of performing capacity accounting.
  • the one or more methods can be implemented by modules described below.
  • the modules can be implemented as hardware components, software instructions on non-transitory memory executable by a processor, or any combination thereof.
  • the modules described can be software modules implemented as instructions on a non-transitory memory capable of being executed by a processor or a controller on a machine described in FIG. 3.
  • Each of the modules can operate individually and independently of other modules. Some or all of the modules can be combined as one module. A single module can also be divided into sub-modules, each performing a separate method step or method steps of the single module. The modules can share access to a memory space. One module can be coupled to another module and access data processed by that module by sharing a physical connection or a virtual connection, directly or indirectly.
  • the capacity accountability system 400 can include additional, fewer, or different modules for various applications.
  • Conventional components such as network interfaces, security functions, load balancers, failover servers, management and network operations consoles, and the like are not shown so as to not obscure the details of the system.
  • the capacity accountability system 400 includes a consumer account store 402, a capacity provision store 404, an allocation module 406, a storage relation module 408, a storage object relation store 410, a capacity accounting module 412, an interface module 414, an application programming interface (API) module 416, a capacity datamart 418, and an analytics module 420.
  • the allocation module 406 and the capacity provision store 404 can instead be outside of the capacity accountability system 400 (not shown), communicating with modules of the capacity accountability system 400 via the API module 416.
  • the consumer account store 402 maintains a record entry for each of the storage consumers including a consumer account profile.
  • the record entry can include one or more of the following: an identifier unique to the storage consumer and a configuration file defining the reporting format of the capacity accounting generated by the capacity accountability system 400.
  • the storage consumer accounts can be stored in graph structures, relational tables, linked lists, tree structures, or any combination thereof. The structure can denote how one storage consumer account has control over another storage consumer account.
  • the storage consumer accounts can be stored in a hierarchical structure where a root node includes a business entity consumer account, and the specific business divisions, service applications, content groups, and data volumes are consumer accounts constituting branch nodes or leaf nodes. Access to the record entries can be restricted such that a security entry or key associated with the storage consumer is required for access.
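  • The hierarchical consumer account store described above can be sketched as a small tree with a business entity at the root and divisions, service applications, or content groups as child accounts. The account names and the traversal helper are assumptions for illustration.

```python
# Sketch of a hierarchical consumer account store.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConsumerAccount:
    account_id: str
    children: List["ConsumerAccount"] = field(default_factory=list)

    def descendants(self):
        for child in self.children:
            yield child
            yield from child.descendants()

business = ConsumerAccount("AcmeCorp", children=[
    ConsumerAccount("AcmeCorp/Engineering", children=[
        ConsumerAccount("AcmeCorp/Engineering/BuildService"),
    ]),
    ConsumerAccount("AcmeCorp/Marketing"),
])

# Subservient accounts roll up to the root business entity account.
print([a.account_id for a in business.descendants()])
```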
  • the capacity provision store 404 maintains a record of capacity allocation provisions for each storage consumer accounts.
  • the capacity allocation provisions can be allocated by the allocation module 406.
  • Each of the capacity allocation provisions specifies an allocation of a storage object to that storage consumer account.
  • Each capacity allocation provision can include a constant capacity allocation, such as 1TB of data capacity.
  • the capacity allocation can also be variable defined by a capacity provision rule. For example, the capacity allocation can be ten percent of storage capacity in a storage cluster, where the storage capacity of the storage cluster can increase or decrease during operation.
  • the allocation module 406 can further specify a tier level for each capacity allocation provision.
  • the tier level is defined by storage object type and storage object service type.
  • the storage object type, for example, can include: a storage device model, such as a NetApp™ 6000 series storage server; a storage architecture type, such as NAS or SAN; a file system layout architecture, such as a write anywhere file layout (WAFL); an access protocol, such as NFS, SCSI, or CIFS; a storage device type, such as solid state drive, 7200 RPM hard disk, or 15000 RPM hard disk; or any combination thereof.
  • the storage object service type, for example, can include replication service, backup service, mirroring service, deduplication service, or any combination thereof.
  • the allocation module 406 stores a set of rules to determine the specific tier level based on the storage object type or the storage object service type for each storage object or each set of storage objects.
  • Storage objects having different storage object types and/or different storage object service types can be assigned the same storage tier level based on the set of rules.
  • the tier level of a storage object can be re-configured based on available hardware and available storage services. For example, a storage object type of a storage object can be changed by reconfiguring the storage object to utilize a different set of storage host servers.
  • a storage object service type of a storage object can be changed by removing replication service of the storage object.
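  • Tier-level assignment from a storage object type and storage object service type can be pictured as a small rule table, as in the sketch below. The specific tiers and rules are assumptions for illustration; the disclosed allocation module stores its own configurable set of rules.

```python
# Illustrative tier-assignment rules keyed on object type and service type.
def assign_tier(object_type: dict, service_types: set) -> str:
    if object_type.get("device_type") == "SSD" and "mirroring" in service_types:
        return "gold"
    if "replication" in service_types or "backup" in service_types:
        return "silver"
    return "bronze"

print(assign_tier({"device_type": "SSD", "architecture": "SAN"}, {"mirroring"}))       # gold
print(assign_tier({"device_type": "7200RPM-HDD", "architecture": "NAS"}, {"backup"}))  # silver
```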
  • Specific storage objects can be allocated for the storage consumer account through the allocation module 406.
  • the allocation module 406 can store one or more network paths to access the storage objects associated with the storage consumer in the consumer profile stored on the consumer account store 402.
  • the allocation module 406 can generate and store the capacity allocation provisions on the capacity provision store 404.
  • the storage relation module 408 is configured to generate a relationship data structure of heterogeneous storage objects available on the managed storage space 106 of FIG. 1.
  • the relationship data structure can associate each of the heterogeneous storage objects with at least one of the storage consumer accounts known to the capacity accountability system 400.
  • the relationship data structure can be stored on a storage object relation store 410.
  • the relationship data structure can be, for example, a data graph, a relational database, or a tree structure.
  • the relationship data structure can also store a specific storage content associated with each of the heterogeneous objects.
  • the specific storage content can be based on a specific service application provided by the storage consumer.
  • the specific storage content can be profile picture photographs provided by an indexed photograph content provider service of a storage consumer.
  • the storage relation module 408 can generate the relationship data structure by traversing each instance of the filesystem 110 across the managed storage space 106.
  • the storage relation module 408 can also generate the relationship data structure based on the associations generated through the allocation module 406.
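  • The relationship data structure produced by the storage relation module can be sketched as a set of associations built from allocation records (or filesystem traversal): objects per consumer, the owning consumer per object, and the storage content tag per object. The field names and inputs below are assumptions for illustration.

```python
# Sketch of building a relationship data structure of consumers and objects.
from collections import defaultdict

def build_relationship_structure(allocations):
    """allocations: iterable of (consumer_id, storage_object_id, content_tag)."""
    by_consumer = defaultdict(set)
    by_object = {}
    content_of = {}
    for consumer_id, object_id, content in allocations:
        by_consumer[consumer_id].add(object_id)
        by_object[object_id] = consumer_id
        content_of[object_id] = content
    return {"by_consumer": by_consumer, "by_object": by_object, "content": content_of}

relations = build_relationship_structure([
    ("BU1", "lun-v2", "profile-photos"),
    ("BU4", "qtree-qt2", "profile-photos"),
])
print(relations["by_object"]["qtree-qt2"])  # BU4
```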
  • the capacity accounting module 412 performs capacity accounting for one or more of the storage consumers.
  • the capacity accounting module 412 is configured to generate a storage object consumption accounting that is specific for the one or more storage consumers.
  • a storage object consumption accounting is a structured report to present how much storage capacity is used by one or more particular storage consumers.
  • the capacity accounting module 412 is further configured to be able to generate a storage capacity allocation accounting.
  • a storage capacity allocation accounting is a structured report to present how much storage capacity is provisioned/allocated to one or more particular storage consumers.
  • the structured reports can be interactive to answer questions from a report reader about specific storage objects and about specific storage consumers. For example, the report reader can query regarding specific storage consumers. The report reader can also sort or filter based on specific storage consumers or storage object types.
  • the capacity accounting module 412 can be configured and activated via an interface module 414. Once configured, the capacity accounting module 412 can generate the capacity accounting reports.
  • the capacity accounting module 412 can also be configured and activated via an API module 416 (application programming interface module).
  • When accounting for storage capacity allocation and storage object consumption, the capacity accounting module 412 normalizes the storage capacity allocation data and the storage consumption data from the storage object relation store 410 to avoid duplicate accounting. For example, when a first storage object includes a second storage object or vice versa, the capacity accounting module 412 can discount a first consumption data of a first storage object when a second consumption data of a second storage object has already been accounted for. That is, when the first storage object and the second storage object are within the same branch of storage object containment hierarchy, the consumption data is accounted for once. For another example, the capacity accounting module 412 can account for a single storage object consumption when a plurality of storage hosts maps to a single storage object. A host group table including storage object types of each host storage server and paths to storage objects on the host storage server can be stored on the storage object relation store 410 for the purpose of capacity accounting.
  • the capacity accounting module 412 also includes a mechanism to reconcile duplicate capacity accounting due to the relationships between the storage consumers.
  • the capacity accounting module 412 can normalize the capacity accounting by tracking the relationships amongst the multiple storage capacity consumers, including a relationship tree of the storage consumers in the consumer account store 402. For example, when accounting for storage capacity, the capacity accounting module 412 can account for a single storage capacity allocation for a plurality of application services of a business entity sharing storage space on a single storage object.
  • the plurality of application services each can have a storage consumer account that is a subservient storage consumer account under the storage consumer account of the business entity.
  • the capacity accounting module 412 identifies a single storage consumer that can account for the entirety of the storage capacity allocation.
  • the association between the application services of the business entity can be identified from the consumer account store 402.
  • the normalized capacity accounting data can be stored in a capacity datamart 418.
  • the capacity datamart 418 can be indexed for easy querying of the capacity accounting for individual or groups of storage consumers.
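  • The consumer-side normalization described above, where several subservient accounts (e.g., application services of one business entity) share a single storage object and the allocation is charged only once, can be sketched as a rollup to a common parent account. The rollup rule and tie-break below are assumptions for illustration.

```python
# Sketch of charging a shared storage object once, to a single consumer.
def normalize_shared_allocation(sharing_accounts, parent_of):
    """sharing_accounts: consumer accounts sharing one storage object.
    parent_of: maps a subservient account to its parent account."""
    def root(account):
        while account in parent_of:
            account = parent_of[account]
        return account

    roots = {root(account) for account in sharing_accounts}
    if len(roots) == 1:
        return roots.pop()       # charge the common business entity once
    return sorted(roots)[0]      # tie-break; a real policy would be configured

parents = {"AcmeCorp/App1": "AcmeCorp", "AcmeCorp/App2": "AcmeCorp"}
print(normalize_shared_allocation({"AcmeCorp/App1", "AcmeCorp/App2"}, parents))  # AcmeCorp
```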
  • the capacity accountability system 400 can include an analytics module 420.
  • the analytics module 420 can calculate a storage usage trend based on the capacity accounting by the capacity accounting module 412.
  • the storage usage trend can be generated based on the ratio of storage capacity consumed to storage capacity allocated.
  • the storage usage trend can also be based on read/write access frequency of the storage objects for the storage consumer.
  • the storage trend generated can be specific to a service application of the storage consumer.
  • the analytics module 420 can determine a modification to a capacity allocation provision based on the storage usage trend calculated.
  • Each of the stores can be a single physical entity or distributed across multiple physical devices. Each of the stores can be on a separate physical device or share the same physical device or devices. Each of the stores can allocate specific storage spaces for run-time applications.
  • the techniques introduced in the modules herein can be implemented by programmable circuitry programmed or configured by software and/or firmware, or they can be implemented entirely by special-purpose "hardwired" circuitry, or in a combination of such forms.
  • Such special-purpose circuitry can be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc.
  • FIG. 5 is a block diagram illustrating an example of a mechanism to avoid duplication of capacity accounting for storage objects 502 in different storage object hierarchy levels.
  • Each of the storage objects 502 is associated with a capacity allocation provision 504 that charges the storage object to a storage consumer.
  • a LUN is the highest level of storage object hierarchy levels.
  • the mechanism traverses storage objects, following hierarchy levels from the top level (LUNs), down to Q-trees and volumes, which is the lowest-level chargeable object.
  • the mechanism determines to which storage consumer the storage object belongs, and accounts for the capacity consumption or the capacity allocation in an accounting database, such as the storage object relation store 410.
  • Storage access technology, virtualization type, accessing host, service application identifier, protection service type, and tier-level associated with each storage object can also be saved to the storage object relation store 410.
  • the mechanism ensures that when performing a capacity accounting, a storage object (such as a LUN) is not charged to one storage consumer, while another storage object (such as a Q-tree of the LUN or a volume of the LUN) with a higher storage object hierarchy level (i.e., a larger data container) is charged to another storage consumer. Any capacity which was not charged to any storage consumer is reported as "not charged capacity," and the storage provider administrator can be prompted about unaccounted-for capacity consumption or capacity allocation.
  • LUNs V2 and V3 are first charged to respective associated storage consumers (i.e., BU1 and BU3). Then Q-tree QT2 is charged to the storage consumer BU4 because QT2 has an assigned storage consumer but the LUNs of QT2 do not have an assigned consumer. Then at the volume level, internal volume IV2 is charged to the storage consumer BU5 because none of its child storage objects have an assigned storage consumer.
  • An accounting table 514 illustrates the resulting capacity accounting under the mechanism to avoid duplicates of capacity accounting.
  • each LUN can be restricted to only one storage consumer.
  • the accounting table 514 does not include the storage objects V1, V4, V5, V6, V7, QT3, and QT4 because they do not have an assigned storage consumer.
  • the storage objects QT1 and IV1 were not included in the accounting table 514 because their child storage objects were already included.
  • the mechanism described above implements support for capacity accounting of heterogeneous storage systems in the managed storage space 106, spanning storage access technologies (SAN, NAS, HTTP or any other technology), data centers (the physical storages can be in different geographical or logical locations), storage architectures (different RAIDS, disk types and data protection technologies), virtualization (physical and virtual storages and/or physical and virtual hosts), or any combination thereof.
  • the mechanism described enables capacity accounting where each storage consumer can have assigned storage capacity on different storage systems and each storage system supports multiple storage consumers.
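  • The FIG. 5 charging traversal can be sketched as follows: walk the storage objects from the top hierarchy level (LUNs) down to qtrees and volumes, charge each object that has an assigned consumer, skip an object when any of its descendants has already been charged, and report the remainder as not charged capacity. The data layout below is an assumption; the object names mirror the FIG. 5 example.

```python
# Sketch of the duplicate-avoiding charging traversal of FIG. 5.
def charge_capacity(objects, children_of, owner_of, capacity_of):
    """objects: storage object ids ordered from the top hierarchy level downward."""
    accounting, charged, not_charged = {}, set(), []

    def any_descendant_charged(obj):
        return any(child in charged or any_descendant_charged(child)
                   for child in children_of.get(obj, []))

    for obj in objects:
        if any_descendant_charged(obj):
            continue                                  # avoid duplicate accounting
        owner = owner_of.get(obj)
        if owner is None:
            not_charged.append(obj)                   # "not charged capacity"
            continue
        accounting[owner] = accounting.get(owner, 0) + capacity_of[obj]
        charged.add(obj)
    return accounting, not_charged

objects = ["V2", "V3", "V4", "QT1", "QT2", "QT3", "IV1", "IV2"]   # LUNs, then qtrees, then volumes
children = {"QT1": ["V2", "V3"], "QT2": ["V4"], "IV1": ["QT1", "QT2"], "IV2": ["QT3"]}
owners = {"V2": "BU1", "V3": "BU3", "QT2": "BU4", "IV2": "BU5"}
sizes = {"V2": 100, "V3": 150, "V4": 80, "QT1": 400, "QT2": 300, "QT3": 50, "IV1": 900, "IV2": 600}
print(charge_capacity(objects, children, owners, sizes))
# ({'BU1': 100, 'BU3': 150, 'BU4': 300, 'BU5': 600}, ['V4', 'QT3'])
```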
  • FIG. 6 is a flow diagram illustrating an example of a flow chart of a method 600 of operating the capacity accountability system 102.
  • the capacity accountability system 102 ascertains a set of heterogeneous storage objects provisioned for a storage consumer.
  • the heterogeneous storage objects are categorized by storage object hierarchy levels.
  • the capacity accountability system 102 can ascertain the set of heterogeneous storage objects by determining the set by using a relationship data structure, such as the storage object relation store 410, of storage consumer accounts and managed storage objects.
  • the step 605 can be performed by the storage relation module 408.
  • the method 600 continues on to a step 610 where the capacity accountability system 102 identifies an association between the storage consumer and a storage object hierarchy level.
  • the step 610 can be performed via the interface module 414 or the capacity accounting module 412.
  • the association can be selected from the storage object hierarchy levels of the identified set of the heterogeneous storage objects. The selection can be made based on a configuration parameter to the capacity accounting module 412.
  • In a step 615, the capacity accountability system 102 can account for storage object consumption of the storage consumer by normalizing storage consumption data at the storage object hierarchy level across the set of the heterogeneous storage objects.
  • the capacity accountability system 102 can also, in a step 620, account for storage capacity allocation of the storage consumer by normalizing storage capacity allocation data at the storage object hierarchy level across the heterogeneous storage objects.
  • the normalizing step can be based on the normalizing mechanisms described above for the capacity accounting module 412.
  • the step 620 includes calculating an idle capacity of the storage consumer based on the accounting of storage object consumption and the accounting of storage capacity allocation for the storage consumer. Both the step 615 and the step 620 can be performed by the capacity accounting module 412.
  • the capacity accountability system 102 can calculate a storage usage trend based on the accounting of storage object consumption in a step 625.
  • the storage usage trend can be calculated based on a percentage of the storage capacity allocated in a storage object that is actually consumed by the storage consumer. For example, a capacity consumed percentage per time period (such as day, week, or month) can be calculated.
  • the storage usage trend can also be calculated based on access pattern of the storage object, including how frequently the storage object is written to or how frequently the storage object is read.
  • In a step 630, the capacity accountability system 102 determines a modification suggestion to a capacity allocation provision of the storage consumer based on the storage usage trend. For example, when the provisioned capacity usage percentage is low, a modification suggestion to decrease provisioned capacity can be determined. For another example, when the access frequency of a storage object is low, a modification suggestion to lower the provisioned tier level can be determined, where the suggested tier level includes a less frequent replication service. Both step 625 and step 630 can be performed by the analytics module 420.
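  • Steps 625 and 630 can be sketched as computing the consumed-to-provisioned ratio per period and then proposing a provision change when usage stays low. The threshold and helper names are assumptions for illustration.

```python
# Sketch of trend calculation and provision-modification suggestion.
def usage_trend(consumed_per_period, provisioned):
    """Return the consumed/provisioned ratio for each period (e.g. per week)."""
    return [consumed / provisioned for consumed in consumed_per_period]

def suggest_modification(trend, low_watermark=0.3):
    if trend and max(trend) < low_watermark:
        return "decrease provisioned capacity"
    return "no change suggested"

trend = usage_trend([120, 150, 140], provisioned=1000)   # GB consumed per week
print([round(t, 2) for t in trend])   # [0.12, 0.15, 0.14]
print(suggest_modification(trend))    # decrease provisioned capacity
```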
  • FIG. 7 is a flow diagram illustrating another example of a flow chart of a method 700 of operating the capacity accountability system 102.
  • the method 700 includes determining a storage content relationship between a primary storage object and a replicated storage object of the heterogeneous storage objects, where the primary storage object and the replicated storage object are updated based on same storage content.
  • the method 700 continues to a step 710 of generating a relationship data structure of storage consumer accounts and heterogeneous storage objects.
  • the relationship data structure can include the storage content relationship determined in the step 705.
  • the steps 705 and 710 can be performed by the storage relation module 408.
  • the storage relation module 408 can determine a storage tier label for each of the heterogeneous storage objects based on a storage object service type and a storage object technology type in a step 715.
  • the storage tier label can be associated with a storage cost.
  • the step 715 can be performed by the allocation module 406.
  • the storage cost of the storage tier can be calculated in at least two different ways.
  • the storage cost can be based on a charge-as-you-go model, where the storage cost is presented as a cost per storage capacity consumed.
  • the storage cost can also be based on a provision cost model, where the storage cost is presented as a cost per capacity allocated.
  • the method 700 includes a step 720 of generating a storage cost accounting of a storage consumer by traversing the relationship data structure based on the storage tier label. For example, a list of storage objects connected to the storage consumer or sub-divisions of the storage consumer can be determined from the relationship data structure. The list of storage objects can be normalized by discounting storage objects contained by other storage objects on the list. The list can also be normalized by discounting storage objects associated with sub-divisions of the storage consumer that are already accounted for. Then the storage costs of the tier levels of the remaining storage objects on the normalized list are accrued to determine the storage cost accounting.
  • the storage cost accounting can include a storage cost specifically associated with the storage content referred to in the step 705.
  • the accounting can be performed by traversing through the storage objects having a relationship associated with the storage content.
  • the accounting in the step 720 can be performed by the capacity accounting module 412.
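  • The step 720 cost accounting can be sketched as: gather the storage objects related to the consumer and its sub-divisions, drop objects contained by other objects on the list, then accrue each remaining object's tier cost. The per-GB tier costs and data layout below are assumptions for illustration.

```python
# Sketch of storage cost accounting over a normalized object list.
TIER_COST_PER_GB = {"gold": 0.50, "silver": 0.20, "bronze": 0.05}

def storage_cost(objects, contained_by, tier_of, allocated_gb):
    """objects: storage object ids associated with the consumer."""
    on_list = set(objects)
    normalized = [o for o in objects
                  if contained_by.get(o) not in on_list]   # discount contained objects
    return sum(TIER_COST_PER_GB[tier_of[o]] * allocated_gb[o] for o in normalized)

cost = storage_cost(
    objects=["lun-v2", "qtree-qt1"],
    contained_by={"lun-v2": "qtree-qt1"},          # the LUN lives inside the qtree
    tier_of={"lun-v2": "gold", "qtree-qt1": "silver"},
    allocated_gb={"lun-v2": 100, "qtree-qt1": 500},
)
print(cost)  # 100.0 : only the containing qtree is charged, at the silver rate
```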
  • the method 700 can also include accounting for storage object consumption of the storage consumer across the heterogeneous storage objects in a step 725.
  • the step 725 can be performed by the capacity accounting module 412.
  • the method 700 can further include the analytics module 420 determining a storage consumption pattern in a step 730.
  • the storage consumption pattern can include a minimum and a maximum storage capacity consumed in the past year.
  • the storage consumption pattern can also include an average storage space consumed by a storage consumer.
  • the storage consumption pattern can also include a storage consumption trend, such as an average storage capacity consumed per day, per month, or per week.
  • the storage consumption pattern allows the storage provider to determine what to charge the storage consumers and which cost model to use.
  • the accounting can be used to calculate the potential revenue from the storage consumers and the potential cost of maintaining the storage service. From the storage consumer side, the storage consumption pattern allows a storage consumer to determine how much is paid to the storage provider, and whether a change in payment plan or storage tier can benefit the storage consumer.
  • the analytics module 420 can assign a new storage tier for a first storage object of the storage consumer to reduce an original storage cost of the first storage object.
  • the original storage cost can be identified from the storage cost accounting of the step 720.
  • the assignment of the new storage tier includes determining a new storage tier, at a reduced storage cost compared to the original storage cost, that can satisfy the storage consumption pattern.
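  • The consumption-pattern summary and tier reassignment just described can be sketched as computing the minimum, maximum, and average consumption and then picking the cheapest tier that still satisfies the observed peak. The tier catalog and the "satisfies" rule are assumptions for illustration only.

```python
# Sketch of consumption-pattern analysis and cheaper-tier selection.
TIERS = {  # cost per GB and the capacity each tier comfortably supports
    "gold":   {"cost": 0.50, "max_gb": 10_000},
    "silver": {"cost": 0.20, "max_gb": 5_000},
    "bronze": {"cost": 0.05, "max_gb": 1_000},
}

def consumption_pattern(samples_gb):
    return {"min": min(samples_gb), "max": max(samples_gb),
            "avg": sum(samples_gb) / len(samples_gb)}

def cheaper_tier(current_tier, pattern):
    candidates = [(name, spec) for name, spec in TIERS.items()
                  if spec["cost"] < TIERS[current_tier]["cost"]
                  and spec["max_gb"] >= pattern["max"]]
    if not candidates:
        return current_tier
    return min(candidates, key=lambda item: item[1]["cost"])[0]

pattern = consumption_pattern([300, 420, 380, 450])   # GB consumed over the past year
print(pattern["max"])                  # 450
print(cheaper_tier("gold", pattern))   # bronze: cheapest tier that still fits the pattern
```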
  • FIG. 8 is a user interface diagram illustrating an example of a user interface 800 of the capacity accountability system 102.
  • the user interface 800 can be generated by the interface module 414.
  • the user interface 800 provides storage provider and storage consumer administrators access to the capacity accounting data.
  • the user interface 800 facilitates generation of a report 801 to answer a question regarding used or provisioned capacity and the storage cost associated with such use or such provisioning.
  • the user interface 800 includes an example of the report 801 generated for a number of storage consumers.
  • the report 801 includes a consumer identity 802, such as by business units, a tier level 804, a tier cost 806, a provisioned capacity 808, and a consumed capacity 810.
  • the report 801 can be sorted by any of the above variables.
  • the example interface 800 also includes a menu 811.
  • the menu 811 includes additional ways to sort, filter, and organize the report 801.
  • the menu 811 can include sorting or filtering of the report 801 by a service application 812, a data center 814, a host 816, an internal volume 818, or a virtual machine 820, each of which can be a storage consumer account.
  • the menu 811 can also include sorting or filtering of the report 801 by a protection type 822, a resource name 824, a resource type 826, a service cost 828, a storage object identifier 830, a storage access type 832, a storage pool identifier 834, or a specific containment level 836, each of which can be a storage object type or a storage object service type that defines the tier level 804.
  • the specific containment level 836 enables the capacity accounting module 412 to sort the report 801 by the identifier of a specific storage object hierarchy level, such as a Q-tree.
  • the user interface 800 can be access in a variety of ways. For example, configuration and generation of the report 801 is available to storage provider and storage consumer administrators in at least three ways:
  • Pre-specified reports receiving automatically generated versions of the report 801 from the capacity accountability system 102 pre-configured for the storage administrators.
  • Drag-And-Drop reports configuring the report 801 through the interface module 414 by selecting specific filters and sorting variables as described above to create the report 801 on the fly.
  • the capacity accountability system 102 supports multi-tenancy of storage consumer administrators, limiting a storage consumer administrator user access only to the capacity-related data which was made available for the storage consumer administrator user by the storage provider administrator.
  • the multi-tenancy is achieved by creating groups that include business entities at different levels of hierarchy (which can be a tenant, line of business, business unit, or project) and assigning the storage consumer administrator user to certain groups.
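  • As an illustration of the report 801 and the group-based multi-tenancy restriction above, the sketch below filters report rows to the groups assigned to a storage consumer administrator and sorts them by consumed capacity. The row fields mirror the columns named in FIG. 8; the group-membership data is an assumption.

```python
# Sketch of report 801 rows with group-based multi-tenancy filtering.
report_rows = [
    {"consumer": "BU1", "tier": "gold",   "tier_cost": 0.50, "provisioned_gb": 1000, "consumed_gb": 320},
    {"consumer": "BU4", "tier": "silver", "tier_cost": 0.20, "provisioned_gb": 400,  "consumed_gb": 150},
    {"consumer": "BU5", "tier": "bronze", "tier_cost": 0.05, "provisioned_gb": 900,  "consumed_gb": 700},
]

def visible_rows(rows, user_groups, group_of_consumer):
    """A consumer administrator only sees rows for consumers in groups the
    storage provider administrator made available to that user."""
    return [row for row in rows if group_of_consumer[row["consumer"]] in user_groups]

rows = visible_rows(report_rows, user_groups={"tenant-acme"},
                    group_of_consumer={"BU1": "tenant-acme", "BU4": "tenant-acme", "BU5": "tenant-globex"})
print(sorted(rows, key=lambda row: row["consumed_gb"], reverse=True))  # sorted by consumed capacity
```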

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Techniques to account for storage consumption and capacity allocation across heterogeneous storage objects are disclosed. A capacity accountability system can ascertain a set of heterogeneous storage objects provisioned for a storage consumer, where the heterogeneous storage objects are categorized by storage object hierarchy levels. The capacity accountability system can then identify an association between the storage consumer and a storage object hierarchy level and account for storage object consumption and storage capacity allocation of the storage consumer by normalizing storage consumption data and capacity allocation data at the storage object hierarchy level across the heterogeneous storage objects.

Description

CAPACITY ACCOUNTING FOR HETEROGENEOUS STORAGE SYSTEMS
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Patent Application No. 13/796,847 filed 12 March 2013, which is hereby incorporated by reference in its entirety.
COPYRIGHT NOTICE/PERMISSION
[0002] A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings hereto: Copyright © 2013, NetApp, Inc., All Rights Reserved.
BACKGROUND
[0003] Storage of large quantities of data for application services is costly and complex. Typically, an information technology (IT) department of an enterprise works with different vendors to individually track purchase and usage of storage capacity for different storage needs. Because of differences in storage needs, a business entity may use different types of storage objects stored on different storage devices, accessible via different storage access protocols, and utilize different storage services. Typically, different manual accounting methods are used for keeping track of storage capacity of storage objects for different types of storage objects. However, a manual process to account for the storage capacity consumption and for the storage capacity allocation often results in inaccurate (e.g., duplicate) accounting due to the heterogeneous storage objects used. The resulting capacity accounting report thus is inaccurate and may result in a failure to optimize for a cost-effective storage solution for the business entity.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] One or more embodiments of the present invention are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
[0005] FIG. 1 is a block diagram illustrating a system environment for a capacity accountability system;
[0006] FIG. 2A is a block diagram illustrating a network storage system which may provide a portion of the managed storage space of the capacity accountability system in one embodiment;
[0007] FIG. 2B is a block diagram illustrating a distributed or clustered network storage system which may provide a portion of the managed storage space of the capacity accountability system in one embodiment;
[0008] FIG. 3 is a block diagram illustrating an embodiment of a storage server;
[0009] FIG. 4 is a block diagram illustrating a control flow of a capacity accountability system;
[0010] FIG. 5 is a block diagram illustrating an example of a mechanism to avoid duplication of capacity accounting for storage objects in different storage object hierarchy levels;
[0011] FIG. 6 is a flow diagram illustrating an example of a flow chart of a method of operating the capacity accountability system;
[0012] FIG. 7 is a flow diagram illustrating another example of a flow chart of a method of operating the capacity accountability system; and
[0013] FIG. 8 is a user interface diagram illustrating an example of a user interface of the capacity accountability system.
DETAILED DESCRIPTION
[0014] In the following detailed description of embodiments of the invention, reference is made to the accompanying drawings in which like references indicate similar elements, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, functional, and other changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims. References in this specification to "an embodiment," "one embodiment," or the like, mean that the particular feature, structure or characteristic being described is included in at least one embodiment of the present invention. However, occurrences of such phrases in this specification do not necessarily all refer to the same embodiment.
[0015] The techniques introduced here enable storage administrators to account for provisioned and used storage capacity for capacity consumers accurately across data centers having heterogeneous storage objects. Storage capacity consumers can be, for example, applications, business entities, or physical or virtual hosts. The storage infrastructure across the data centers can be based on multiple storage device vendors utilizing multiple storage architectures. The storage infrastructure can maintain different storage tiers differing in terms of storage access capability and storage service capability. The storage infrastructure can also include multiple protocol access mechanisms allowing block access, file access, or both.
[0016] Today's applications use multiple storage systems across data centers with shared storage infrastructure. Each type of storage object has a different format in terms of virtualization and indirection, making storage capacity consumption tracking error prone. Hence, tracking capacity across multiple storage systems built on different storage technologies is subject to inaccuracy.
[0017] To allow for accurate capacity accounting across the heterogeneous storage objects, the techniques introduced here reconcile different storage object hierarchy/containment levels across the heterogeneous storage objects to accurately reflect associations between storage capacity consumers and provisioned or used storage capacity. The disclosed capacity accountability system tracks the relationships amongst multiple storage capacity consumers and heterogeneous storage objects. The tracked relationship data structure is then used to normalize the storage object hierarchy/containment levels of the heterogeneous storage objects when accounting for storage capacity.
[0018] The normalization technique introduced here allows for transparent addition of new storage technologies into the managed storage space of the capacity accountability system, requiring almost no development time for the addition. Having multiple technologies in a single capacity accounting datamart allows storage administrators to quickly determine how new storage space is utilized. The capacity accounting datamart here refers to an accessible data store capable of returning specific capacity accounting data for specific storage consumer(s).
[0019] The disclosed capacity accountability system further provides an on-the-fly generation of capacity accounting reports. Because of the normalization technique, users of the system can quickly retrieve the necessary data regarding storage costs without technical knowledge of the storage architecture implementations in the managed storage space.
[0020] In various embodiments, a capacity trending mechanism provides valuable business analytics for both a storage provider and a capacity consumer. The capacity trending mechanism enables the storage provider to accurately allocate storage devices and storage capacity tailor-fitted for various storage capacity consumers based on the trending information. The capacity consumer can efficiently select a cost-effective capacity usage plan from the storage providers based on the trending information and the capacity provision modifications generated by the capacity trending mechanism.
[0021] Some embodiments have other aspects, elements, features, and steps in addition to or in place of what is described above. These potential additions and replacements are described throughout the rest of the specification.
[0022] Turning now to the figures, FIG. 1 is a block diagram illustrating a system environment 100 for a capacity accountability system 102. The capacity accountability system 102 can be connected via a network channel 104 to a managed storage space 106. The capacity accountability system 102 can be a general or special purpose computer system. The capacity accountability system 102 includes one or more devices with computer functionalities, each device including a computer-readable storage medium (e.g., a non-transitory storage medium) storing executable instructions and a processor for executing the executable instructions. The managed storage space 106 includes a plurality of storage devices. For example, the managed storage space 106 can include at least one data center 108. The network channel 104 can be any form of communication network that is capable of providing access to a data storage system. The network channel 104 can be wired, wireless, or a combination of both. For example, the network channel 104 can include Ethernet networks, cellular networks, storage networks, or any combination thereof.
[0023] The network channel 104 may be, for example, a local area network (LAN), wide area network (WAN), metropolitan area network (MAN), global area network (GAN) such as the Internet, a Fiber Channel fabric, or any combination of such interconnects. The network channel 104 may include multiple network storage protocols including a media access layer of network drivers (e.g., gigabit Ethernet drivers) that interface with network protocol layers, such as the Internet Protocol (IP) layer and its supporting transport mechanisms, the Transmission Control Protocol (TCP) layer and the User Datagram Protocol (UDP) layer. The network channel 104 may include a file system protocol layer providing multi-protocol file access and, to that end, includes support for one or more of the Direct Access File System (DAFS) protocol, the Network File System (NFS) protocol, the Common Internet File System (CIFS) protocol and the Hypertext Transfer Protocol (HTTP) protocol. A VI layer can be implemented together with the network channel 104 to provide direct access transport (DAT) capabilities, such as Remote Direct Memory Access (RDMA), as required by the DAFS protocol. An Internet Small Computer System Interface (iSCSI) driver layer can be implemented with the network channel 104 to provide block protocol access over the TCP/IP network protocol layers, while a Fibre Channel (FC) driver layer receives and transmits block access requests and responses to and from the storage server. In certain cases, a Fibre Channel over Ethernet layer may also be operative in the network channel 104 to receive and transmit requests and responses to and from the storage server. The FC and iSCSI drivers provide respective FC- and iSCSI-specific access control to the blocks and, thus, manage exports of logical unit numbers (LUNs) to either iSCSI or FCP or, alternatively, to both iSCSI and FCP when accessing data blocks on the storage server.
[0024] Each data center can include at least a filesystem 110 that accounts for the hosts and storage objects within the filesystem 110. The filesystem 110 can be an interactive store that is capable of providing access to a set of storage objects, such as files, Logical Unit Numbers (LUNs), partitions, qtrees, and volumes. A qtree is a subset of a volume to which a quota can be applied to limit its size. The filesystem 110 can include multiple hierarchical levels of storage objects. A storage object hierarchical level is an enumerated level of containment for a storage object. For example, a LUN can be at a higher storage object hierarchical level than a Q-tree and a Q-tree can be at a higher storage object hierarchical level than a volume. A storage object is a form of data container. Thus, the highest storage object hierarchical level can denote the largest accessible data container, capable of storing smaller containers, all the way down to the smallest accessible data container denoted by the lowest storage object hierarchical level.
[0025] The filesystem 110 can be hosted by a cluster 112 of storage hosts 114. The storage hosts 114 can be storage servers, such as the storage servers described in FIGs. 2A, 2B, and 3 discussed below.
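As a purely illustrative, non-limiting sketch of the containment concept above, the following Python model tags each storage object with an enumerated hierarchy level and lets an accounting routine walk every object a container holds. The class, field names, and level numbering are hypothetical and are not part of the filesystem 110 interface described in this disclosure.

```python
from dataclasses import dataclass, field
from typing import Iterator, List, Optional

@dataclass
class StorageObject:
    """Hypothetical storage object in a containment hierarchy."""
    object_id: str
    object_type: str              # e.g., "volume", "qtree", "lun"
    hierarchy_level: int          # enumerated containment level (numbering convention is illustrative)
    capacity_allocated: int = 0   # bytes provisioned to this object
    capacity_consumed: int = 0    # bytes actually used
    consumer_id: Optional[str] = None   # storage consumer it is charged to, if any
    children: List["StorageObject"] = field(default_factory=list)

def walk(obj: StorageObject) -> Iterator[StorageObject]:
    """Yield a storage object and every storage object it contains."""
    yield obj
    for child in obj.children:
        yield from walk(child)

# Example: a volume containing a qtree that in turn contains a LUN.
lun = StorageObject("lun-001", "lun", hierarchy_level=3, capacity_allocated=10**9)
qtree = StorageObject("qtree-01", "qtree", hierarchy_level=2, children=[lun])
volume = StorageObject("vol-01", "volume", hierarchy_level=1, children=[qtree])
print([o.object_id for o in walk(volume)])   # ['vol-01', 'qtree-01', 'lun-001']
```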
[0026] The capacity accountability system 102 is for keeping an accurate capacity accounting of the managed storage space 106. The capacity accounting can include accounting for storage object consumption of storage consumers in the managed storage space 106 across heterogeneous storage objects. The capacity accounting can further include accounting for storage capacity allocation of the storage consumers in the managed storage space 106 across the heterogeneous storage objects. The capacity accounting can also include generating reporting of other metadata relating to the storage usage by each of the storage consumers, including idle capacity and storage usage trends. The storage usage trends can be used to calculate storage usage estimations and to recommend changes to the storage capacity provisions.
[0027] A storage consumer in this disclosure is defined as an account on the capacity accountability system 102 associated with an entity having control over the use of certain storage spaces on the managed storage space 106. For example, the storage consumer can be a business entity, a service application of a business entity, a division of a business entity, a physical host, or a virtual host. "Heterogeneous" storage objects in this disclosure are defined as storage objects, virtual or physical, that have at least two different manners of storing data. For example, heterogeneous storage objects can be accessible by at least two different access protocols. For another example, heterogeneous storage objects can be stored under at least two different storage architectures. For yet another example, heterogeneous storage objects can be stored on at least two different storage devices. As a more specific example, the storage objects can be LUNs, fixed partitions or flexible partitions, virtual volumes, or physical volumes across different types of filesystem architectures.
[0028] A client device 116 can access the capacity accountability system 102 across the network channel 104. The client device 116 can be any electronic device with a processor capable of data communication through the network channel 104. The client device 116 can access the capacity accounting reports generated by the capacity accountability system 102. For example, the client device 116 can be a computer operated by a storage network administrator or a computer operated by one of the storage consumer accounts.
[0029] FIG. 2A is a block diagram illustrating a network storage system 200 which may provide a portion of the managed storage space 106 of the capacity accountability system 102. Each of storage servers 210 (storage servers 210A, 210B) manages multiple storage units 270 (storage 270A, 270B) that include mass storage devices. The storage servers 210 provide data storage services to one or more clients 202 through a network 230. Network 230 may be, for example, LAN, WAN, MAN, GAN such as the Internet, a Fiber Channel fabric, or any combination of such interconnects. Each of clients 202 may be, for example, a conventional personal computer (PC), server-class computer, workstation, handheld computing or communication device, a virtual machine, or other special or general purpose computer.
[0030] Storage of data in storage units 270 is managed by storage servers 210 which receive and respond to various I/O requests from clients 202, directed to data stored in or to be stored in storage units 270. Data is accessed (e.g., in response to the I/O requests) in units of blocks, which in the present embodiment are 4KB in size, although other block sizes (e.g., 512 bytes, 2KB, 8KB, etc.) may also be used. For one embodiment, 4KB as used herein refers to 4,096 bytes. For an alternate embodiment, 4KB refers to 4,000 bytes. Storage units 270 constitute mass storage devices which can include, for example, flash memory, magnetic or optical disks, or tape drives, illustrated as disks 271 (271A, 271B). The storage devices 271 can further be organized into arrays (not illustrated) implementing a Redundant Array of Inexpensive Disks/Devices (RAID) scheme, whereby storage servers 210 access storage units 270 using one or more RAID protocols. Although illustrated as separate components, for one embodiment, a storage server 210 and storage unit 270 may be a part of/housed within a single device.
[0031] Storage servers 210 can provide file-level service such as used in a network-attached storage (NAS) environment, block-level service such as used in a storage area network (SAN) environment, or both file-level and block-level service, or other data access services. Although storage servers 210 are each illustrated as single units in FIG. 2A, a storage server can, in other embodiments, be a distributed entity; for example, a storage server may include a separate network element or module (an "N-module") and disk element or module (a "D-module"). In one embodiment, the D-module includes storage access components configured to service client requests. The N-module includes functionality that enables client access to storage access components (e.g., the D-module) and may include protocol components, such as CIFS, NFS, or an IP module, for facilitating such connectivity. Details of a distributed architecture environment involving D-modules and N-modules are described further below with respect to FIG. 2B.
[0032] In other embodiments, storage servers 210 are referred to as network storage subsystems. A network storage subsystem provides networked storage services for a specific application or purpose. Examples of such applications include database applications, web applications, Enterprise Resource Planning (ERP) applications, etc., which may be at least partially implemented in a client. Examples of such purposes include file archiving, backup, mirroring, etc., provided, for example, on an archive, backup, or secondary storage server connected to a primary storage server. A network storage subsystem can also be implemented with a collection of networked resources provided across multiple storage servers and/or storage units.
[0033] In the embodiment of FIG. 2A, one of the storage servers (e.g., storage server 210A) may function as a primary provider of data storage services to client 202. Data storage requests from client 202 are serviced using storage device 271A organized as one or more storage objects. In such an embodiment, a secondary storage server (e.g., storage server 210B) takes a standby role in a mirror relationship with the primary storage server, replicating storage objects from the primary storage server to storage objects organized on storage devices of the secondary storage server (e.g., disks 270B). In operation, the secondary storage server does not service requests from client 202 until data in the primary storage object becomes inaccessible such as in a disaster with the primary storage server, such event considered a failure at the primary storage server. Upon a failure at the primary storage server, requests from client 202 intended for the primary storage object are serviced using replicated data (i.e., the secondary storage object) at the secondary storage server.
[0034] It will be appreciated that in other embodiments, network storage system 200 may include more than two storage servers. In these cases, protection relationships may be operative between various storage servers in system 200 such that one or more primary storage objects from storage server 210A may be replicated to a storage server other than storage server 210B (not shown in this figure). Secondary storage objects may further implement protection relationships with other storage objects such that the secondary storage objects are replicated, e.g., to tertiary storage objects, to protect against failures with secondary storage objects. Accordingly, the description of a single-tier protection relationship between primary and secondary storage objects of storage servers 210 should be taken as illustrative only.
[0035] FIG. 2B is a block diagram illustrating a distributed or clustered network storage system 220 which may provide a portion of the managed storage space 106 of the capacity accountability system 102 in one embodiment. System 220 may include storage servers implemented as nodes 210 (nodes 210A, 210B) which are each configured to provide access to storage devices 271. In FIG. 2B, nodes 210 are interconnected by a cluster switching fabric 225, which may be embodied as an Ethernet switch.
[0036] Nodes 210 may be operative as multiple functional components that cooperate to provide a distributed architecture of system 220. To that end, each node 210 may be organized as a network element or module (N-module 221A, 221B), a disk element or module (D-module 222A, 222B), and a management element or module (M-host 223A, 223B). In one embodiment, each module includes a processor and memory for carrying out respective module operations. For example, N-module 221 may include functionality that enables node 210 to connect to client 202 via network 230 and may include protocol components such as a media access layer, IP layer, TCP layer, UDP layer, and other protocols known in the art. N-module 221 can be the client module 102 of FIG. 1.
[0037] In contrast, D-module 222 may connect to one or more storage devices 271 via cluster switching fabric 225 and may be operative to service access requests on devices 270. In one embodiment, the D-module 222 includes storage access components such as a storage abstraction layer supporting multi-protocol data access (e.g., the CIFS protocol, the NFS protocol, and the HTTP protocol), a storage layer implementing storage protocols (e.g., RAID protocol), and a driver layer implementing storage device protocols (e.g., SCSI protocol) for carrying out operations in support of storage access operations. In the embodiment shown in FIG. 2B, a storage abstraction layer (e.g., file system) of the D-module divides the physical storage of devices 270 into storage objects. Requests received by node 210 (e.g., via N-module 221) may thus include storage object identifiers to indicate a storage object on which to carry out the request.
[0038] Also operative in node 210 is M-host 223 which provides cluster services for node 210 by performing operations in support of a distributed storage system image, for instance, across system 220. M-host 223 provides cluster services by managing a data structure such as a replicated database (RDB) 224 (RDB 224A, RDB 224B) which contains information used by N-module 221 to determine which D-module 222 "owns" (services) each storage object. The various instances of RDB 224 across respective nodes 210 may be updated regularly by M-host 223 using conventional protocols operative between each of the M-hosts (e.g., across network 230) to bring them into synchronization with each other. A client request received by N-module 221 may then be routed to the appropriate D-module 222 for servicing to provide a distributed storage system image.
[0039] It should be noted that while FIG. 2B shows an equal number of N-modules and D-modules making up a node in the illustrative system, a different number of N- and D-modules can make up a node in accordance with various embodiments of instantaneous cloning. For example, there may be a number of N-modules and D-modules of node 210A that does not reflect a one-to-one correspondence between the N- and D-modules of node 210B. As such, the description of a node comprising one N-module and one D-module for each node should be taken as illustrative only.
[0040] FIG. 3 is a block diagram illustrating an embodiment of a storage server 300, such as storage servers 210A and 210B of FIG. 2A, embodied as a general or special purpose computer including a processor 302, a memory 310, a network adapter 320, a user console 312 and a storage adapter 340 interconnected by a system bus 350, such as a conventional Peripheral Component Interconnect (PCI) bus. Certain standard and well-known components, which are not germane to the understanding of embodiments of the present invention, are not shown. The processor 302 is the central processing unit (CPU) of the storage server 210 and, thus, controls its overall operation. The processor 302 accomplishes this by executing software stored in memory 310. In one embodiment, multiple processors 302 or one or more processors 302 with multiple cores are included in the storage server 210. For one embodiment, individual adapters (e.g., network adapter 320 and storage adapter 340) each include a processor and memory for carrying out respective module operations.
[0041] Memory 310 includes storage locations addressable by processor 302, network adapter 320 and storage adapter 340 configured to store processor-executable instructions and data structures associated with implementation of a storage architecture. Storage operating system 314, portions of which are typically resident in memory 310 and executed by processor 302, functionally organizes the storage server 210 by invoking operations in support of the storage services provided by the storage server 210. It will be apparent to those skilled in the art that other processing means may be used for executing instructions and other memory means, including various computer readable media, may be used for storing program instructions pertaining to the inventive techniques described herein. It will also be apparent that some or all of the functionality of the processor 302 and executable software can be implemented by hardware, such as integrated circuits configured as programmable logic arrays, ASICs, and the like.
[0042] Network adapter 320 comprises one or more ports to couple the storage server to one or more clients over point-to-point links or a network. Thus, network adapter 320 includes the mechanical, electrical and signaling circuitry needed to couple the storage server to one or more clients over a network. The network adapter 320 may include protocol components such as a Media Access Control (MAC) layer, CIFS, NFS, IP layer, TCP layer, UDP layer, and other protocols known in the art for facilitating such connectivity. Each client may communicate with the storage server over the network by exchanging discrete frames or packets of data according to pre-defined protocols, such as TCP/IP.
[0043] Storage adapter 340 includes one or more ports having input/output (I/O) interface circuitry to couple the storage devices (e.g., disks) to bus 321 over an I/O interconnect arrangement, such as a conventional high-performance, FC or SAS link topology. Storage adapter 340 typically includes a device controller (not illustrated) comprising a processor and a memory, the device controller configured to control the overall operation of the storage units in accordance with read and write commands received from storage operating system 314. As used herein, data written by (or to be written by) a device controller in response to a write command is referred to as "write data," whereas data read by (or to be read by) a device controller in response to a read command is referred to as "read data."
[0044] User console 312 enables an administrator to interface with the storage server to invoke operations and provide inputs to the storage server using a command line interface (CLI) or a graphical user interface (GUI). In one embodiment, user console 312 is implemented using a monitor and keyboard.
[0045] When implemented as a node of a cluster, such as cluster 220 of FIG. 2B, the storage server further includes a cluster access adapter 330 (shown in phantom/broken lines) having one or more ports to couple the node to other nodes in a cluster. In one embodiment, Ethernet is used as the clustering protocol and interconnect media, although it will be apparent to one of skill in the art that other types of protocols and interconnects can be utilized within the cluster architecture.
[0046] FIG. 4 is a block diagram illustrating a control flow of a capacity accountability system 400. The capacity accountability system 400 can be the capacity accountability system 102 of FIG. 1. The capacity accountability system 400 can include one or more methods of performing capacity accounting. The one or more methods can be implemented by modules described below. The modules can be implemented as hardware components, software instructions on non-transitory memory executable by a processor, or any combination thereof. For example, the modules described can be software modules implemented as instructions on a non-transitory memory capable of being executed by a processor or a controller on a machine described in FIG. 3.
[0047] Each of the modules can operate individually and independently of other modules. Some or all of the modules can be combined as one module. A single module can also be divided into sub-modules, each performing a separate method step or method steps of the single module. The modules can share access to a memory space. One module can be coupled to another module and access data processed by that module by sharing a physical connection or a virtual connection, directly or indirectly.
[0048] The capacity accountability system 400 can include additional, fewer, or different modules for various applications. Conventional components such as network interfaces, security functions, load balancers, failover servers, management and network operations consoles, and the like are not shown so as to not obscure the details of the system.
[0049] The capacity accountability system 400 includes a consumer account store 402, a capacity provision store 404, an allocation module 406, a storage relation module 408, a storage object relation store 410, a capacity accounting module 412, an interface module 414, an application programming interface (API) module 416, a capacity datamart 418, and an analytics module 420. Alternatively, the allocation module 406 and the capacity provision store 404 can instead be outside of the capacity accountability system 400 (not shown), communicating with modules of the capacity accountability system 400 via the API module 416.
[0050] The consumer account store 402 maintains a record entry for each of the storage consumers including a consumer account profile. The record entry can include one or more of the following: an identifier unique to the storage consumer and a configuration file defining the reporting format of the capacity accounting generated by the capacity accountability system 400. The storage consumer accounts can be stored in graph structures, relational tables, linked lists, tree structures, or any combination thereof. The structure can denote how one storage consumer account has control over another storage consumer account. For example, the storage consumer accounts can be stored in a hierarchical structure where a root node includes a business entity consumer account, and the specific business divisions, service applications, content groups, and data volumes are consumer accounts constituting branch nodes or leaf nodes. Access to the record entries can be restricted such that a security entry or key associated with the storage consumer is required for access.
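By way of a hypothetical sketch only, a hierarchical consumer account store might be modeled as below, with each record pointing to the account that controls it. The class names and methods are illustrative assumptions, not the actual interface of the consumer account store 402.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ConsumerAccount:
    """Hypothetical record entry for a storage consumer account."""
    consumer_id: str
    report_config: Dict[str, str] = field(default_factory=dict)
    parent_id: Optional[str] = None      # owning business entity, if any
    children: List[str] = field(default_factory=list)

class ConsumerAccountStore:
    """Sketch of the hierarchical structure kept by a consumer account store."""
    def __init__(self) -> None:
        self.accounts: Dict[str, ConsumerAccount] = {}

    def add(self, account: ConsumerAccount) -> None:
        # Assumes the parent account, if any, was added first.
        self.accounts[account.consumer_id] = account
        if account.parent_id:
            self.accounts[account.parent_id].children.append(account.consumer_id)

    def root_of(self, consumer_id: str) -> str:
        """Return the top-level business entity that owns this account."""
        account = self.accounts[consumer_id]
        while account.parent_id is not None:
            account = self.accounts[account.parent_id]
        return account.consumer_id
```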
[0051] The capacity provision store 404 maintains a record of capacity allocation provisions for each storage consumer account. The capacity allocation provisions can be allocated by the allocation module 406. Each of the capacity allocation provisions specifies an allocation of a storage object to the storage consumer account. Each capacity allocation provision can include a constant capacity allocation, such as 1TB of data capacity. The capacity allocation can also be variable, defined by a capacity provision rule. For example, the capacity allocation can be ten percent of storage capacity in a storage cluster, where the storage capacity of the storage cluster can increase or decrease during operation. The allocation module 406 can further specify a tier level for each capacity allocation provision. The tier level is defined by storage object type and storage object service type. The storage object type, for example, can include: a storage device model, such as a NetApp™ 6000 series storage server, a storage architecture type, such as NAS or SAN, a file system layout architecture, such as a write anywhere file layout (WAFL), an access protocol, such as NFS, SCSI, or CIFS, a storage device type, such as solid state drive, 7200 RPM hard disk, or 15000 RPM hard disk, or any combination thereof. The storage object service type, for example, can include replication service, backup service, mirroring service, deduplication service, or any combination thereof.
[0052] The allocation module 406 stores a set of rules to determine the specific tier level based on the storage object type or the storage object service type for each storage object or each set of storage objects. Storage objects having different storage object types and/or different storage object service types can be assigned the same storage tier level based on the set of rules. The tier level of a storage object can be re-configured based on available hardware and available storage services. For example, a storage object type of a storage object can be changed by reconfiguring the storage object to utilize a different set of storage host servers. A storage object service type of a storage object can be changed by removing replication service of the storage object.
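A minimal sketch of such a rule set, assuming a simple lookup keyed by storage object type and storage object service type, is shown below; the table contents and tier names are invented for illustration and are not part of the allocation module 406 as disclosed.

```python
# Hypothetical rule table mapping (storage object type, storage object service
# type) pairs to a tier level; real deployments would configure their own rules.
TIER_RULES = {
    ("SAN", "replication"): "gold",
    ("SAN", "none"): "silver",
    ("NAS", "backup"): "silver",
    ("NAS", "none"): "bronze",
}

def tier_for(object_type: str, service_type: str, default: str = "bronze") -> str:
    """Resolve a tier level from a storage object's type and service type."""
    return TIER_RULES.get((object_type, service_type), default)

print(tier_for("SAN", "replication"))   # gold
print(tier_for("NAS", "mirroring"))     # bronze (falls back to the default tier)
```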
[0053] Specific storage objects can be allocated for the storage consumer account through the allocation module 406. The allocation module 406 can store one or more network paths to access the storage objects associated with the storage consumer in the consumer profile stored on the consumer account store 402. The allocation module 406 can generate and store the capacity allocation provisions on the capacity provision store 404.
[0054] The storage relation module 408 is configured to generate a relationship data structure of heterogeneous storage objects available on the managed storage space 106 of FIG. 1. The relationship data structure can associate each of the heterogeneous storage objects with at least one of the storage consumer accounts known to the capacity accountability system 400. The relationship data structure can be stored on a storage object relation store 410. The relationship data structure can be, for example, a data graph, a relational database, or a tree structure. The relationship data structure can also store a specific storage content associated with each of the heterogeneous objects. The specific storage content can be based on a specific service application provided by the storage consumer. For example, the specific storage content can be profile picture photographs provided by an indexed photograph content provider service of a storage consumer.
[0055] The storage relation module 408 can generate the relationship data structure by traversing each instance of the filesystem 110 across the managed storage space 106. The storage relation module 408 can also generate the relationship data structure based on the associations generated through the allocation module 406.
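For illustration, a relationship data structure of this kind could be assembled from the objects discovered by a filesystem traversal together with the allocation records, as in the hypothetical sketch below; the function name and data layout are assumptions, not the storage relation module 408's actual interface.

```python
from typing import Dict, Iterable, List, Tuple

def build_relationships(storage_objects: Iterable[Tuple[str, str]],
                        allocations: Dict[str, str]) -> List[Tuple[str, str, str]]:
    """Associate discovered storage objects with storage consumer accounts.

    `storage_objects` yields (object_id, object_type) pairs discovered by
    traversing each filesystem; `allocations` maps object identifiers to
    consumer identifiers as recorded through the allocation module. Objects
    with no recorded allocation are tagged with a "not charged" placeholder.
    """
    relationships: List[Tuple[str, str, str]] = []
    for object_id, object_type in storage_objects:
        consumer = allocations.get(object_id, "not charged")
        relationships.append((consumer, object_id, object_type))
    return relationships

# Example: two LUNs discovered in a data center, one of them unallocated.
rels = build_relationships(
    [("lun-001", "lun"), ("lun-002", "lun")],
    {"lun-001": "business-unit-1"},
)
print(rels)
```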
[0056] The capacity accounting module 412 performs capacity accounting for one or more of the storage consumers. The capacity accounting module 412 is configured to generate a storage object consumption accounting that is specific to the one or more storage consumers. A storage object consumption accounting is a structured report to present how much storage capacity is used by one or more particular storage consumers. The capacity accounting module 412 is further configured to be able to generate a storage capacity allocation accounting. A storage capacity allocation accounting is a structured report to present how much storage capacity is provisioned/allocated to one or more particular storage consumers. The structured reports can be interactive to answer questions from a report reader about specific storage objects and about specific storage consumers. For example, the report reader can query regarding specific storage consumers. The report reader can also sort or filter based on specific storage consumers or storage object types. The capacity accounting module 412 can be configured and activated via an interface module 414. Once configured, the capacity accounting module 412 can generate the capacity accounting reports periodically. The capacity accounting module 412 can also be configured and activated via an API module 416 (application programming interface module).
[0057] When accounting for storage capacity allocation and storage object consumption, the capacity accounting module 412 normalizes the storage capacity allocation data and the storage consumption data from the storage object relation store 410 to avoid duplicate accounting. For example, when a first storage object includes a second storage object or vice versa, the capacity accounting module 412 can discount a first consumption data of a first storage object when a second consumption data of a second storage object has already been accounted for. That is, when the first storage object and the second storage object are within the same branch of storage object containment hierarchy, the consumption data is accounted for once. For another example, the capacity accounting module 412 can account for a single storage object consumption when a plurality of storage hosts maps to a single storage object. A host group table including storage object types of each host storage server and paths to storage objects on the host storage server can be stored on the storage object relation store 410 for the purpose of capacity accounting.
[0058] The capacity accounting module 412 also includes a mechanism to reconcile duplicate capacity accounting due to the relationships between the storage consumers. The capacity accounting module 412 can normalize the capacity accounting by tracking the relationships amongst the multiple storage capacity consumers, including a relationship tree of the storage consumers in the consumer account store 402. For example when accounting for storage capacity, the capacity accounting module 412 can account a single storage capacity allocation for a plurality of application services of a business entity sharing storage space on a single storage object. Here, the plurality of application services each can have a storage consumer account that is a subservient storage consumer account under the storage consumer account of the business entity. In this normalization scheme, the capacity accounting module 412 identifies a single storage consumer that can account for the entirety of the storage capacity allocation. The association between the application services of the business entity can be identified from the consumer account store 402.
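A minimal sketch of this consumer-side normalization is shown below, assuming consumption or allocation records keyed by consumer and storage object, plus a parent map describing which account is subservient to which. The function and variable names are illustrative only, not the capacity accounting module 412's actual implementation.

```python
from typing import Dict, List, Tuple

def normalize_by_consumer_hierarchy(
        charges: List[Tuple[str, str, int]],
        parent_of: Dict[str, str]) -> Dict[str, int]:
    """Roll subservient consumer accounts up to a single accountable consumer.

    `charges` holds (consumer_id, storage_object_id, capacity) records;
    `parent_of` maps a subservient account to its owning account. Capacity
    for a shared storage object is accounted once, against the common owner.
    """
    accounted: Dict[Tuple[str, str], int] = {}
    for consumer, obj, capacity in charges:
        # Walk up to the top-level owner of this consumer account.
        owner = consumer
        while owner in parent_of:
            owner = parent_of[owner]
        # Account each (owner, storage object) pair only once.
        accounted.setdefault((owner, obj), capacity)

    totals: Dict[str, int] = {}
    for (owner, _obj), capacity in accounted.items():
        totals[owner] = totals.get(owner, 0) + capacity
    return totals

# Example: two application services of one business entity share volume "vol-01".
print(normalize_by_consumer_hierarchy(
    [("app-a", "vol-01", 500), ("app-b", "vol-01", 500)],
    {"app-a": "business-entity-1", "app-b": "business-entity-1"},
))   # {'business-entity-1': 500}
```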
[0059] These normalization mechanisms can apply to both accounting of capacity allocation and capacity consumption. The normalized capacity accounting data can be stored in a capacity datamart 418. The capacity datamart 418 can be indexed for easy querying of the capacity accounting for individual or groups of storage consumers.
[0060] The capacity accountability system 400 can include an analytics module 420. The analytics module 420 can calculate a storage usage trend based on the capacity accounting by the capacity accounting module 412. The storage usage trend can be generated based on the ratio of storage capacity consumed to allocated capacity. The storage usage trend can also be based on the read/write access frequency of the storage objects for the storage consumer. The storage usage trend generated can be specific to a service application of the storage consumer. The analytics module 420 can determine a modification to a capacity allocation provision based on the storage usage trend calculated.
[0061] The storages, or "stores", described in this disclosure are hardware components or portions of hardware components for storing digital data. Each of the stores can be a single physical entity or distributed across multiple physical devices. Each of the stores can be on a separate physical device or share the same physical device or devices. Each of the stores can allocate specific storage spaces for run-time applications.
[0062] The techniques introduced in the modules herein can be implemented by programmable circuitry programmed or configured by software and/or firmware, or they can be implemented entirely by special-purpose "hardwired" circuitry, or in a combination of such forms. Such special-purpose circuitry (if any) can be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc.
[0063] FIG. 5 is a block diagram illustrating an example of a mechanism to avoid duplication of capacity accounting for storage objects 502 in different storage object hierarchy levels. Each of the storage objects 502 is associated with a capacity allocation provision 504 that charges the storage object to a storage consumer. In this example, a LUN is the highest level of storage object hierarchy levels. The mechanism traverses storage objects, following hierarchy levels from the top level (LUNs), down to Q-trees and volumes, which is the lowest-level chargeable object. The mechanism determines to which storage consumer the storage object belongs, and accounts for the capacity consumption or the capacity allocation in an accounting database, such as the storage object relation store 410. Storage access technology, virtualization type, accessing host, service application identifier, protection service type, and tier-level associated with each storage object can also be saved to the storage object relation store 410.
[0064] In this example, the mechanism ensures that when performing a capacity accounting, a storage object (such as a LUN) is not charged to one storage consumer, while another storage object (such as a Q-tree of the LUN or a volume of the LUN) with a higher storage object hierarchy level (i.e., a larger data container) is charged to another storage consumer. Any capacity which was not charged to any storage consumer is reported as "not charged capacity," and the storage provider administrator can be prompted about unaccounted-for capacity consumption or capacity allocation.
[0065] In this example, at the top level, LUNs V2 and V3 are first charged to respective associated storage consumers (i.e., BU1 and BU3). Then Q-tree QT2 is charged to the storage consumer BU4 because QT2 has an assigned storage consumer but the LUNs of QT2 do not have an assigned consumer. Then at the volume level, internal volume IV2 is charged to the storage consumer BU5 because none of its child storage objects have an assigned storage consumer.
[0066] An accounting table 514 illustrates the resulting capacity accounting under the mechanism to avoid duplicates of capacity accounting. In one example, each LUN can be restricted to only one storage consumer. The accounting table 514 does not include the storage objects V1, V4, V5, V6, V7, QT3, and QT4 because they do not have an assigned storage consumer. The storage objects QT1 and IV1 were not included in the accounting table 514 because their contained storage objects were already accounted for.
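A hypothetical sketch of this top-down charging rule is shown below; the data layout and function name are illustrative assumptions, and the traversal follows the accounting order described for FIG. 5 (LUNs first, then Q-trees, then volumes).

```python
from typing import Dict, List, Optional, Tuple

def charge_without_duplication(
        objects: List[dict],
        contains: Dict[str, List[str]]) -> Tuple[Dict[str, str], List[str]]:
    """Charge each containment branch to at most one storage consumer.

    `objects` is ordered from the top accounting level (LUNs) down to Q-trees
    and volumes; each entry has an 'id' and an optional 'consumer'.
    `contains` maps a container to the storage objects it holds.
    """
    charged: Dict[str, str] = {}     # storage object id -> consumer id
    parent = {child: holder for holder, kids in contains.items() for child in kids}

    def descendant_charged(obj_id: str) -> bool:
        return any(child in charged or descendant_charged(child)
                   for child in contains.get(obj_id, []))

    # Charge an object only if it has an assigned consumer and nothing it
    # contains has been charged already.
    for obj in objects:
        consumer: Optional[str] = obj.get("consumer")
        if consumer and not descendant_charged(obj["id"]):
            charged[obj["id"]] = consumer

    def covered(obj_id: str) -> bool:
        node: Optional[str] = obj_id
        while node is not None:          # the object itself or an ancestor is charged
            if node in charged:
                return True
            node = parent.get(node)
        return descendant_charged(obj_id)

    # Capacity in branches never charged to any consumer is "not charged capacity".
    not_charged = [obj["id"] for obj in objects if not covered(obj["id"])]
    return charged, not_charged
```

Applied to the example of FIG. 5, such a traversal would charge V2, V3, QT2, and IV2 to their assigned consumers, skip QT1 and IV1 because objects they contain are already charged, and report the remaining branches as not charged capacity.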
[0067] The mechanism described above implements support for capacity accounting of heterogeneous storage systems in the managed storage space 106, spanning storage access technologies (SAN, NAS, HTTP or any other technology), data centers (the physical storage can be in different geographical or logical locations), storage architectures (different RAID levels, disk types and data protection technologies), virtualization (physical and virtual storage and/or physical and virtual hosts), or any combination thereof. The mechanism described enables capacity accounting where each storage consumer can have assigned storage capacity on different storage systems and each storage system supports multiple storage consumers.
[0068] FIG. 6 is a flow diagram illustrating an example of a flow chart of a method 600 of operating the capacity accountability system 102. At a step 605, the capacity accountability system 102 ascertains a set of heterogeneous storage objects provisioned for a storage consumer. The heterogeneous storage objects are categorized by storage object hierarchy levels. The capacity accountability system 102 can ascertain the set of heterogeneous storage objects by determining the set using a relationship data structure of storage consumer accounts and managed storage objects, such as the one stored in the storage object relation store 410. The step 605 can be performed by the storage relation module 408.
[0069] The method 600 continues on to a step 610 where the capacity accountability system 102 identifies an association between the storage consumer and a storage object hierarchy level. The step 610 can be performed via the interface module 414 or the capacity accounting module 412. The association can be selected from the storage object hierarchy levels of the identified set of the heterogeneous storage objects. The selection can be made based on a configuration parameter to the capacity accounting module 412.
[0070] Following the step 610 in a step 615, the capacity accountability system 102 can account for storage object consumption of the storage consumer by normalizing storage consumption data at the storage object hierarchy level across the set of the heterogeneous storage objects. The capacity accountability system 102 can also, in a step 620, account for storage capacity allocation of the storage consumer by normalizing storage capacity allocation data at the storage object hierarchy level across the heterogeneous storage objects. The normalizing step can be based on the normalizing mechanisms described above for the capacity accounting module 412. Optionally, the step 620 includes calculating an idle capacity of the storage consumer based on the accounting of storage object consumption and the accounting of storage capacity allocation for the storage consumer. Both the step 615 and the step 620 can be performed by the capacity accounting module 412.
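As a small illustrative helper only, one plausible reading of the idle capacity calculated in step 620 is the difference between the allocation accounting and the consumption accounting; the sketch below is an assumption, not a required implementation.

```python
def idle_capacity(allocated_bytes: int, consumed_bytes: int) -> int:
    """Provisioned capacity the storage consumer has not yet consumed."""
    return max(allocated_bytes - consumed_bytes, 0)

# Example: 1 TB allocated, 300 GB consumed -> 700 GB idle.
print(idle_capacity(10**12, 3 * 10**11))
```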
[0071] Once an accounting of the storage object consumption is determined, the capacity accountability system 102 can calculate a storage usage trend based on the accounting of storage object consumption in a step 625. The storage usage trend can be calculated based on a percentage of the storage capacity allocated in a storage object that is actually consumed by the storage consumer. For example, a capacity consumed percentage per time period (such as day, week, or month) can be calculated. The storage usage trend can also be calculated based on access pattern of the storage object, including how frequently the storage object is written to or how frequently the storage object is read.
[0072] With the storage usage trend calculated, the capacity accountability system 102 can then determine a modification suggestion to a capacity allocation provision of the storage consumer based on the storage usage trend in a step 630. For example, when the provisioned capacity usage percentage is low, a modification suggestion to decrease provisioned capacity can be determined. For another example, when the access frequency of a storage object is low, a modification suggestion to lower the provisioned tier level can be determined, where the suggested modification tier level includes a less frequent replication service. Both step 625 and step 630 can be performed by the analytics module 420.
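The following sketch illustrates one way steps 625 and 630 could be combined; the thresholds, function names, and suggestion strings are hypothetical and purely for illustration.

```python
from typing import List

def usage_trend(consumed_samples: List[int], allocated: int) -> float:
    """Average fraction of allocated capacity consumed across periodic samples
    (e.g., one sample per day, week, or month)."""
    if allocated <= 0 or not consumed_samples:
        return 0.0
    return sum(consumed_samples) / (len(consumed_samples) * allocated)

def suggest_modification(trend: float, access_per_day: float,
                         low_usage: float = 0.2, low_access: float = 1.0) -> List[str]:
    """Return hypothetical modification suggestions; thresholds are illustrative."""
    suggestions = []
    if trend < low_usage:
        suggestions.append("decrease provisioned capacity")
    if access_per_day < low_access:
        suggestions.append("move to a lower tier with less frequent replication")
    return suggestions

trend = usage_trend([100, 120, 110], allocated=1000)   # ~0.11 consumed on average
print(suggest_modification(trend, access_per_day=0.5))
```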
[0073] FIG. 7 is a flow diagram illustrating another example of a flow chart of a method 700 of operating the capacity accountability system 102. At a step 705, the method 700 includes determining a storage content relationship between a primary storage object and a replicated storage object of the heterogeneous storage objects, where the primary storage object and the replicated storage object are updated based on same storage content. Upon determining the storage content relationship, the method 700 continues to a step 710 of generating a relationship data structure of storage consumer accounts and heterogeneous storage objects. The relationship data structure can include the storage content relationship determined in the step 705. The steps 705 and 710 can be performed by the storage relation module 408.
[0074] From the relationship data structure, the storage relation module 408 can determine a storage tier label for each of the heterogeneous storage objects based on a storage object service type and a storage object technology type in a step 715. The storage tier label can be associated with a storage cost. The step 715 can be performed by the allocation module 406. The storage cost of the storage tier can be calculated in at least two different ways. The storage cost can be based on a charge-as-you-go model, where the storage cost is presented as a cost per storage capacity consumed. The storage cost can also be based on a provision cost model, where the storage cost is presented as a cost per capacity allocated.
[0075] Following step 715, the method 700 includes a step 720 of generating a storage cost accounting of a storage consumer by traversing the relationship data structure based on the storage tier label. For example, a list of storage objects connected to the storage consumer or sub-divisions of the storage consumer can be determined from the relationship data structure. The list of storage objects can be normalized by discounting storage objects contained by other storage objects on the list. The list can also be normalized by discounting storage objects associated with sub-divisions of the storage consumer that are already accounted for. Then the storage costs of the tier levels of the remaining storage objects on the normalized list are accrued to determine the storage cost accounting.
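For illustration only, accruing tier costs over a normalized list might look like the sketch below; the tier labels, per-gigabyte costs, and cost-model names are invented placeholders rather than values defined by this disclosure.

```python
from typing import List, Tuple

TIER_COST = {            # hypothetical cost per GB, by tier label
    "gold": 0.50,
    "silver": 0.25,
    "bronze": 0.10,
}

def cost_accounting(normalized_objects: List[Tuple[str, int, int]],
                    model: str = "provision") -> float:
    """Accrue tier costs over a consumer's normalized list of storage objects.

    Each entry is (tier_label, allocated_gb, consumed_gb). Under the
    "provision" model the cost follows allocated capacity; under the
    "charge-as-you-go" model it follows consumed capacity.
    """
    total = 0.0
    for tier, allocated_gb, consumed_gb in normalized_objects:
        billable = allocated_gb if model == "provision" else consumed_gb
        total += TIER_COST[tier] * billable
    return total

objects = [("gold", 1000, 400), ("bronze", 500, 450)]
print(cost_accounting(objects, model="provision"))        # 550.0
print(cost_accounting(objects, model="charge-as-you-go")) # 245.0
```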
[0076] The storage cost accounting can include a storage cost specifically associated with the storage content referred to in the step 705. For example, the accounting can be performed by traversing through the storage objects having a relationship associated with the storage content. The accounting in the step 720 can be performed by the capacity accounting module 412.
[0077] Following the step 715, the method 700 can also include accounting for storage object consumption of the storage consumer across the heterogeneous storage objects in a step 725. The step 725 can be performed by the capacity accounting module 412. Based on the accounting of the storage object consumption, the method 700 can further include the analytics module 420 determining a storage consumption pattern in a step 730. For example, the storage consumption pattern can include a minimum and a maximum storage capacity consumed in the past year. The storage consumption pattern can also include an average storage space consumed by a storage consumer. The storage consumption pattern can also include a storage consumption trend, such as an average storage capacity consumed per day, per month, or per week. The storage consumption pattern allows the storage provider to determine what to charge the storage consumers and which cost model to use. The accounting can be used to calculate the potential revenue from the storage consumers and the potential cost of maintaining the storage service. From the storage consumer side, the storage consumption pattern allows a storage consumer to determine how much is paid to the storage provider, and whether a change in payment plan or storage tier can benefit the storage consumer.
[0078] From the storage consumption pattern, the analytics module 420 can assign a new storage tier for a first storage object of the storage consumer to reduce an original storage cost of the first storage object. The original storage cost can be identified from the storage cost accounting of the step 720. The assignment of the new storage tier includes determining a new storage tier, at a reduced storage cost compared to the original storage cost, that can still satisfy the storage consumption pattern.
[0079] FIG. 8 is a user interface diagram illustrating an example of a user interface 800 of the capacity accountability system 102. The user interface 800 can be generated by the interface module 414. The user interface 800 provides a storage administrator access to the storage object relation store 410 constructed by the storage relation module 408. The user interface 800 facilitates generation of a report 801 to answer a question regarding used or provisioned capacity and the storage cost associated with such use or provisioning.
[0080] The user interface 800 includes an example of the report 801 generated for a number of storage consumers. For example, the report 801 includes a consumer identity 802, such as by business units, a tier level 804, a tier cost 806, a provisioned capacity 808, and a consumed capacity 810. The report 801 can be sorted by any of the above variables. The example interface 800 also includes a menu 811. The menu 811 includes additional ways to sort, filter, and organize the report 801. For example, the menu 811 can include sorting or filtering of the report 801 by a service application 812, a data center 814, a host 816, an internal volume 818, or a virtual machine 820, each of which can be a storage consumer account. The menu 811 can also include sorting or filtering of the report 801 by a protection type 822, a resource name 824, a resource type 826, a service cost 828, a storage object identifier 830, a storage access type 832, a storage pool identifier 834, or a specific containment level 836, each of which can be a storage object type or a storage object service type that defines the tier level 804. Here, for example, the specific containment level 836 enables the capacity accounting module 412 to sort the report 801 by the identifier of a specific storage object hierarchy level, such as a Q-tree.
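A minimal sketch of such on-the-fly filtering and sorting over normalized accounting rows is shown below; the column names are illustrative assumptions, and the real report 801 is driven by the interface module 414 rather than this code.

```python
from typing import Dict, List

def build_report(rows: List[Dict], filters: Dict[str, str],
                 sort_by: str = "tier_cost") -> List[Dict]:
    """Filter and sort normalized capacity accounting rows for a report.

    Each row is a dictionary with keys such as 'consumer', 'tier_level',
    'tier_cost', 'provisioned_capacity', and 'consumed_capacity'; `filters`
    maps a column name (e.g., 'data_center', 'protection_type') to a required
    value. Column names here are placeholders only.
    """
    selected = [row for row in rows
                if all(row.get(col) == val for col, val in filters.items())]
    return sorted(selected, key=lambda row: row.get(sort_by, 0), reverse=True)

rows = [
    {"consumer": "BU1", "tier_level": "gold", "tier_cost": 500, "data_center": "east"},
    {"consumer": "BU3", "tier_level": "bronze", "tier_cost": 50, "data_center": "west"},
]
print(build_report(rows, filters={"data_center": "east"}))
```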
[0081] The user interface 800 can be accessed in a variety of ways. For example, configuration and generation of the report 801 is available to storage provider and storage consumer administrators in at least three ways:
• Public API - accessing the capacity accountability system 102 directly through the API module 416, such as via database queries or custom interface messages.
• Pre-specified reports - receiving automatically generated versions of the report 801 from the capacity accountability system 102 pre-configured for the storage administrators.
• Drag-And-Drop reports - configuring the report 801 through the interface module 414 by selecting specific filters and sorting variables as described above to create the report 801 on the fly.
[0082] The capacity accountability system 102 supports multi-tenancy of storage consumer administrators, limiting a storage consumer administrator user access only to the capacity-related data which was made available for the storage consumer administrator user by the storage provider administrator. The multi-tenancy is achieved by creating groups that include business entities at different levels of hierarchy (which can be a tenant, line of business, business unit, or project) and assigning the storage consumer administrator user to certain groups.
[0083] Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense.
[0084] Therefore, it is manifestly intended that embodiments of this invention be limited only by the following claims and equivalents thereof.

Claims

What is claimed is:
1. A method comprising:
ascertaining, in a data processing system, a set of heterogeneous storage objects provisioned for a storage consumer, the heterogeneous storage objects categorized by storage object hierarchy levels;
identifying, in the data processing system, an association between the storage consumer and a storage object hierarchy level; and
accounting for storage object consumption of the storage consumer, in the data processing system, by normalizing storage consumption data at the storage object hierarchy level across the set of the heterogeneous storage objects.
2. The method of claim 1, further comprising accounting for storage capacity allocation of the storage consumer by normalizing storage capacity allocation data at the storage object hierarchy level across the heterogeneous storage objects.
3. The method of claim 2, further comprising calculating an idle capacity of the storage consumer based on the accounting of storage object consumption and the accounting of storage capacity allocation for the storage consumer.
4. The method of claim 2, wherein accounting for storage capacity allocation of the storage consumer includes accounting a single storage capacity allocation for a plurality of application services sharing storage space on a first storage object.
5. The method of claim 1, wherein ascertaining the set of heterogeneous storage objects includes determining the set by using a relationship data structure of storage consumer accounts and managed storage objects, the managed storage objects including the set of the heterogeneous storage objects.
6. The method of claim 1,
wherein normalizing the storage consumption data includes discounting a first consumption data of a first storage object when a second consumption data of a second storage object has been accounted for; and
wherein the first storage object and the second storage object are within a same branch of a storage object hierarchy.
7. The method of claim 1, further comprising calculating a storage usage trend based on the accounting of storage object consumption.
8. The method of claim 7, further comprising determining a modification suggestion to a capacity allocation provision of the storage consumer based on the storage usage trend.
9. The method of claim 8, wherein determining the modification includes determining the modification based on a replication service frequency of a first storage object in the set of the heterogeneous storage objects.
10. The method of claim 8, wherein determining the modification includes determining the modification based on a utilization frequency of a first storage object in the set of the heterogeneous storage objects.
11. The method of claim 1, wherein accounting for storage object consumption of the storage consumer includes accounting a single storage object consumption for a plurality of application services sharing storage space on a first storage object.
12. The method of claim 1, wherein accounting for storage object consumption of the storage consumer includes accounting a single storage capacity consumption for a plurality of storage hosts mapping to a first storage object.
13. A method comprising:
determining a storage content relationship between a primary storage object and a replicated storage object of the heterogeneous storage objects, wherein the primary storage object and the replicated storage object are updated based on a same storage content;
generating, in a data processing system, a relationship data structure of storage consumer accounts and heterogeneous storage objects based on the storage content relationship;
determining, in the data processing system, a storage tier label for each of the heterogeneous storage objects based on a storage object service type and a storage object technology type, the storage tier label associated with a storage cost; and
generating, in the data processing system, a storage cost accounting of a storage consumer by traversing the relationship data structure based on the storage tier label.
14. The method of claim 13, wherein the storage cost accounting includes a storage cost associated with the storage content.
15. The method of claim 13, further comprising accounting for storage object consumption of the storage consumer across the heterogeneous storage objects.
16. The method of claim 15, further comprising determining a storage consumption pattern based on the accounting of the storage object consumption.
17. The method of claim 16, further comprising determining a potential new storage tier for a first storage object of the storage consumer to reduce an original storage cost of the first storage object by determining the potential new storage tier, at a reduced storage cost compared to the original storage cost, that can satisfy the storage consumption pattern.
18. A processing system comprising:
a processor;
a computer-readable storage medium storing modules executable by the processor, the modules including:
a database module configured to, when executed by the processor:
ascertain a set of heterogeneous storage objects provisioned for a storage consumer, the heterogeneous storage objects categorized by storage object hierarchy levels;
identify an association between the storage consumer and a storage object hierarchy level; and
a capacity accounting module configured to, when executed by the processor, perform a storage accounting of storage object consumption of the storage consumer by normalizing storage consumption data at the storage object hierarchy level across the heterogeneous storage objects.
19. The processing system of claim 18, wherein the capacity accounting module is configured to perform the storage accounting including storage capacity allocation of the storage consumer by normalizing storage capacity allocation data at the storage object hierarchy level across the heterogeneous storage objects.
20. The processing system of claim 18, further comprising a network module configured to provide an application programming interface to access the capacity accounting module to generate a storage accounting report.
PCT/US2014/013025 2013-03-12 2014-01-24 Capacity accounting for heterogeneous storage systems WO2014158326A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP14775322.2A EP2973064A4 (en) 2013-03-12 2014-01-24 Capacity accounting for heterogeneous storage systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/796,847 US9396459B2 (en) 2013-03-12 2013-03-12 Capacity accounting for heterogeneous storage systems
US13/796,847 2013-03-12

Publications (1)

Publication Number Publication Date
WO2014158326A1 true WO2014158326A1 (en) 2014-10-02

Family

ID=51533309

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/013025 WO2014158326A1 (en) 2013-03-12 2014-01-24 Capacity accounting for heterogeneous storage systems

Country Status (3)

Country Link
US (2) US9396459B2 (en)
EP (1) EP2973064A4 (en)
WO (1) WO2014158326A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI514250B (en) * 2013-11-18 2015-12-21 Synology Inc Method for managing a storage system, and associated apparatus and associated computer program product
US9367421B2 (en) * 2013-12-20 2016-06-14 Netapp, Inc. Systems, methods, and computer programs products providing relevant correlation of data source performance
US9864658B1 (en) * 2014-12-01 2018-01-09 Vce Company, Llc Automation of deduplication storage capacity sizing and trending analysis
US9934236B2 (en) * 2015-02-23 2018-04-03 International Business Machines Corporation Streamlining data deduplication
US9665534B2 (en) 2015-05-27 2017-05-30 Red Hat Israel, Ltd. Memory deduplication support for remote direct memory access (RDMA)
US10169139B2 (en) * 2016-09-15 2019-01-01 International Business Machines Corporation Using predictive analytics of natural disaster to cost and proactively invoke high-availability preparedness functions in a computing environment
US10467112B2 (en) * 2017-11-09 2019-11-05 Bank Of America Corporation Distributed data monitoring device
US10834190B2 (en) * 2018-01-18 2020-11-10 Portworx, Inc. Provisioning of clustered containerized applications
US10977081B2 (en) * 2019-02-20 2021-04-13 International Business Machines Corporation Context aware container management
CN110347675A (en) * 2019-06-05 2019-10-18 阿里巴巴集团控股有限公司 A kind of date storage method and device
US10970309B2 (en) 2019-06-05 2021-04-06 Advanced New Technologies Co., Ltd. Data storage method and apparatus
US12019867B2 (en) * 2020-09-22 2024-06-25 International Business Machines Corporation Storage tiering within a unified storage environment
US20220398282A1 (en) * 2021-06-10 2022-12-15 Fidelity Information Services, Llc Systems and methods for multi-vendor storage infrastructure in a dashboard
CN118428909B (en) * 2024-07-04 2024-09-13 成都中智游科技有限公司 Intelligent management system for text travel construction based on big data

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2801697B1 (en) * 1999-11-26 2002-01-25 Bull Sa METHOD OF ACCESSING VARIOUS PROTOCOLS TO OBJECTS OF A TREE REPRESENTATIVE OF AT LEAST ONE SYSTEM RESOURCE
US8271457B2 (en) * 2000-11-22 2012-09-18 Bmc Software, Inc. Database management system and method which monitors action results and adjusts user parameters in response
US7778899B2 (en) * 2003-05-19 2010-08-17 Serena Software, Inc. Method and system for object-oriented workflow management of multi-dimensional data
US20100088296A1 (en) * 2008-10-03 2010-04-08 Netapp, Inc. System and method for organizing data to facilitate data deduplication
WO2010131292A1 (en) * 2009-05-13 2010-11-18 Hitachi, Ltd. Storage system and utilization management method for storage system
US8229936B2 (en) * 2009-10-27 2012-07-24 International Business Machines Corporation Content storage mapping method and system
US8364858B1 (en) * 2009-12-07 2013-01-29 Emc Corporation Normalizing capacity utilization within virtual storage pools
US8620962B1 (en) * 2012-02-21 2013-12-31 Netapp, Inc. Systems and methods for hierarchical reference counting via sibling trees
US9256622B2 (en) * 2012-12-21 2016-02-09 Commvault Systems, Inc. Systems and methods to confirm replication data accuracy for data backup in data storage systems
US9081683B2 (en) * 2013-02-08 2015-07-14 Nexenta Systems, Inc. Elastic I/O processing workflows in heterogeneous volumes

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7353358B1 (en) * 2004-06-30 2008-04-01 Emc Corporation System and methods for reporting storage utilization
US8296544B2 (en) * 2006-10-16 2012-10-23 Hitachi, Ltd. Storage capacity management system in dynamic area provisioning storage
US8332860B1 (en) * 2006-12-30 2012-12-11 Netapp, Inc. Systems and methods for path-based tier-aware dynamic capacity management in storage network environments
US8190583B1 (en) * 2008-01-23 2012-05-29 Netapp, Inc. Chargeback in a data storage system using data sets
US20120311260A1 (en) * 2011-06-02 2012-12-06 Hitachi, Ltd. Storage managing system, computer system, and storage managing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2973064A4 *

Also Published As

Publication number Publication date
EP2973064A4 (en) 2016-10-26
US9396459B2 (en) 2016-07-19
US20140280382A1 (en) 2014-09-18
EP2973064A1 (en) 2016-01-20
US10210192B2 (en) 2019-02-19
US20160292200A1 (en) 2016-10-06

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 14775322
    Country of ref document: EP
    Kind code of ref document: A1
WWE Wipo information: entry into national phase
    Ref document number: 2014775322
    Country of ref document: EP
NENP Non-entry into the national phase
    Ref country code: DE