US11435916B2 - Mapping of data storage system for a redundant array of independent nodes
- Publication number
- US11435916B2 (application US16/453,774 / US201916453774A)
- Authority
- US
- United States
- Prior art keywords
- node
- mapped
- group
- disks
- nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0629—Configuration or reconfiguration of storage systems
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2043—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share a common memory address space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2089—Redundant storage control functionality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0616—Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0617—Improving the reliability of storage systems in relation to availability
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
Definitions
- the disclosed subject matter relates to data storage and, more particularly, to mapping a redundant array of independent nodes of a storage system comprising at least one cluster of storage devices to provide high-access storage.
- SAN: storage area network
- NAS: network-attached storage
- ECS™: ELASTIC CLOUD STORAGE, provided by DELL EMC
- the example ECS™ system can comprise data storage devices, e.g., disks, etc., arranged in nodes, wherein nodes can be comprised in an ECS™ cluster.
- One use of data storage is in bulk data storage.
- Data can conventionally be stored in a group-of-nodes format for a given cluster; for example, in a conventional ECS™ system, all disks of the nodes comprising the group of nodes are considered part of the group.
- a node with many disks can, in some conventional embodiments, comprise a large amount of storage that can go underutilized.
- a storage group of five nodes, with ten disks per node, at 8 terabytes (TB) per disk, is roughly 400 TB in size.
- This can be excessively large for some types of data storage; however, apportioning smaller groups, e.g., fewer nodes, fewer disks, smaller disks, etc., can be inefficient in regard to processor and network resources, e.g., computer resource usage, to support these smaller groups.
- FIG. 1 illustrates a part of a cloud data storage system, in accordance with aspects of the subject disclosure.
- FIG. 2 illustrates an example of a system, which can facilitate storage of data in a mapped redundant array of independent nodes, in accordance with aspects of the subject disclosure.
- FIG. 3 illustrates an example of a system, which can facilitate storage of data in a mapped redundant array of independent nodes, in accordance with aspects of the subject disclosure.
- FIG. 4 illustrates an example of a real RAIN architecture 400 in accordance with one or more embodiments described herein.
- FIG. 5 illustrates an example of a real cluster 500 in accordance with one or more embodiments described herein.
- FIG. 6 illustrates an example of a data storage controller operational in a storage system 600 in accordance with one or more embodiments described herein.
- FIG. 7 depicts a diagram of an example, non-limiting computer implemented method that facilitates storage of data in a mapped redundant array of independent nodes.
- FIG. 8 depicts a diagram of an example, non-limiting computer implemented method that facilitates storage of data in a mapped redundant array of independent nodes.
- FIG. 9 depicts a diagram of an example, non-limiting computer implemented method that facilitates storage of data in a mapped redundant array of independent nodes.
- FIG. 10 depicts a diagram of an example, non-limiting computer implemented method that facilitates storage of data in a mapped redundant array of independent nodes.
- FIG. 11 illustrates a block diagram of an example computer operable to execute mapping of redundant array of independent nodes of a storage device.
- FIG. 12 is a schematic block diagram of a sample computing environment with which the disclosed subject matter can interact.
- data storage techniques can conventionally store data in one or more arrays of data storage devices.
- data can be stored in an ECS™ system such as is provided by DELL EMC.
- the example ECS™ system can comprise data storage devices, e.g., disks, etc., arranged in nodes, wherein nodes can be comprised in an ECS™ cluster.
- One use of data storage is in bulk data storage.
- Data can conventionally be stored in a group-of-nodes format for a given cluster; for example, in a conventional ECS™ system, all disks of the nodes comprising the group of nodes are considered part of the group.
- a node with many disks can, in some conventional embodiments, comprise a large amount of storage that can go underutilized.
- a mapped redundant array of independent nodes is hereinafter referred to as a mapped RAIN.
- a mapped RAIN can comprise a mapped cluster, wherein the mapped cluster comprises a logical arrangement of real storage devices.
- a real cluster(s), e.g., a group of real storage devices comprised in one or more hardware nodes, comprised in one or more clusters, can be defined to allow more granular use of the real cluster in contrast to conventional storage techniques.
- a mapped cluster can comprise nodes that provide data redundancy, which, in an aspect, can allow for failure of a portion of one or more nodes of the mapped cluster without loss of access to stored data, can allow for removal/addition of one or more nodes from/to the mapped cluster without loss of access to stored data, etc.
- a mapped cluster can comprise nodes having a data redundancy scheme analogous to a redundant array of independent disks (RAID) type-6, e.g., RAID6, also known as double-parity RAID, etc., wherein employing a node topology and two parity stripes on each node can allow for two node failures before any data of the mapped cluster becomes inaccessible, etc.
- a mapped cluster can employ other node topologies and parity techniques to provide data redundancy, e.g., analogous to RAID0, RAID1, RAID2, RAID3, RAID4, RAID5, RAID6, RAID0+1, RAID1+0, etc., wherein a node of a mapped cluster can comprise one or more disks, and the node can be loosely similar to a disk in a RAID system.
- an example mapped RAIN system can provide access to more granular storage in generally very large data storage systems, often on the order of terabytes, petabytes, exabytes, zettabytes, etc., or even larger, because each node can generally comprise a plurality of disks, unlike RAID technologies.
- software, firmware, etc. can hide the abstraction of mapping nodes in a mapped RAIN system, e.g., the group of nodes can appear to be a contiguous block of data storage even where, for example, it can be spread across multiple portions of one or more real disks, multiple real groups of hardware nodes (a real RAIN), multiple real clusters of hardware nodes (multiple real RAINs), multiple geographic locations, etc.
- a mapped RAIN can consist of up to N′ mapped nodes and manage up to M′ portions of disks of the constituent real nodes.
- one mapped node is expected to manage disks of different real nodes.
- disks of one real node are expected to be managed by mapped nodes of different mapped RAIN clusters.
- the use of two disks of one real node by different mapped nodes of one mapped cluster can be forbidden, to harden mapped RAIN clusters against a failure of one real node compromising two or more mapped nodes of one mapped RAIN cluster, e.g., a data loss event, etc.
- a portion of a real disk can be comprised in a real node that can be comprised in a real cluster and, furthermore, a portion of the real disk can correspond to a portion of a mapped disk, a mapped disk can comprise one or more portions of one or more real disks, a mapped node can comprise one or more portions of one or more real nodes, a mapped cluster can comprise one or more portions of one or more real clusters, etc., and, for convenience, the term RAIN can be omitted for brevity, e.g., a mapped RAIN cluster can be referred to simply as a mapped cluster, a mapped RAIN node can simply be referred to as a mapped node, etc., wherein ‘mapped’ is intended to convey a distinction from a corresponding real physical hardware component.
- N′ can be less than, or equal to, N
- M′ can be less than, or equal to, M.
- the mapped cluster can be smaller than the real cluster.
- the real cluster can accommodate one or more additional mapped clusters.
- the mapped cluster can provide finer granularity of the data storage system.
- where the real cluster is 8×8, e.g., 8 nodes by 8 disks,
- four mapped 4×4 clusters can be provided, wherein each of the four mapped 4×4 clusters is approximately 1/4th the size of the real cluster.
- mapped 2×2 clusters can be provided where each mapped cluster is approximately 1/16th the size of the real cluster.
- 2 mapped 4×8 or 8×4 clusters can be provided and each can be approximately 1/2 the size of the real cluster.
- the example 8×8 real cluster can provide a mix of different sized mapped clusters, for example one 8×4 mapped cluster, one 4×4 mapped cluster, and four 2×2 mapped clusters.
- not all of the real cluster must be comprised in a mapped cluster, e.g., an example 8×8 real cluster can comprise only one 2×4 mapped cluster with the rest of the real cluster not (yet) being allocated into mapped storage space.
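The sizing arithmetic in the examples above can be sketched in a few lines of Python. This is purely illustrative and not part of the patent disclosure; the function name and the assumption that mapped clusters tile the real cluster evenly are choices made for the example.

```python
# Minimal sketch (not from the patent) of the sizing arithmetic described above:
# how many equally sized mapped clusters of N' nodes x M' disks fit in an
# N x M real cluster, and what fraction of the real capacity each one uses.

def mapped_cluster_fraction(real_nodes, real_disks, mapped_nodes, mapped_disks):
    """Return (max count of such mapped clusters, capacity fraction per cluster)."""
    real_capacity_units = real_nodes * real_disks          # e.g., 8 x 8 = 64 disks
    mapped_capacity_units = mapped_nodes * mapped_disks    # e.g., 4 x 4 = 16 disks
    count = (real_nodes // mapped_nodes) * (real_disks // mapped_disks)
    fraction = mapped_capacity_units / real_capacity_units
    return count, fraction

if __name__ == "__main__":
    # 8x8 real cluster examples from the text
    for n, m in [(4, 4), (2, 2), (4, 8), (8, 4), (2, 4)]:
        count, fraction = mapped_cluster_fraction(8, 8, n, m)
        print(f"{n}x{m} mapped clusters: up to {count}, each ~{fraction:.4g} of real capacity")
    # 5 nodes x 10 disks x 8 TB storage group from the earlier example
    print("5x10x8TB group:", 5 * 10 * 8, "TB raw")
```

Running the sketch reproduces the fractions quoted above (1/4, 1/16, 1/2, and so on).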
- a mapped cluster can comprise storage space from more than one real cluster. In some embodiments, a mapped cluster can comprise storage space from real nodes in different geographical areas. In some embodiments, a mapped cluster can comprise storage space from more than one real cluster in more than one geographic location. As an example, a mapped cluster can comprise storage space from a cluster having hardware nodes in a data center in Denver. In a second example, a mapped cluster can comprise storage space from a first cluster having hardware nodes in a first data center in Denver and from a second cluster also having hardware nodes in the first data center in Denver.
- a mapped cluster can comprise storage space from both a cluster having hardware nodes in a first data center in Denver and a second data center in Denver.
- a mapped cluster can comprise storage space from a first cluster having hardware nodes in a first data center in Seattle, Wash., and a second data center having hardware nodes in Tacoma, Wash.
- a mapped cluster can comprise storage space from a first cluster having hardware nodes in a first data center in Houston, Tex., and a second cluster having hardware nodes in a data center in Moscow, Russia.
- FIG. 1 illustrates a part of a cloud data storage system such as ECS™ comprising a zone (e.g., cluster) 102 of storage nodes 104(1)-104(M), in which each node is typically a server configured primarily to serve objects in response to client requests (e.g., received from clients 108 ).
- the nodes 104(1)-104(M) can be coupled to each other via a suitable data communications link comprising interfaces and protocols such as, but not limited to, Ethernet block 106 .
- Clients 108 can send data system-related requests to the cluster 102 , which in general is configured as one large object namespace; there may be on the order of billions of objects maintained in a cluster, for example.
- a node such as the node 104(2) generally comprises ports 112 by which clients connect to the cloud storage system.
- Example ports are provided for requests via various protocols, including but not limited to SMB (server message block), FTP (file transfer protocol), HTTP/HTTPS (hypertext transfer protocol), and NFS (Network File System); further, SSH (secure shell) allows administration-related requests, for example.
- Each node, such as the node 104(2), includes an instance of an object storage system 114 and data services.
- At least one node, such as the node 104(2), includes or is coupled to reference tracking asynchronous replication logic 116 that synchronizes the cluster/zone 102 with each other remote GEO zone 118.
- ECS™ implements asynchronous low-level replication, that is, not object-level replication.
- organizations protect against outages or information loss by backing-up (e.g., replicating) their data periodically.
- one or more duplicate or deduplicated copies of the primary data are created and written to a new disk or to a tape, for example within a different zone.
- zone can refer to one or more clusters that is/are independently operated and/or managed. Different zones can be deployed within the same location (e.g., within the same data center) and/or at different geographical locations (e.g., within different data centers).
- disk space is partitioned into a set of large blocks of fixed size called chunks; user data is stored in chunks.
- Chunks are shared, that is, one chunk may contain segments of multiple user objects; e.g., one chunk may contain mixed segments of some number of (e.g., three) user objects.
- a chunk manager 120 can be utilized to manage the chunks and their protection (e.g., via erasure coding (EC)). Erasure coding was created as a forward error correction method for the binary erasure channel; however, erasure coding can also be used for data protection on data storage systems.
- the chunk manager 120 can partition a piece of data (e.g., a chunk) into k data fragments of equal size. During encoding, m redundant coding fragments are created so that the system can tolerate the loss of any m fragments. Typically, the chunk manager 120 can assign indices to the data fragments (and corresponding coding fragments).
- an index can be a numerical value (e.g., 1 to k) that is utilized for erasure coding.
- the index of a data fragment can be utilized to determine a coefficient, within an erasure coding matrix, which is to be combined (e.g., multiplied) with the data fragment to generate a corresponding coding fragment for the chunk.
- an index value can specify a row and/or column of the coefficient within the erasure coding matrix.
- the indices can be assigned based on a defined sequence, in a random order, based on a defined criterion (e.g., to increase probability of complementary data fragments), based on operator preferences, etc.
- the process of coding fragments creation is called encoding.
- the process of data fragments recovery using available data and coding fragments is called decoding.
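As a purely illustrative aside (not the patent's implementation), the encoding/decoding vocabulary above can be shown with the simplest possible code: a single XOR parity fragment, i.e., k data fragments and m = 1. Practical k+m schemes (such as the 10+2 scheme mentioned later) use a coding matrix so that any m lost fragments can be recovered, but the terminology is the same.

```python
# Deliberately simplified sketch of encoding/decoding with k data fragments
# and m = 1 coding fragment produced by XOR parity.
from functools import reduce

def encode(data_fragments: list[bytes]) -> bytes:
    """Create one coding fragment as the XOR of all equally sized data fragments."""
    assert len({len(f) for f in data_fragments}) == 1, "fragments must be equal size"
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_fragments)

def decode(fragments: list[bytes | None], coding: bytes) -> list[bytes]:
    """Recover at most one missing data fragment (None) from the survivors + parity."""
    missing = [i for i, f in enumerate(fragments) if f is None]
    assert len(missing) <= 1, "XOR parity tolerates the loss of only one fragment"
    if missing:
        survivors = [f for f in fragments if f is not None] + [coding]
        fragments[missing[0]] = encode(survivors)
    return fragments

if __name__ == "__main__":
    chunk = b"user data stored in a chunk!"          # toy "chunk"
    k = 4
    size = len(chunk) // k
    data = [chunk[i * size:(i + 1) * size] for i in range(k)]
    parity = encode(data)                             # encoding
    data[2] = None                                    # lose one data fragment
    recovered = decode(data, parity)                  # decoding
    assert b"".join(recovered) == chunk
```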
- GEO erasure coding can also be utilized, wherein if a distributed storage 100 is to tolerate the loss of any m zones/clusters/chunks, then GEO erasure coding can begin at each zone by replicating each new chunk to at least m remote zones. As a result, there are m backup copies of each chunk. Typically, there is one primary backup copy, which can be utilized for encoding. Encoding is performed by one zone for primary backup chunks and other zones replicate to it. Once a zone has k primary chunks replicated from different remote zones, the zone can perform encoding using the chunks replicated to it as data fragments. The chunk size is fixed in ECS™, with padding or other complementary data added as needed.
- the result of encoding is m data portions of a chunk size. They are stored as chunks of a specific type called coding chunks. After encoding is complete, the zone can store one coding chunk locally and move the other m−1 coding chunks to remote zones, making sure all the k+m data and coding chunks are stored at different zones whenever possible. Afterwards, the primary backup chunks used for encoding and their peer backup chunks at other zones can be deleted.
- the chunk manager 120 can efficiently generate combined data protection sets while consolidating two or more erasure-coded data portions (e.g., normal/source chunks) that have reduced sets of data fragments.
- chunk manager 120 can verify that the two or more erasure-coded data portions are complementary (e.g., do not have data fragments with the same index) and perform a summing operation to combine their corresponding coding fragments to generate a combined protection set.
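A hedged sketch of the complementarity check and the summing operation follows. Treating the combination as a byte-wise XOR is an assumption that holds for codes that are linear over GF(2); the actual summing operation depends on the erasure code in use.

```python
# Two reduced erasure-coded portions can be combined only if they hold no
# data fragment with the same index; their coding fragments are then summed.

def are_complementary(indices_a: set[int], indices_b: set[int]) -> bool:
    """True when the two portions have no data-fragment index in common."""
    return indices_a.isdisjoint(indices_b)

def combine_coding_fragments(coding_a: bytes, coding_b: bytes) -> bytes:
    """Sum corresponding coding fragments (XOR, assuming a GF(2)-linear code)."""
    return bytes(x ^ y for x, y in zip(coding_a, coding_b))

# Example: portion A kept data fragments {1, 2, 5}, portion B kept {3, 4, 6};
# the index sets are disjoint, so their coding fragments may be combined.
assert are_complementary({1, 2, 5}, {3, 4, 6})
```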
- a CPU 122 and RAM 124 are shown for completeness; note that the RAM 124 can comprise at least some non-volatile RAM.
- the node includes storage devices such as disks 126 , comprising hard disk drives and/or solid-state drives.
- the storage devices can comprise volatile memory(s) or nonvolatile memory(s), or both volatile and nonvolatile memory(s). Examples of suitable types of volatile and non-volatile memory are described below with reference to FIG. 11 .
- the memory (e.g., data stores, databases, tables, etc.) of the subject systems and methods is intended to comprise, without being limited to, these and any other suitable types of memory.
- FIG. 2 illustrates an example of a system 200 , which can facilitate storage of data in a mapped redundant array of independent nodes, in accordance with aspects of the subject disclosure.
- System 200 can comprise a RAIN Data Storage System 202 , which can be embodied in a cluster storage system.
- RAIN Data Storage System 202 can be embodied in a real cluster storage system comprising one or more hardware nodes that each comprise one or more storage devices, e.g., hard disks, optical storage, solid state storage, etc.
- RAIN Data Storage System 202 can receive data for storage in a mapped cluster, e.g., data for storage in mapped RAIN cluster storage system.
- the data can be stored by portions of the one or more storage devices of RAIN Data Storage System 202 according to a logical mapping of the storage space, e.g., according to one or more mapped clusters.
- a mapped cluster can be a logical allocation of storage space of RAIN Data Storage System 202 .
- a portion of a real disk can be comprised in a real node that can be comprised in a real cluster and, furthermore, a portion of the real disk can correspond to a portion of a mapped disk, a mapped disk can comprise one or more portions of one or more real disks, a mapped node can comprise one or more portions of one or more real nodes, a mapped cluster can comprise one or more portions of one or more real clusters, etc.
- RAIN Data Storage System 202 can support a mapped cluster enabling data to be stored on one or more disks, e.g., first disk component 240 through M-th disk component 248 of a first cluster node component 230, first disk component 260 through M-th disk component 268 of a second cluster node component 250, through first disk component 280 through M-th disk component 288 of N-th cluster node component 270 of first cluster storage component (CSC) 210, through disks corresponding to CSCs of L-th cluster storage component 218, according to a mapped cluster schema.
- the first CSC 210 comprises multiple nodes (e.g., the first cluster node component 230 through the N-th cluster node component 270) that are connected to each other.
- This architecture is referred to as Share-Nothing (SN) architecture.
- the SN architecture mapping provides that none of the node components (e.g., 230, 250, and 270) has direct access to the disks (e.g., first disk component 240 through M-th disk component 248) of the other connected nodes.
- FIG. 3 illustrates an example of a system 300 , which can facilitate storage of data in a mapped redundant array of independent nodes, in accordance with aspects of the subject disclosure.
- System 300 can comprise a RAIN Data Storage System 302 , first cluster storage component 310 through L-th cluster storage component 318 .
- each CSC, e.g., the first CSC 310, comprises a first cluster node component 330, a second cluster node component 350 through an N-th cluster node component 370 (hereinafter referred to as a group of nodes) that are connected to each other.
- Each node of the group of nodes is connected to an enclosure 320, wherein the enclosure 320 comprises a first disk component 340, a second disk component 360 through an M-th disk component 380 (hereinafter referred to as a group of disks).
- a Share-Everything (SE) architecture is employed, wherein all nodes of the group of nodes have access to all disks of the group of disks from a shared pool (e.g., enclosure 320).
- first cluster node component 330 has access to all the disks. If the first cluster node component 330 fails, other nodes of the group of nodes can access the group of disks.
- all cluster components are divided into highly available/accessible (HA) pairs, wherein a subset of disks (e.g., 340 through 380 ) are associated with each HA pair of nodes.
- the HA architecture provides that node failure(s) and disk failure(s) are decoupled (e.g., nodes and disk fail independently).
- a pair of nodes is associated with each disk of the group of disks. Thus, in case of a node failure, there is still the failed node's counterpart (e.g., the remaining paired node) that can manage the disks associated with the failed node.
- a mapped RAIN (or mapped cluster) is built above a real cluster (RAIN).
- a real cluster comprises N nodes and each cluster node manages M disks. The N*M disks form a disk pool.
- a mapped RAIN can be built using disks from a disk pool.
- a mapped cluster may consist of N′ mapped nodes. Each mapped node may manage M′ disks allocated from a disk pool. One mapped node may manage disks of different real nodes.
- There is a mapping table that contains information about relationship between mapped nodes and disks from a disk pool.
- the mapped RAIN is expected to have a mapping layer. It can be a software and/or firmware layer, which uses a mapping table to tie the different components of a mapped RAIN together.
- the distribution of disks has a limitation, wherein two disks managed by one real node should not go to different mapped nodes of one mapped RAIN. This limitation is referred to as the Mapped RAIN limitation.
- M′ can be greater than M.
- N′ is fewer than N and one mapped node manages disks from 2 or more real nodes. Note that N′ should not be greater than N because the only way to do so is to violate the Mapped RAIN limitation.
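The Mapped RAIN limitation lends itself to a simple validation pass over a mapping table. The sketch below assumes a hypothetical table format (mapped cluster and mapped node keys, lists of real node/disk pairs as values); the patent does not prescribe this representation.

```python
# Check the Mapped RAIN limitation: within one mapped cluster, two disks of
# the same real node must not be assigned to different mapped nodes.

# mapping[(mapped_cluster_id, mapped_node_id)] = list of (real_node, real_disk) pairs
MappingTable = dict[tuple[int, int], list[tuple[int, int]]]

def violates_mapped_rain_limitation(mapping: MappingTable) -> bool:
    seen: dict[tuple[int, int], int] = {}   # (mapped_cluster, real_node) -> mapped_node
    for (cluster, mapped_node), disks in mapping.items():
        for real_node, _disk in disks:
            prior = seen.setdefault((cluster, real_node), mapped_node)
            if prior != mapped_node:
                return True                  # same real node feeds two mapped nodes
    return False

# A 2-mapped-node cluster where both mapped nodes draw disks from real node 1 -> violation.
bad = {(0, 1): [(1, 1), (1, 2)], (0, 2): [(1, 3), (2, 1)]}
ok = {(0, 1): [(1, 1), (1, 2)], (0, 2): [(2, 1), (2, 2)]}
assert violates_mapped_rain_limitation(bad) and not violates_mapped_rain_limitation(ok)
```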
- one real RAIN may accommodate a plurality of mapped clusters. These mapped clusters may have different configurations (N′×M′) and may use different protection schemes (e.g., erasure coding 10+2 or triple mirroring). A mapped RAIN may protect data similarly to the way a traditional cluster does.
- FIG. 4 illustrates an example of a real RAIN architecture 400 in accordance with one or more embodiments described herein. Repetitive description of like elements employed in respective embodiments is omitted for sake of brevity.
- the exemplary RAIN architecture 400 comprises a cluster storage construct 402 having 8 nodes by 8 disks. Each node manages 8 disks. A disk is identified with a pair node#.disk# (e.g., disk 2.5).
- a mapped cluster 460 shows how a mapped RAIN 4×4 can be built above the real one. Grey disks (e.g., disk 1.5, disk 2.3, disk 2.4, disk 2. …) indicate the disks allocated to the mapped cluster 460.
- the mapped cluster 460 comprises 4 mapped nodes. Each mapped node manages 4 disks. The total capacity of the mapped RAIN is 1/4 of the total capacity of the real one. The disks are distributed between mapped nodes arbitrarily. Mapped node 1 manages disks from 3 real nodes. Mapped node 2 manages disks from 1 real node. Mapped nodes 3 and 4 manage disks from 2 real nodes each. It should be noted that there is no pair of mapped nodes that manage disks belonging to one real node.
- the HA mapped cluster can be created wherein all mapped nodes of a HA mapped cluster are divided into HA pairs of mapped nodes.
- Each HA pair of mapped nodes manages a group of disks. Allocation of disks for a HA pair and orchestration of storage services for a HA pair must guarantee that 1) storage services of mapped nodes from one HA pair run on different physical nodes; and 2) none of the disks managed by a HA pair of mapped nodes is connected to a physical node that runs storage services of a mapped node from the HA pair. Compliance with rules 1 and 2 fulfils the requirements for HA storage systems.
- 1) mapped node failure(s) and disk failure(s) are decoupled (e.g., mapped nodes and their disks fail independently); and 2) there is a pair of mapped nodes associated with each disk. Therefore, in case of a mapped node failure, there is still the failed mapped node's mate to manage the still available disks associated with the failed mapped node.
- two disks that are connected to one real node must not go to different mapped nodes of one mapped RAIN.
- where the mapped nodes form HA groups, e.g., HA pairs, two disks connected to one real node must not go to different HA groups of mapped nodes of one HA mapped RAIN.
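The two allocation rules can likewise be checked mechanically. The following sketch assumes a hypothetical data model in which each mapped node is associated with the physical node that runs its storage services, and the HA pair's disks are listed with the physical node they are attached to.

```python
# Check the two HA allocation rules stated above:
# 1) the two mapped nodes of an HA pair must run on different physical nodes;
# 2) no disk managed by the HA pair may be attached to a physical node that
#    runs either mapped node of the pair.

def ha_pair_allocation_ok(host_of: dict[str, int],
                          pair: tuple[str, str],
                          pair_disks: list[tuple[int, int]]) -> bool:
    """host_of maps a mapped node to its physical node; pair_disks are
    (physical_node, disk) pairs managed by the HA pair."""
    a, b = pair
    if host_of[a] == host_of[b]:                               # rule 1
        return False
    hosts = {host_of[a], host_of[b]}
    return all(phys not in hosts for phys, _ in pair_disks)    # rule 2

# MN1 runs on real node 1, MN2 on real node 3; the pair manages disks attached
# to real nodes 2 and 4 only, so both rules hold (cf. FIG. 5).
assert ha_pair_allocation_ok({"MN1": 1, "MN2": 3}, ("MN1", "MN2"),
                             [(2, d) for d in range(1, 9)] + [(4, d) for d in range(1, 9)])
```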
- FIG. 5 illustrates an example of a real cluster 500 in accordance with one or more embodiments described herein. Repetitive description of like elements employed in respective embodiments is omitted for sake of brevity.
- the real cluster 500 comprises 8 real nodes (e.g., 512, 522, 532, 542, 552, 562, 572 and 582).
- a highly available/accessible (HA) mapped cluster comprises 4 mapped nodes MN 1 through MN 4 (e.g., 514, 534, 554 and 574).
- the 4 mapped nodes form HA pairs of mapped nodes, e.g., a first pair MN 1-MN 2 illustrated as white (514 and 534) and a second pair MN 3-MN 4 illustrated as dark (554 and 574).
- the first pair and second pair of HA mapped nodes run on different real nodes (e.g., MN 1 514 runs on node 1 512, MN 2 534 runs on node 3 532, MN 3 554 runs on node 5 552, and MN 4 574 runs on node 7 572).
- MN 1 514 manages group of disks 526 associated with node 2 522.
- MN 2 534 manages group of disks 546 associated with node 4 542.
- MN 3 554 manages group of disks 566 associated with node 6 562.
- MN 4 574 manages group of disks 586 associated with node 8 582.
- the first pair of MN 1-MN 2 (e.g., 514 and 534) manages group of disks 526 associated with real node 2 522 and group of disks 546 associated with real node 4 542.
- the second pair of MN 3-MN 4 (e.g., 554 and 574) manages group of disks 566 associated with real node 6 562 and group of disks 586 associated with real node 8 582.
- Failure of one mapped node, e.g., failure of real node 1 512 that runs MN 1 514, does not make the disks mapped to the failed mapped node, e.g., group of disks 526 managed by MN 1 514, inaccessible, because there is a paired mapped node to manage the disks, e.g., MN 2 534, which runs on node 3 532 and is connected to group of disks 526.
- the group of disks managed by MN 1 514 remains accessible through the paired mapped node MN 2 534. Similar accessibility is available in the event of failure of other real nodes that run mapped nodes.
- Likewise, the group of disks managed by MN 3 554, e.g., group of disks 566, remains accessible through the paired mapped node MN 4 574.
- FIG. 6 illustrates an example of a data storage controller 602 operational in a storage system 600 in accordance with one or more embodiments described herein. Repetitive description of like elements employed in respective embodiments is omitted for sake of brevity.
- the data storage controller 602 comprises a mapped cluster component 610 , a node failure detecting component 612 , a processor 606 , and memory 604 that are communicatively coupled to each other via a bus 608 .
- the data storage controller 602 generates the mapping of the nodes and disks to provide the HA storage system.
- the mapped cluster component 610 generates a first configuration of a storage cluster (e.g., as described above), wherein the storage cluster comprises a group of nodes and a group of disks.
- the mapped cluster component 610 further generates a second configuration of the storage cluster (e.g., as described in FIG. 4 and FIG. 5 , above) using the first configuration, wherein the group of nodes are divided into a first pair of nodes comprising a first node having access to a first group of disks and a second node having access to a second group of disks.
- the mapped cluster component 610 further generates a third configuration of the storage cluster (e.g., as described FIG. 5 , above) using the second configuration, wherein the first node comprises a first mapped node that manages the first group of disks of the first node and enables access to the second group of disks of the second node.
- the data storage controller 602 can further comprise a node failure detecting component 612 that can detect an access failure of the first mapped node and/or second mapped node.
- the node failure detecting component 612 in response to the detecting the access failure of the first mapped node, can indicate the failure to the mapped cluster component 610 , wherein the mapped cluster component 610 uses the second mapped node to access the first group of disks.
- the node failure detecting component 612 in response to the detecting the access failure of the first mapped node, can indicate the failure to the mapped cluster component 610 , wherein the mapped cluster component 610 uses the second mapped node to access the second group of disks.
- the node failure detecting component 612 in response to the detecting the access failure of the second node, can indicate the failure to the mapped cluster component 610 , wherein the mapped cluster component 610 uses the first mapped node to access the second group of disks associated with the second node.
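To make the failover behavior concrete, the sketch below models, under assumed names and data structures (they are not the patent's code), how a controller in the spirit of mapped cluster component 610 and node failure detecting component 612 could route disk access through the surviving mapped node of an HA pair.

```python
# Illustrative only: on detecting the failure of one mapped node of an HA pair,
# route disk access through the surviving paired mapped node.
from dataclasses import dataclass

@dataclass
class MappedNode:
    name: str
    disks: list[str]            # disks this mapped node manages
    failed: bool = False

@dataclass
class HaPair:
    first: MappedNode
    second: MappedNode

    def node_for_disk(self, disk: str) -> MappedNode:
        """Pick the mapped node used to access a disk, failing over to its mate."""
        owner, mate = ((self.first, self.second) if disk in self.first.disks
                       else (self.second, self.first))
        return mate if owner.failed else owner

pair = HaPair(MappedNode("MN1", ["2.1", "2.2"]), MappedNode("MN2", ["4.1", "4.2"]))
pair.first.failed = True                        # access failure of the first mapped node
assert pair.node_for_disk("2.1").name == "MN2"  # the paired node serves MN1's disks
```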
- processor 606 can constitute machine-executable component(s) embodied within machine(s), e.g., embodied in one or more computer readable mediums (or media) associated with one or more machines. Such component(s), when executed by the one or more machines, e.g., computer(s), computing device(s), virtual machine(s), etc. can cause the machine(s) to perform the operations described herein.
- memory 604 can store computer executable components and instructions. It is noted that the memory 604 can comprise volatile memory(s) or nonvolatile memory(s), or can comprise both volatile and nonvolatile memory(s). Examples of suitable types of volatile and non-volatile memory are described below with reference to FIG. 11.
- the memory (e.g., data stores, databases) 604 of the subject systems and methods is intended to comprise, without being limited to, these and any other suitable types of memory.
- FIG. 7 depicts a diagram of an example, non-limiting computer implemented method that facilitates storage of data in a mapped redundant array of independent nodes. Repetitive description of like elements employed in other embodiments described herein is omitted for sake of brevity.
- flow diagram 700 can be implemented by operating environment 1100 described below. It can be appreciated that the operations of flow diagram 700 can be implemented in a different order than is depicted.
- a computing device (or system) (e.g., computer 1112 ) is provided, the device or system comprising one or more processors and one or more memories that stores executable instructions that, when executed by the one or more processors, can facilitate performance of the operations as described herein, including the non-limiting methods as illustrated in the flow diagrams of FIG. 7 .
- Operation 702 depicts generating, by a system comprising a processor, a first mapping of a storage cluster, wherein the storage cluster comprises a group of nodes, where a node of the group of nodes comprises a group of disks.
- Operation 704 depicts generating a second mapping of the storage cluster using the first mapping, wherein the group of nodes are divided into a first pair of nodes comprising a first node comprising a first group of disks and a second node comprising a second group of disks.
- Operation 706 depicts generating a third mapping of the storage cluster using the second mapping, wherein the first node comprises a first mapped node that manages the first group of disks of the first node and manages the second group of disks of the second node.
- FIG. 8 depicts a diagram of an example, non-limiting computer implemented method that facilitates storage of data in a mapped redundant array of independent nodes. Repetitive description of like elements employed in other embodiments described herein is omitted for sake of brevity.
- flow diagram 800 can be implemented by operating environment 1100 described below. It can be appreciated that the operations of flow diagram 800 can be implemented in a different order than is depicted.
- a computing device (or system) (e.g., computer 1112 ) is provided, the device or system comprising one or more processors and one or more memories that stores executable instructions that, when executed by the one or more processors, can facilitate performance of the operations as described herein, including the non-limiting methods as illustrated in the flow diagrams of FIG. 8 .
- Operation 802 depicts generating, by a system comprising a processor, a first mapping of a storage cluster, wherein the storage cluster comprises a group of nodes, where a node of the group of nodes comprises a group of disks.
- Operation 804 depicts generating a second mapping of the storage cluster using the first mapping, wherein the group of nodes are divided into a first pair of nodes comprising a first node comprising a first group of disks and a second node comprising a second group of disks.
- Operation 806 depicts generating a third mapping of the storage cluster using the second mapping, wherein the first node comprises a first mapped node that manages the first group of disks of the first node and manages the second group of disks of the second node.
- Operation 808 depicts detecting an access failure of the first mapped node.
- Operation 810 depicts, in response to the detecting the access failure of the first mapped node, using, by the system, the second mapped node to access the first group of disks.
- FIG. 9 depicts a diagram of an example, non-limiting computer implemented method that facilitates storage of data in a mapped redundant array of independent nodes. Repetitive description of like elements employed in other embodiments described herein is omitted for sake of brevity.
- flow diagram 900 can be implemented by operating environment 1100 described below. It can be appreciated that the operations of flow diagram 900 can be implemented in a different order than is depicted.
- a computing device (or system) (e.g., computer 1112 ) is provided, the device or system comprising one or more processors and one or more memories that stores executable instructions that, when executed by the one or more processors, can facilitate performance of the operations as described herein, including the non-limiting methods as illustrated in the flow diagrams of FIG. 9 .
- Operation 902 depicts generating, by a system comprising a processor, a first mapping of a storage cluster, wherein the storage cluster comprises a group of nodes, where a node of the group of nodes comprises a group of disks.
- Operation 904 depicts generating a second mapping of the storage cluster using the first mapping, wherein the group of nodes are divided into a first pair of nodes comprising a first node comprising a first group of disks and a second node comprising a second group of disks.
- Operation 906 depicts generating a third mapping of the storage cluster using the second mapping, wherein the first node comprises a first mapped node that manages the first group of disks of the first node and manages the second group of disks of the second node.
- Operation 908 depicts detecting an access failure of the first mapped node.
- Operation 910 depicts, in response to the detecting the access failure of the first mapped node, employing the second mapped node to access the second group of disks.
- FIG. 10 depicts a diagram of an example, non-limiting computer implemented method that facilitates storage of data in a mapped redundant array of independent nodes. Repetitive description of like elements employed in other embodiments described herein is omitted for sake of brevity.
- flow diagram 1000 can be implemented by operating environment 1100 described below. It can be appreciated that the operations of flow diagram 1000 can be implemented in a different order than is depicted.
- a computing device (or system) (e.g., computer 1112 ) is provided, the device or system comprising one or more processors and one or more memories that stores executable instructions that, when executed by the one or more processors, can facilitate performance of the operations as described herein, including the non-limiting methods as illustrated in the flow diagrams of FIG. 10 .
- Operation 1002 depicts generating, by a system comprising a processor, a first mapping of a storage cluster, wherein the storage cluster comprises a group of nodes, where a node of the group of nodes comprises a group of disks.
- Operation 1004 depicts generating a second mapping of the storage cluster using the first mapping, wherein the group of nodes are divided into a first pair of nodes comprising a first node comprising a first group of disks and a second node comprising a second group of disks.
- Operation 1006 depicts generating a third mapping of the storage cluster using the second mapping, wherein the first node comprises a first mapped node that manages the first group of disks of the first node and manages the second group of disks of the second node.
- Operation 1008 depicts detecting an access failure of the second node.
- Operation 1010 depicts, in response to the detecting the access failure of the second node, using the first mapped node to access the second group of disks associated with the second node.
- FIG. 11 illustrates a block diagram of an example computer operable to execute mapping of redundant array of independent nodes of a storage device.
- FIG. 11 and the following discussion are intended to provide a brief, general description of a suitable computing environment 1100 in which the various aspects of the specification can be implemented. While the specification has been described above in the general context of computer-executable instructions that can run on one or more computers, those skilled in the art will recognize that the specification also can be implemented in combination with other program modules and/or as a combination of hardware and software.
- program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
- inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
- the illustrated aspects of the specification can also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
- program modules can be located in both local and remote memory storage devices.
- Computer-readable storage media can be any available storage media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media.
- Computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data.
- Computer-readable storage media can include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information.
- Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium.
- Communications media typically embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal such as a modulated data signal, (e.g., a carrier wave or other transport mechanism), and includes any information delivery or transport media.
- modulated data signal or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals.
- communication media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared and other wireless media.
- RF radio frequency
- Computer 1112 comprises a processing unit 1114 , a system memory 1116 , and a system bus 1118 .
- the component(s), server(s), client(s), node(s), cluster(s), system(s), zone(s), module(s), agent(s), engine(s), manager(s), and/or device(s) disclosed herein with respect to systems 400 - 900 can each include at least a portion of the computing system 1100 .
- System bus 1118 couples system components comprising, but not limited to, system memory 1116 to processing unit 1114 .
- Processing unit 1114 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as processing unit 1114 .
- System bus 1118 can be any of several types of bus structure(s) comprising a memory bus or a memory controller, a peripheral bus or an external bus, and/or a local bus using any variety of available bus architectures comprising, but not limited to, industrial standard architecture (ISA), micro-channel architecture (MSA), extended ISA (EISA), intelligent drive electronics (IDE), VESA local bus (VLB), peripheral component interconnect (PCI), card bus, universal serial bus (USB), advanced graphics port (AGP), personal computer memory card international association bus (PCMCIA), Firewire (IEEE 1394), small computer systems interface (SCSI), and/or controller area network (CAN) bus used in vehicles.
- System memory 1116 comprises volatile memory 1120 and nonvolatile memory 1122 .
- nonvolatile memory 1122 can comprise ROM, PROM, EPROM, EEPROM, or flash memory.
- Volatile memory 1120 comprises RAM, which acts as external cache memory.
- RAM is available in many forms such as SRAM, dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
- Disk storage 1124 comprises, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick.
- disk storage 1124 can comprise storage media separately or in combination with other storage media comprising, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM).
- a removable or non-removable interface is typically used, such as interface 1126 .
- FIG. 11 describes software that acts as an intermediary between users and computer resources described in suitable operating environment 1100 .
- Such software comprises an operating system 1128 .
- Operating system 1128 which can be stored on disk storage 1124 , acts to control and allocate resources of computer system 1112 .
- System applications 1130 take advantage of the management of resources by operating system 1128 through program modules 1132 and program data 1134 stored either in system memory 1116 or on disk storage 1124 . It is to be appreciated that the disclosed subject matter can be implemented with various operating systems or combinations of operating systems.
- a user can enter commands or information into computer 1112 through input device(s) 1136 .
- Input devices 1136 comprise, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, cellular phone, user equipment, smartphone, and the like.
- These and other input devices connect to processing unit 1114 through system bus 1118 via interface port(s) 1138 .
- Interface port(s) 1138 comprise, for example, a serial port, a parallel port, a game port, a universal serial bus (USB), a wireless based port, e.g., Wi-Fi, Bluetooth®, etc.
- Output device(s) 1140 use some of the same type of ports as input device(s) 1136 .
- a USB port can be used to provide input to computer 1112 and to output information from computer 1112 to an output device 1140 .
- Output adapter 1142 is provided to illustrate that there are some output devices 1140 , like display devices, light projection devices, monitors, speakers, and printers, among other output devices 1140 , which use special adapters.
- Output adapters 1142 comprise, by way of illustration and not limitation, video and sound devices, cards, etc. that provide means of connection between output device 1140 and system bus 1118 . It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 1144 .
- Computer 1112 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1144 .
- Remote computer(s) 1144 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device, or other common network node and the like, and typically comprises many or all of the elements described relative to computer 1112 .
- Network interface 1148 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN).
- LAN technologies comprise fiber distributed data interface (FDDI), copper distributed data interface (CDDI), Ethernet, token ring and the like.
- WAN technologies comprise, but are not limited to, point-to-point links, circuit switching networks like integrated services digital networks (ISDN) and variations thereon, packet switching networks, and digital subscriber lines (DSL).
- Communication connection(s) 1150 refer(s) to hardware/software employed to connect network interface 1148 to bus 1118 . While communication connection 1150 is shown for illustrative clarity inside computer 1112 , it can also be external to computer 1112 .
- the hardware/software for connection to network interface 1148 can comprise, for example, internal and external technologies such as modems, comprising regular telephone grade modems, cable modems and DSL modems, wireless modems, ISDN adapters, and Ethernet cards.
- the computer 1112 can operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, cellular based devices, user equipment, smartphones, or other computing devices, such as workstations, server computers, routers, personal computers, portable computers, microprocessor-based entertainment appliances, peer devices or other common network nodes, etc.
- the computer 1112 can connect to other devices/networks by way of antenna, port, network interface adaptor, wireless access point, modem, and/or the like.
- the computer 1112 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, user equipment, cellular base device, smartphone, any piece of equipment or location associated with a wirelessly detectable tag (e.g., scanner, a kiosk, news stand, restroom), and telephone.
- This comprises at least Wi-Fi and Bluetooth® wireless technologies.
- the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.
- the computing system 1100 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., desktop and/or portable computer, server, communications satellite, etc. This includes at least Wi-Fi and Bluetooth® wireless technologies.
- the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.
- FIG. 12 is a schematic block diagram of a sample computing environment 1200 with which the disclosed subject matter can interact.
- the sample computing environment 1200 includes one or more client(s) 1202 .
- the client(s) 1202 can be hardware and/or software (e.g., threads, processes, computing devices).
- the sample computing environment 1200 also includes one or more server(s) 1204 .
- the server(s) 1204 can also be hardware and/or software (e.g., threads, processes, computing devices).
- the servers 1204 can house threads to perform transformations by employing one or more embodiments as described herein, for example.
- One possible communication between a client 1202 and a server 1204 can be in the form of a data packet adapted to be transmitted between two or more computer processes.
- the sample computing environment 1200 includes a communication framework 1206 that can be employed to facilitate communications between the client(s) 1202 and the server(s) 1204 .
- the client(s) 1202 are operably connected to one or more client data store(s) 1208 that can be employed to store information local to the client(s) 1202 .
- the server(s) 1204 are operably connected to one or more server data store(s) 1210 that can be employed to store information local to the servers 1204 .
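- By way of a hedged illustration only, the following sketch mimics the FIG. 12 topology with a plain TCP socket standing in for communication framework 1206 and in-memory dictionaries standing in for client data store 1208 and server data store 1210; every name in the snippet is hypothetical and not part of the disclosed system.

```python
import json
import socket
import threading

# Hypothetical stand-ins for server data store 1210 and client data store 1208.
server_data_store = {}
client_data_store = {}

def server(sock: socket.socket) -> None:
    """Accept one client, apply a simple transformation, and store the result locally."""
    conn, _addr = sock.accept()
    with conn:
        packet = json.loads(conn.recv(4096).decode())        # data packet from the client
        result = {"key": packet["key"], "value": packet["value"].upper()}
        server_data_store[result["key"]] = result["value"]   # information local to the server
        conn.sendall(json.dumps(result).encode())

def main() -> None:
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))    # ephemeral localhost port as the "communication framework"
    listener.listen(1)
    port = listener.getsockname()[1]

    worker = threading.Thread(target=server, args=(listener,))
    worker.start()

    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(json.dumps({"key": "chunk-1", "value": "mapped"}).encode())
        reply = json.loads(client.recv(4096).decode())
        client_data_store[reply["key"]] = reply["value"]      # information local to the client

    worker.join()
    listener.close()
    print(client_data_store)    # {'chunk-1': 'MAPPED'}

if __name__ == "__main__":
    main()
```
- In this sketch the server-side thread performs the transformation and both sides persist the exchanged result in their own data stores, mirroring the client/server data stores described above.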
- Wi-Fi (Wireless Fidelity) is a wireless technology similar to that used in a cell phone that enables such devices, e.g., computers, to send and receive data indoors and out, anywhere within the range of a base station.
- Wi-Fi networks use radio technologies called IEEE 802.11 (a, b, g, n, etc.) to provide secure, reliable, fast wireless connectivity.
- a Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3 or Ethernet).
- Wi-Fi networks operate in the unlicensed 5 GHz radio band at a 54 Mbps data rate (802.11a), and/or in the 2.4 GHz radio band at an 11 Mbps (802.11b), a 54 Mbps (802.11g), or up to a 600 Mbps (802.11n) data rate, for example, or with products that contain both bands (dual band), so the networks can provide real-world performance similar to the basic 10BaseT wired Ethernet networks used in many offices.
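- For readability, the nominal band and rate figures recited above can be collected into a small lookup; the snippet below merely restates those figures, and the function name is illustrative.

```python
# Nominal radio band and maximum data rate per IEEE 802.11 variant, as recited above.
WIFI_VARIANTS = {
    "802.11a": {"band_ghz": 5.0, "max_rate_mbps": 54},
    "802.11b": {"band_ghz": 2.4, "max_rate_mbps": 11},
    "802.11g": {"band_ghz": 2.4, "max_rate_mbps": 54},
    "802.11n": {"band_ghz": (2.4, 5.0), "max_rate_mbps": 600},  # dual-band capable
}

def nominal_rate(variant: str) -> int:
    """Return the nominal maximum data rate (Mbps) for an 802.11 variant."""
    return WIFI_VARIANTS[variant]["max_rate_mbps"]

print(nominal_rate("802.11g"))  # 54
```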
- processor can refer to substantially any computing processing unit or device comprising, but not limited to comprising, single-core processors; single-processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware multithread technology; parallel platforms; and parallel platforms with distributed shared memory in a single machine or multiple machines.
- a processor can refer to an integrated circuit, a state machine, an application specific integrated circuit (ASIC), a digital signal processor (DSP), a programmable gate array (PGA) including a field programmable gate array (FPGA), a programmable logic controller (PLC), a complex programmable logic device (CPLD), a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
- a processor may also be implemented as a combination of computing processing units.
- One or more processors can be utilized in supporting a virtualized computing environment.
- the virtualized computing environment may support one or more virtual machines representing computers, servers, or other computing devices.
- components such as processors and storage devices may be virtualized or logically represented.
- when a processor executes instructions to perform "operations," this can include the processor performing the operations directly and/or facilitating, directing, or cooperating with another device or component to perform the operations.
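- As a rough sketch of that distinction (assuming a hypothetical Processor class and using a thread pool as the cooperating component), an operation can either be run directly or handed off for cooperative execution:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable

class Processor:
    """Hypothetical processor that performs operations directly or delegates them."""

    def __init__(self) -> None:
        self._cooperating_component = ThreadPoolExecutor(max_workers=1)

    def perform(self, operation: Callable[[], Any], direct: bool = True) -> Any:
        if direct:
            # The processor performs the operation itself.
            return operation()
        # The processor facilitates/directs another component to perform the operation.
        return self._cooperating_component.submit(operation).result()

proc = Processor()
print(proc.perform(lambda: 2 + 2))                 # 4, executed directly
print(proc.perform(lambda: 2 + 2, direct=False))   # 4, delegated to another component
```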
- nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM), which acts as external cache memory.
- RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
- the disclosed memory components of systems or methods herein are intended to comprise, without being limited to comprising, these and any other suitable types of memory.
- program modules can be located in both local and remote memory storage devices.
- a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, computer-executable instruction(s), a program, and/or a computer.
- an application running on a controller and the controller can be a component.
- One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
- an interface can include input/output (I/O) components as well as associated processor, application, and/or API components.
- the terms “user,” “consumer,” “client,” and the like are employed interchangeably throughout the subject specification, unless context warrants particular distinction(s) among the terms. It is noted that such terms can refer to human entities or automated components/devices supported through artificial intelligence (e.g., a capacity to make inference based on complex mathematical formalisms), which can provide simulated vision, sound recognition and so forth.
- the various embodiments can be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement one or more aspects of the disclosed subject matter.
- An article of manufacture can encompass a computer program accessible from any computer-readable device or computer-readable storage/communications media.
- computer readable storage media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ).
- Artificial intelligence-based systems, e.g., utilizing explicitly and/or implicitly trained classifiers, can be employed in connection with performing inference and/or probabilistic determinations and/or statistical-based determinations in accordance with one or more aspects of the disclosed subject matter as described herein.
- an artificial intelligence system can be used to dynamically perform operations as described herein.
- Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to infer an action that a user desires to be automatically performed.
- attributes can be information received from access points, servers, components of a wireless communication network, etc.
- the classes can be categories or areas of interest (e.g., levels of priorities).
- a support vector machine is an example of a classifier that can be employed.
- the support vector machine operates by finding a hypersurface in the space of possible inputs, where the hypersurface attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data.
- Other directed and undirected model classification approaches, including, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence, can be employed. Classification as used herein can also be inclusive of statistical regression that is utilized to develop models of priority.
- artificial intelligence-based systems, components, etc. can employ classifiers that are explicitly trained, e.g., via generic training data, etc., as well as implicitly trained, e.g., via observing characteristics of communication equipment, e.g., a server, etc., receiving reports from such communication equipment, receiving operator preferences, receiving historical information, receiving extrinsic information, etc.
- support vector machines can be configured via a learning or training phase within a classifier constructor and feature selection module.
- the classifier(s) can be used by an artificial intelligence system to automatically learn and perform a number of functions.
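- As a concrete, assumption-laden illustration of such a learning/training phase, the sketch below fits a support vector machine on synthetic attribute vectors (stand-ins for information received from access points or servers) and then infers the class of a new observation; it uses the scikit-learn library and is not part of the disclosed system.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical attribute vectors (e.g., signal level and error rate reported by an access
# point or server); class 1 = triggering criteria, class 0 = non-triggering events.
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3],
                    [0.3, 0.8], [0.2, 0.7], [0.1, 0.9]])
y_train = np.array([1, 1, 1, 0, 0, 0])

# Learning/training phase: the SVM finds a hypersurface separating the two classes.
classifier = SVC(kernel="linear")
classifier.fit(X_train, y_train)

# Inference on an observation near, but not identical to, the training data.
X_new = np.array([[0.85, 0.15]])
print(classifier.predict(X_new))            # [1] -> inferred as triggering
print(classifier.decision_function(X_new))  # signed distance to the hypersurface, usable
                                            # in a utility/cost (probabilistic) analysis
```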
- the word “example” or “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion.
- the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances.
- the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Quality & Reliability (AREA)
- Computer Security & Cryptography (AREA)
- Computer Networks & Wireless Communication (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/453,774 US11435916B2 (en) | 2019-06-26 | 2019-06-26 | Mapping of data storage system for a redundant array of independent nodes |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/453,774 US11435916B2 (en) | 2019-06-26 | 2019-06-26 | Mapping of data storage system for a redundant array of independent nodes |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200409582A1 US20200409582A1 (en) | 2020-12-31 |
US11435916B2 true US11435916B2 (en) | 2022-09-06 |
Family
ID=74042864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/453,774 Active 2040-11-01 US11435916B2 (en) | 2019-06-26 | 2019-06-26 | Mapping of data storage system for a redundant array of independent nodes |
Country Status (1)
Country | Link |
---|---|
US (1) | US11435916B2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11349501B2 (en) * | 2020-02-27 | 2022-05-31 | EMC IP Holding Company LLC | Multistep recovery employing erasure coding in a geographically diverse data storage system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040243650A1 (en) * | 2003-06-02 | 2004-12-02 | Surgient, Inc. | Shared nothing virtual cluster |
US7383381B1 (en) * | 2003-02-28 | 2008-06-03 | Sun Microsystems, Inc. | Systems and methods for configuring a storage virtualization environment |
US20110202721A1 (en) * | 2010-02-12 | 2011-08-18 | Lsi Corporation | Redundant array of independent storage |
US20140047263A1 (en) * | 2012-08-08 | 2014-02-13 | Susan Coatney | Synchronous local and cross-site failover in clustered storage systems |
US20140317438A1 (en) * | 2013-04-23 | 2014-10-23 | Neftali Ripoll | System, software, and method for storing and processing information |
US20150169415A1 (en) * | 2013-12-13 | 2015-06-18 | Netapp, Inc. | Techniques to manage non-disruptive san availability in a partitioned cluster |
US20190034087A1 (en) * | 2017-07-26 | 2019-01-31 | Vmware, Inc. | Reducing data amplification when replicating objects across different sites |
Also Published As
Publication number | Publication date |
---|---|
US20200409582A1 (en) | 2020-12-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10594340B2 (en) | Disaster recovery with consolidated erasure coding in geographically distributed setups | |
US11023331B2 (en) | Fast recovery of data in a geographically distributed storage environment | |
US10715181B2 (en) | Facilitation of data deletion for distributed erasure coding | |
US10719250B2 (en) | System and method for combining erasure-coded protection sets | |
US10892782B2 (en) | Flexible system and method for combining erasure-coded protection sets | |
US10956276B2 (en) | System state recovery in a distributed, cloud-based storage system | |
US10732839B2 (en) | Scale-out storage system rebalancing | |
US11334521B2 (en) | System and method that determines a size of metadata-based system snapshots | |
US20200401316A1 (en) | Replication across partitioning schemes in a distributed storage system | |
WO2021113488A1 (en) | Creating a replica of a storage system | |
US20180074903A1 (en) | Processing access requests in a dispersed storage network | |
US10860256B2 (en) | Storing data utilizing a maximum accessibility approach in a dispersed storage network | |
US10642688B2 (en) | System and method for recovery of unrecoverable data with enhanced erasure coding and replication | |
US10776218B2 (en) | Availability-driven data recovery in cloud storage systems | |
US10768840B2 (en) | Updating protection sets in a geographically distributed storage environment | |
US11748004B2 (en) | Data replication using active and passive data storage modes | |
US11416338B2 (en) | Resiliency scheme to enhance storage performance | |
US10942827B2 (en) | Replication of data in a geographically distributed storage environment | |
US10938905B1 (en) | Handling deletes with distributed erasure coding | |
US11435916B2 (en) | Mapping of data storage system for a redundant array of independent nodes | |
US10880040B1 (en) | Scale-out distributed erasure coding | |
US10594792B1 (en) | Scale-out erasure coding | |
US10817374B2 (en) | Meta chunks | |
US10572191B1 (en) | Disaster recovery with distributed erasure coding | |
US10592478B1 (en) | System and method for reverse replication |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DANILOV, MIKHAIL;BUINOV, KONSTANTIN;REEL/FRAME:049657/0693 Effective date: 20190626 |
|
AS | Assignment |
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA Free format text: SECURITY AGREEMENT;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:050406/0421 Effective date: 20190917 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:050724/0571 Effective date: 20191010 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001 Effective date: 20200409 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:053311/0169 Effective date: 20200603 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825 Effective date: 20211101 Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825 Effective date: 20211101 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058213/0825 Effective date: 20211101 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088 Effective date: 20220329 Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0088 Effective date: 20220329 Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |