US20130151653A1 - Data management systems and methods - Google Patents
Data management systems and methods
- Publication number
- US20130151653A1 (U.S. application Ser. No. 13/492,633)
- Authority
- US
- United States
- Prior art keywords
- data
- storage
- data storage
- nodes
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/1443—Transmit or communication errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1658—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
- G06F11/1662—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit the resynchronized component or unit being a persistent storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
- G06F15/17306—Intercommunication techniques
- G06F15/17331—Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/176—Support for shared access to files; File sharing support
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0607—Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0644—Management of space entities, e.g. partitions, extents, pools
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/12—Arrangements for detecting or preventing errors in the information received by using return channel
- H04L1/16—Arrangements for detecting or preventing errors in the information received by using return channel in which the return channel carries supervisory signals, e.g. repetition request signals
- H04L1/18—Automatic repetition systems, e.g. Van Duuren systems
- H04L1/1809—Selective-repeat protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/142—Reconfiguring to eliminate the error
- G06F11/1425—Reconfiguring to eliminate the error by reconfiguration of node membership
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2094—Redundant storage or storage space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/815—Virtual
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2211/00—Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
- G06F2211/10—Indexing scheme relating to G06F11/10
- G06F2211/1002—Indexing scheme relating to G06F11/1076
- G06F2211/104—Metadata, i.e. metadata associated with RAID systems with parity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
Definitions
- the virtual storage controllers 218 - 222 are also referred to as “virtual controllers”.
- data is communicated between client devices 204 - 208 and storage nodes 212 - 216 without requesting an acknowledgement receipt from the receiving device.
- alternate systems and methods are provided to ensure proper communication of data to a receiving device.
- the storage node marks all of its data files as “dirty” at 704 and does not share the dirty data files with other storage nodes in the shared storage system. Marking a data file as “dirty” indicates that the data file may contain out-of-date information. Data files that contain current (i.e., up-to-date) information are typically marked as “clean” data files.
- a simple user mode application (u_ndfs.exe) is used for startup, maintenance, recovery, cleanup, VSP forming, LSE operations and the communication protocol; however, it will be seen that separate functionality could equally be implemented in separate applications.
- the VSP is suspended for that node until a quorum is reached.
- the networking protocol should remain aware of network failure and needs to perform an LSE rescan and recovery every time the node is reconnected to the network. The user should be alerted to expect access to the VSP when this happens.
- machine-readable medium 1622 is shown in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions or data structures.
- the term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions.
- the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Quality & Reliability (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Hardware Design (AREA)
- Computer Security & Cryptography (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Example data management systems and methods are described. In one implementation, a method restores data on a first data storage node that is part of a data storage system including multiple storage nodes. The method marks multiple data entries stored in the first data storage node as dirty. A data index associated with the data storage system is received from a quorum of the data storage nodes in the data storage system. The data index is compared with data entries stored in the first data storage node. Data entries that are not contained in the data index are deleted from the first data storage node. Data entries stored in the first data storage node are modified to match corresponding data entries in the data storage system based on the data index.
Description
- This application claims the priority benefit of U.S. Provisional Application Ser. No. 61/520,560, entitled “Data storage systems and methods”, filed Jun. 10, 2011, the disclosure of which is incorporated by reference herein in its entirety.
- This application also is a continuation-in-part of and claims the benefit of priority to U.S. patent application Ser. No. 12/143,134, filed on Jun. 20, 2008, which claims the benefit of priority under 35 U.S.C. §119 to Ireland Patent Application No. S2007/0453, filed on Jun. 22, 2007, the benefit of priority of each of which is claimed hereby, and each of which is incorporated by reference herein in its entirety.
- The present disclosure generally relates to data processing techniques and, more specifically, to systems and methods for storing and retrieving data.
-
FIG. 1 illustrates a traditional data storage model including one or more storage devices, such as hard disks, connected to a single storage controller. The storage controller is responsible for applying data redundancy (e.g., data duplication) and data consistency, as well as orchestrating concurrent data access, to ensure that there are no colliding file or disk operations when storing data to the storage devices. This type of storage controller is either hardware (e.g., a RAID (redundant array of independent disks) controller) or software (e.g., a network file server). As shown inFIG. 1 , multiple computing devices access the storage devices through a single storage controller. - The single storage controller model shown in
FIG. 1 has potential drawbacks, such as the creation of a bottleneck since all data activities are directed through the single storage controller. As additional computing devices are connected to the storage controller, more bandwidth is generally required. Further, as more storage devices are connected to the storage controller, additional processing power is generally required to calculate the data redundancy and perform other functions. The single storage controller model also represents a single point of failure. Even with multiple redundant storage devices, data loss due to failure of the storage controller is not uncommon. This problem is partially mitigated by a dual or clustered controller. However, since storage controllers are generally complex and expensive, the scalability of such an approach is limited. - Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
-
FIG. 1 illustrates a traditional data storage model including multiple storage devices, such as hard disks, connected to a single storage controller. -
FIG. 2 is a block diagram illustrating an example data storage environment capable of implementing the systems and methods discussed herein. -
FIG. 3 is a block diagram illustrating an example client device including a virtual storage controller. -
FIG. 4 is a block diagram illustrating an example storage node. -
FIG. 5 is a flow diagram illustrating an example method of writing data to a shared storage system. -
FIG. 6 is a flow diagram illustrating an example method of communicating data between devices across a network. -
FIG. 7 is a flow diagram illustrating an example method of updating data stored in a storage node upon activation of the storage node. -
FIG. 8 illustrates example data communications in a data storage environment. -
FIG. 9 illustrates another example of data communications in a data storage environment. -
FIG. 10 illustrates an example pair of virtual storage pools (VSP) distributed across a set of nodes. -
FIG. 11 illustrates an example client application accessing a virtual storage pool. -
FIG. 12 illustrates example components within a particular client implementation. -
FIG. 13 illustrates example components contained in an alternative client implementation. -
FIG. 14 illustrates the performance of an example write operation. -
FIG. 15 illustrates an example cluster of virtual storage pools in a high availability group. -
FIG. 16 is a block diagram of a machine in the example form of a computer system within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. - Example systems and methods to manage the storing and retrieval of data in a shared storage system are described. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of example embodiments. It will be evident, however, to those skilled in the art that the present invention may be practiced without these specific details.
- The systems and methods described herein utilize a virtual storage controller in each client device to access a shared storage system that includes multiple storage nodes. The virtual storage controller is responsible for applying data redundancy (e.g., data mirroring), applying data consistency, orchestrating concurrent data access, and avoiding any data collisions or other conflicts with file or disk operations. The described systems and methods perform data redundancy calculations and other data-handling operations within the virtual storage controller in each client device, thereby minimizing or eliminating bottlenecks and other restrictions on the performance of a shared storage system.
- The systems and methods discussed herein distribute storage processing tasks to any number of virtual storage controllers that operate independently and in parallel with each other. Instead of having a fixed number of storage controllers (e.g., one or two storage controllers), the described systems and methods have as many virtual storage controllers as there are machines that wish to access the shared storage system. Each of the virtual storage controllers may optimize the data for storage before communicating the data across the network to the shared storage system. Additionally, the various data fragments are written in parallel to many different storage nodes to increase throughput to the shared storage system. Since the storage nodes do not process the data, they can move data from the network to one or more storage devices in an efficient manner.
- Since the virtual storage controllers reside and operate within the client device accessing the shared storage system, much of the actual storage I/O (input/output) can be cached locally in the client device. In a typical storage I/O operation, certain parts of the storage node are frequently read from and/or written to. For these elements, the data can be cached locally on the client device to reduce the need to traverse the network and access the various storage nodes in the shared storage system. This offers significant performance increases by reducing the need to traverse the network for data and communicate the data back across the network.
- The virtual storage controller can also optimize its behavior to the data and workload associated with the client device on which the virtual storage controller is running. This optimization can be performed on each client device based on the specific needs and operating patterns of the client device without affecting the operation of other client devices, which have their own virtual storage controllers optimized to the operation of their associated client device. This allows users to maintain one or more client devices or systems that are capable of performing in an optimized manner for many simultaneous and different workloads.
-
FIG. 2 is a block diagram illustrating an example data storage environment 200 capable of implementing the systems and methods discussed herein. The data storage environment 200 includes a shared storage system 202 that is accessed by multiple client devices 204, 206, and 208 via a data communication network 210 or other communication mechanism. In some embodiments, the data communication network 210 is a local area network (LAN), wide area network (WAN), the Internet, or a combination of two or more networks.
- The shared storage system 202 includes multiple storage nodes 212, 214, and 216 coupled to the data communication network 210. The storage nodes 212-216 communicate with the client devices 204-208 via the data communication network 210. Each client device 204, 206, and 208 includes a virtual storage controller 218, 220, and 222, respectively, which manages the storage and retrieval of data between that client device and the storage nodes 212-216.
- In the data storage environment of FIG. 2, data redundancy calculations and other data-related functions are performed by the virtual storage controllers 218-222 in the client devices 204-208, thereby eliminating the bottleneck caused by the single storage controller model discussed above with respect to FIG. 1. The virtual storage controllers 218-222 also orchestrate concurrent access over the data communication network 210 by means of the virtual locking discussed herein.
- In some embodiments, each virtual storage controller 218-222 is a software component installed on a client device 204-208 that takes the role of what traditionally would be a hardware or software storage controller residing either on a storage device, such as a SAN (storage area network), NAS (network attached storage), or disk array, or on a network file server. In other embodiments, the virtual storage controllers 218-222 are implemented as hardware components or hardware modules contained within each client device 204-208.
- The virtual storage controllers 218-222 perform various storage logic functions and provide a defined interface through which the client devices 204-208 and various applications running thereon can access the shared
storage system 202. Additionally, the virtual storage controllers 218-222 may communicate with the storage nodes 212-216 to perform virtual locking, and to access or store information. During access or store operations, the virtual storage controller 218-222 performs various data redundancy calculations. For example, if one of the storage nodes 212-216 is inactive or missing, the virtual storage controller 218-222 can recalculate the missing data portions using data redundancy information, and present the missing data portions to a client application as though there is no missing storage node. - The client devices 204-208 represent any type of computing device or other system that includes the virtual storage controller 218-222. The client devices 204-208 typically execute a set of applications such as a word processor or server software (e.g., web or email server software). In some embodiments, a backup server stores data backups via the virtual storage controllers 218-222 to the storage nodes 212-216. The client devices 204-208 access the virtual storage controllers 218-222 via defined interfaces. In the case of the Windows Operating System, the client node access may be a mapped network drive, such as G:\folder\file or a UNC (universal naming convention) path, for instance \\datastore\volume\folder\file.
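- The redundancy recalculation described above (reconstructing the data held by an inactive or missing storage node from the remaining nodes) can be illustrated with simple XOR parity, as used in RAID-3/5-style schemes. The sketch below is only an illustration of that general idea; the function names are hypothetical and the patent does not specify the exact redundancy code.

    # Illustrative only: simple XOR parity as one way to realize "recalculate the
    # missing data portions using data redundancy information". Names are hypothetical.

    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                result[i] ^= b
        return bytes(result)

    def rebuild_missing_block(available_blocks, parity_block):
        """Reconstruct the block of one missing storage node from the others plus parity."""
        return xor_blocks(available_blocks + [parity_block])

    # Example: three data blocks striped across nodes, plus one parity block.
    data = [b"abcd", b"efgh", b"ijkl"]
    parity = xor_blocks(data)
    # If the node holding data[1] is missing, the virtual storage controller can
    # recompute it from the surviving blocks and the parity block:
    assert rebuild_missing_block([data[0], data[2]], parity) == data[1]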
- The storage nodes 212-216 may be remote computing devices or other systems capable of accepting different file I/O (input/output) or control requests from the virtual storage controllers 218-222. The storage nodes 212-216 provide storage capacity (e.g., a hard disk) as well as other resources, such as cache memory or CPU processing resources to the virtual storage controllers 218-222. The storage nodes 212-216 can be thought of as a server in the traditional client-server computing model. However, in contrast to such servers the storage nodes 212-216 shown in
FIG. 2 have minimal logic and perform file operations as directed by the virtual storage controllers 218-222. Additionally, the storage nodes 212-216 do not generally function as singular units. Instead, there are a minimum number of storage nodes 212-216 required for operation of a particular virtual storage controller 218-222. -
FIG. 3 is a block diagram illustrating example client device 204 (shown in FIG. 2) including the virtual storage controller 218. The client device 204 includes a communication module 302 that allows the client device 204 to communicate with other devices and systems, such as storage nodes and other client devices. As discussed herein, the virtual storage controller 218 performs various functions associated with the storage and retrieval of data between the client device 204 and multiple storage nodes in a shared storage system.
- The client device 204 also includes a data buffer 306 that stores, for example, incoming and outgoing data. A virtual locking manager 308 performs various virtual locking functions, for example, during the writing of data to the multiple storage nodes. Additional details regarding these virtual locking functions are discussed herein. A data repetition manager 310 handles various data writing and re-writing functions when storing data to the multiple storage nodes. A data recovery module 312 performs various operations related to, for example, restoring or recovering data from one or more storage nodes.
- The client device 204 further includes a data dispersal and redundancy module 314 that manages the storing of data on the multiple storage nodes such that the data is dispersed across the multiple storage nodes and stored in a redundant manner. For example, the data dispersal and redundancy module 314 may handle the striping of data across the multiple storage nodes, storing of redundant copies of the same data set, and queuing data for various write operations. A user interface module 316 allows one or more users to interact with the various modules, systems, and applications discussed herein. For example, users may configure various data storage and data retrieval parameters that define the operation of the client device 204 as well as the multiple storage nodes.
-
FIG. 4 is a block diagram illustrating example storage node 212 (shown in FIG. 2). The storage node 212 includes a communication module 402 that allows the storage node 212 to communicate with other devices and systems, such as client devices and other storage nodes. The storage node 212 includes one or more storage devices 404, such as hard disk drives, non-volatile memory devices, and the like. The storage node 212 also includes a data buffer 406 that stores, for example, incoming and outgoing data.
- The storage node 212 further includes a data management module 408 that handles the storage of data to the storage devices 404 as well as the retrieval of data from the storage devices 404. A data repetition manager 410 handles various data writing and re-writing functions when storing data to the storage devices 404. In some embodiments, the instructions for these data writing and re-writing functions are received from one or more client devices. A data recovery module 412 performs various operations related to, for example, restoring or recovering data from one or more of the storage devices 404.
- The storage node 212 also includes a data dispersal and redundancy module 414 that manages the storing of data on the storage devices 404 such that the data is properly dispersed across the storage devices 404 as well as the storage devices in other storage nodes. Further, the data dispersal and redundancy module manages the redundant storage of data across the storage devices 404 and the storage devices on other storage nodes. As discussed herein, data may be stored by striping the data across the multiple storage nodes and by storing redundant copies of the same data set across the multiple storage nodes. A user interface module 416 allows one or more users to interact with the various modules, systems, and applications discussed herein. For example, users may configure various data storage and data retrieval parameters that define the operation of the storage node 212 as well as other storage nodes.
-
FIG. 5 is a flow diagram illustrating an example method 500 of writing data to a shared storage system. Initially, a client device needs to write a data file to a shared storage system at 502. Although particular examples discussed herein may refer to a "data file" or a "data packet", the described systems and methods are applicable to any type of data arranged in any manner and having any size. The method 500 continues as a virtual storage controller in the client device communicates a write operation vote request for the data file to all storage nodes in the shared storage system at 504. A write operation vote request is a request for the storage nodes to respond by indicating whether the storage node is available to accept a new write operation. A positive response by the storage node indicates that the storage node is not currently performing another write operation and, therefore, is available to accept a new write operation. A negative response by the storage node indicates that the storage node is not available to accept a new write operation (e.g., the storage node is already processing a different write operation). A negative response is also referred to as a "collision response" because initiation of a new write operation would likely generate a data collision at the storage node.
- At 506, the virtual storage controller identifies responses from at least a portion of the storage nodes. In particular implementations, responses to the write operation vote request are received from some storage nodes, but not necessarily all storage nodes in the shared storage system. The
method 500 continues by determining whether positive responses (to the write operation vote request) have been received from a quorum of storage nodes at 508. As discussed in greater detail below, a quorum of storage nodes includes more than half of all storage nodes in the shared storage system. For example, if a shared storage system includes 15 storage nodes, a quorum is eight storage nodes. If positive responses are received from a quorum of storage nodes, the client device (e.g., the virtual storage controller in the client device) initiates a write operation to write the data file to the shared storage system at 510. While the client device is performing the write operation, other client devices are prevented from performing other write operations until the pending write operation is completed. - If positive responses are not received from a quorum of storage nodes at 508, the
method 500 continues by determining whether at least one collision response was received from a storage node at 512. If at least one collision response was received from a storage node, the client device (e.g., the virtual storage controller in the client device) cancels the intended write operation at 514, or delays the write operation for a period of time and re-sends the write operation vote request after the period of time. If no collision response was received from a storage node at 512, the method 500 continues by determining whether a time limit has been reached at 516. The time limit is, for example, a predetermined time period during which responses to the write operation vote request are collected. If the time limit is not reached, the method 500 returns to 506 to continue identifying responses from the storage nodes. However, if the time limit is reached, the virtual storage controller repeats communication of the write operation vote request at 518, thereby repeating the method 500.
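- As a rough illustration of the vote-and-quorum flow of FIG. 5, the sketch below assumes a simple request/poll interface; the function names, the response strings, and the retry behaviour are assumptions rather than the patent's own implementation.

    # Hedged sketch of the client-side write vote of FIG. 5; names are illustrative.
    import time

    def request_write_quorum(storage_nodes, send_vote_request, poll_response, time_limit=1.0):
        """Broadcast a write operation vote request and wait for a quorum of positive responses."""
        quorum_size = len(storage_nodes) // 2 + 1      # a quorum is more than half of all nodes
        while True:
            for node in storage_nodes:                 # 504: send the vote request to every node
                send_vote_request(node)
            positive = set()
            deadline = time.monotonic() + time_limit
            while time.monotonic() < deadline:
                # poll_response is assumed to return promptly with a (node, response) pair
                node, response = poll_response()       # 506: identify responses as they arrive
                if response == "positive":
                    positive.add(node)
                    if len(positive) >= quorum_size:   # 508: quorum of positive responses
                        return sorted(positive)        # 510: caller may start the write
                elif response == "collision":          # 512: another write is already pending
                    return None                        # 514: cancel (or delay and retry later)
            # 516/518: time limit reached without a quorum; repeat the vote request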
- In traditional data storage models, such as the model shown in
FIG. 1 , a single storage controller coordinates data traffic from individual computing devices to the storage devices, which prevents conflicting file or disk operations. The data storage environment discussed herein (e.g., the environment shown inFIG. 2 ) does not provide a single storage controller. Although an individual virtual storage controller can make decisions related to the client device on which the virtual storage controller is operating, multiple virtual storage controllers need to coordinate their decisions with respect to the shared storage system. The coordination of multiple virtual storage controllers is accomplished with a virtual locking system. - In some embodiments, the virtual locking system operates as a “democratic” voting among the multiple virtual storage controllers. In particular implementations, the virtual locking system is referred to as a “virtual atomic locking system” because it ensures that conflicting operations do not occur at the same time.
-
FIG. 6 is a flow diagram illustrating anexample method 600 of communicating data between devices across a network. Initially, a sending device (e.g., a client device) generates a data packet and assigns a sequence number to the data packet at 602. In some embodiments, the sequence number is unique over a particular time period or across a particular number of data packets. Themethod 600 continues as the sending device communicates the data packet to one or more receiving devices via a network without requesting acknowledgement of a receipt at 604. In this example, data packets are sent between two devices (or nodes) via a data communication network without requiring the generation of a confirmation upon receipt of each data packet. Instead, a buffer, such as a first-in-first-out (FIFO) buffer is used to store previously sent data packets. - Upon receiving the data packet, a receiving device buffers the data packet and determines whether the sequence number is one greater than the previously received data packet at 606. For example, if the previously received data packet has a sequence number of 52918, the next data packet in the sequence will have a sequence number of 52919. If the received data packet has the correct sequence number at 608, the receiving device continues receiving data at 610. However, if the received data packet does not have the correct sequence number, the receiving device searches the buffer (e.g., the FIFO buffer) for one or more intervening data packets at 612. For example, if the previously received data packet has a sequence number of 52918, and the received data packet has a sequence number of 52922, the receiving device searches the buffer for intervening data packets having sequence numbers of 52919, 52920, and 52921.
- If the one or more intervening data packets are in the buffer at 614, the receiving device continues receiving data at 610. However, if the intervening data packets are not in the buffer, the
method 600 determines whether a waiting period has expired at 616. Since data packets may not arrive in sequential order, the waiting period allows extra time for the “out of order” data packets to arrive such that the receiving device can properly reconstruct the data packets in the correct sequence. In some embodiments, the waiting period is approximately one second. If the waiting period has not expired at 616, themethod 600 continues monitoring the buffer for the missing data packets. If the waiting period has expired at 616, the receiving device presumes that the missing data packet has been lost during the communication process, and the receiving device communicates a repeat data request to the sending device at 618. Upon receiving the repeat data request, the sending device re-sends the requested data packet at 620. -
FIG. 7 is a flow diagram illustrating anexample method 700 of updating data stored in a storage node upon activation of the storage node. Themethod 700 is initiated when a storage node has been powered off or otherwise disconnected from the data communication network for any length of time. In this situation, some of the data in the storage node may be obsolete due to data updates performed while the storage node was disconnected from the data communication network. - Upon restarting or rebooting a storage node in a shared storage system, the storage node is placed into a virtual controller operating mode at 702. In a typical storage node operating mode, the storage node is a “dumb server” that serves requests from client devices (e.g., virtual storage controllers in the client devices). When entering the virtual controller operating mode, the storage node becomes a client device to other storage nodes in the shared storage system, which allows the storage node to receive data from the other storage nodes for purposes of updating the data stored in the storage node.
- The storage node marks all of its data files as “dirty” at 704 and does not share the dirty data files with other storage nodes in the shared storage system. Marking a data file as “dirty” indicates that the data file may contain out-of-date information. Data files that contain current (i.e., up-to-date) information are typically marked as “clean” data files.
- The
method 700 continues as the storage node receives file index metadata from a quorum of other storage nodes in the shared storage system at 706. The file index metadata identifies the current status and content of all data files stored in the shared storage system. The storage node compares the current data files stored within the storage node with the file index metadata at 708 on a file-by-file basis. If a particular data file currently stored on the storage node is not present on a quorum of storage nodes in the shared storage system (as determined at 710), that data file is deleted from the storage node at 712. In this situation, the data file is deleted since a corresponding data file is not present on a quorum of other storage nodes, indicating that the data file was likely deleted from the shared storage system while the storage node was disconnected from the data communication network. - If a particular data file is present on a quorum of nodes (as determined at 710), the
method 700 compares the particular data file to the corresponding data file on other storage nodes in the shared storage system at 714. In some embodiments, the data file comparison includes a comparison of a file name, a file size, a date of file creation, a date of last file modification, file attributes, a security attribute, and the like. If the data file on the storage node is identical to the corresponding data file in the shared storage system, the data file is marked as “clean” at 718. If a particular data file is locked or opened for a write operation at the time of the file comparison, the comparison is postponed until the file is unlocked or closed (e.g., the write operation is completed). - If the data file comparison indicates that the data file on the storage node is not identical to the current data file in the shared storage system, as indicated by the file index metadata, the data file is updated by modifying the file properties and/or retrieving data portions of the file at 716. This updating of the data file is performed by accessing one or more of the currently active storage nodes in the quorum of storage nodes. In some embodiments, the data file is read from the quorum of storage nodes on a cluster-by-cluster basis. For each cluster, a fully redundant cluster image is constructed in the storage device's memory and stored to the storage device within the storage node (e.g.,
storage device 404 shown inFIG. 4 ). After the data file is updated on the storage node, the data file is marked as “clean” at 718. - The
method 700 continues by selecting the next data file for comparison at 720 and returns to 708 where the selected data file is compared with the file index metadata. Themethod 700 ends after all files in the storage node have been compared with the file index metadata, and updated as necessary. Whenmethod 700 ends, the storage node is removed from the virtual controller operating mode and returned to operate as a “normal” storage node in the shared storage system. -
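- The recovery pass of FIG. 7 can be summarized in a few lines; the dictionary-based representation below is an assumption, while the steps themselves (mark dirty, fetch the quorum's file index, delete, update, mark clean) follow the description above.

    # Hedged sketch of the recovery pass of FIG. 7; data structures are assumptions.

    def recover_storage_node(local_files, fetch_quorum_index, fetch_current_copy):
        """Resynchronize a reconnected storage node with the shared storage system."""
        dirty = dict(local_files)                  # 704: mark every local data file as dirty
        index = fetch_quorum_index()               # 706: file index metadata from a quorum of nodes
        clean = {}
        for name, meta in dirty.items():           # 708: compare file by file
            if name not in index:                  # 710: not present on a quorum of nodes
                continue                           # 712: delete (it was removed while offline)
            if meta != index[name]:                # name, size, dates, attributes differ
                meta = fetch_current_copy(name)    # 716: rebuild the file from the quorum nodes
            clean[name] = meta                     # 718: mark the file as clean
        return clean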
FIG. 8 illustrates example data communications in adata storage environment 800. Five storage nodes are shown inFIG. 8 . A quorum of storage nodes is three, which is predefined prior to initialization of thedata storage environment 800. In this example, aclient device 802 wants to initiate a write operation to the storage nodes in thedata storage environment 800. To accomplish this operation, theclient device 802 communicates a write operation vote request to each of the five storage nodes, indicated by the five lines from theclient device 802 to each of the five storage nodes. In this example,Storage Node 1,Storage Node 2, andStorage Node 3 are the first three storage nodes to positively respond to the write operation vote request. Thus,Storage Node 1,Storage Node 2, andStorage Node 3 becomequorum nodes 804 for this particular write operation. AlthoughStorage Node 4 andStorage Node 5 may also respond positively to the write operation vote request, the quorum is already established asStorage Node 1,Storage Node 2, andStorage Node 3. Therefore,Storage Node 4 andStorage Node 5 areredundant nodes 806 for this particular write operation. AlthoughStorage Node 4 andStorage Node 5 are redundant nodes, they still participate in the write operation performed by theclient device 802. During future write operations, different groups of storage nodes may be the quorum nodes for those operations. -
FIG. 9 illustrates another example of data communications in adata storage environment 900. Five storage nodes are shown inFIG. 9 , and a quorum of storage nodes is three. In this example, aclient device 902 wants to initiate a write operation to the storage nodes in thedata storage environment 900. Additionally, aclient device 904 wants to initiate its own write operation to the storage nodes in thedata storage environment 900. Bothclient devices client device 902 to each of the five storage nodes, and by the five broken lines from theclient device 904 to each of the five storage nodes. Theclient device 902 establishes a quorum of nodes 906 (Storage Node 1,Storage Node 2, and Storage Node 3) before theclient device 904 is able to establish a quorum of nodes. In this example,Storage Node 3 responds to the write operation vote request from the client device 904 (indicated by a bold broken line 908) by sending a collision response. The collision response is generated becauseStorage Node 3 is already in thequorum 906 and cannot accept another write operation until the write operation initiated by theclient device 902 is completed. In response to receiving the collision response, theclient device 904 will cancel its intended write operation or wait for a period of time before re-sending another write operation vote request to the multiple storage nodes. - It is important to note that, while both the number of total storage nodes and the number of quorum nodes is predefined at the initialization time, which actual storage nodes of the total nodes that make the quorum for a given operation is matter of a chance. For example, in a minimal configuration, the total storage nodes=3 and quorum nodes=2. Any given operation would require the presence of either storage nodes [1,2] or [1,3] or [2,3]. In some embodiments, quorum membership is established on a FCFS (first come first served) basis. So, if all three storage nodes are present, only the two storage nodes that responded first will be used in the quorum.
- Storage nodes that do not make the quorum for a given operation are called out-of-quorum or redundant storage nodes. A redundant storage node can be made such by either being late to FCFS, miss a whole operation, or miss a larger time span. All decisions made by the quorum will be forced upon the redundant storage nodes without question. Therefore, the redundant storage nodes are slightly lagging behind the quorum nodes and have to process extra information. This is overcome by an advanced multilevel queuing mechanism. If a redundant storage node loses a single transaction it will detect the loss and perform a transactional log replay to recover the missing operation. Additionally, if a redundant storage node was absent for a prolonged period of time, it will have to perform full recovery by scanning all files on a disk and downloading missing pieces from other storage nodes, as discussed herein. The term “redundant storage nodes” also applies to the concept of data redundancy. A particular embodiment of the environment of
FIG. 2 adds an overhead redundant data to files so that missing chunks can be recovered with some of the storage nodes missing. The number of data nodes is equal to quorum nodes and the number of data redundant nodes is equal to out-of-quorum redundant nodes. - As discussed herein, the environment of
FIG. 2 requires the quorum of storage nodes to be more than half of all storage nodes in order to avoid so-called “split brain.” If a quorum is defined as a number of nodes less than half, a situation may arise where two separate groups of quorums will think the other part is not present and undertake a decision that may be colliding to the other quorum group. To prevent this, the model defines the quorum to be (½)+1 of the total storage nodes. - As discussed herein, an individual virtual storage controller can make autonomous decisions within the bounds of the computer or client device on which it is running. Multiple distributed virtual storage controllers have to communicate remotely with each other to coordinate decisions. These decisions include, for example, which virtual storage controller can access a particular file on the shared storage system at a given time. The environment of
FIG. 2 has solved this problem by developing a virtual atomic locking system, discussed herein, which works by means of “democratic” voting among remote virtual storage controllers. “Atomic” refers to the system's ability to ensure that only one operation can happen at time. - In some embodiments, the storage nodes do not vote themselves. Instead, they are used as a pot where votes from virtual controllers are cast and later are drawn from. In other words, the storage nodes are a scoreboard where virtual storage controllers register pending operations. If there are two offending operations for the same file, a collision (or veto) will occur. Otherwise, the operation will be able continue. To avoid potential abuse of this system, devices will only accept data on which they have a previously open, or registered vote. The virtual locking mechanism exists to ensure atomicity of disk operations and prevent metadata corruption on the lowest level. Concurrent access to files is ensured by individual applications and mechanisms like file, range or opportunistic locking mechanisms.
-
FIGS. 10-15 illustrate a particular embodiment of a data storage system and method.FIG. 10 shows schematically a pair of virtual storage pools (VSP) distributed across a set of nodes according to an embodiment of the present invention.FIG. 11 shows a client application accessing a virtual storage pool according to an embodiment of the invention.FIG. 12 shows the main components within a Microsoft Windows client implementation of the invention.FIG. 13 shows the main components within an alternative Microsoft Windows client implementation of the invention.FIG. 14 a write operation being performed according to an embodiment of the invention.FIG. 15 shows a cluster of VSPs in a high availability group. - Referring to
FIG. 10 , a VSP (Virtual Storage Pool), VSP A or VSP B, according to one embodiment is formed up from Local Storage Entities (LSE) served by either server orclient nodes 1 . . . 5. In a simple implementation, an LSE can be just a hidden subdirectory on a disk of the node. However, alternative implementations referred to later could implement an LSE as an embedded transactional database. In general, LSE size is determined by the available free storage space on the various nodes contributing to the VSP. Preferably, LSE size is the same on every node, and so global LSE size within a VSP will be dependent on smallest LSE in the VSP. - The size of VSP is calculated on VSP Geometry:
-
- If no data redundancy is used (Geometry=N), the size of the VSP is determined by the number N of nodes multiplied by size of the LSE.
- When mirroring (M replicas) is being used (Geometry=1+M), the size of the VSP is equal to the size of the LSE.
- When RAID3/5 is being used (Geometry=N+1), the size of the VSP equals N+1 multiplied by size of LSE.
- When RAID-6 is being used (Geometry=N+2), the size of VSP equals N+2 multiplied by size of LSE.
- If N+M redundancy is used (Geometry=N+M), the size of VSP equals N+M multiplied by the size of LSE.
- Because the LSE is the same on every node, a situation may occur when one or few nodes having a major storage size difference could be under utilized in contributing to virtual network storage. For example in a workgroup of 6 nodes, two nodes having 60 GB disks and four having 120 GB disks, the LSE on two nodes may be only 60 GB, and so single VSP size could only be 6*60 GB=360 GB as opposed to 120+120+120+120+60+60=600 GB.
- In such a situation, multiple VSPs can be defined. So in the above example, two VSPs could be created, one 6*60 GB and a second 4*60 GB, and these will be visible as two separate network disks. In fact, multiple VSPs enable different redundancy levels and security characteristics to be applied to different VSPs, so enabling greater flexibility for administrators.
- Using the invention, a VSP is visible to an Active Client, Nodes or indeed Legacy Client as a normal disk formed from the combination of LSEs with one of the geometries outlined above. When a client stores or retrieves data from a VSP it attempts to connect to every Server or Node of the VSP and to perform an LSE I/O operation with an offset based on VSP Geometry.
- Before describing an implementation of the invention in detail, we define the following terms:
-
- LSE Block Size (LBS) is a minimal size of data that can be accessed on an LSE. Currently it is hard coded at 1024 bytes.
- Network Block Size (NBS) is a maximum size of data payload to be transferred in a single packet. Preferably, NBS is smaller than the network MTU (Maximum Transmission Unit)/MSS (Maximum Segment Size) and in the present implementations NBS is equal to LBS, i.e. 1024 bytes, to avoid network fragmentation. (Standard MTU size on an Ethernet type network is 1500 bytes).
- VSP Block Size (VBS) is the size of data block at which data is distributed within the network: VBS=LBS*number of non-redundant nodes (N).
- VSP Cluster Size (VCS)—data (contents of the files before redundancy is calculated) is divided into so called clusters, similar in to data clusters of traditional disk based file systems (FAT (File Allocation Table), NTFS (New Technology File System)). Cluster size is determined by VSP Geometry and NBS (Network Block Size) in following way:
- VCS=Number of Data Nodes*NBS
- VCS is a constant data size that a redundancy algorithm can be applied to. If a data unit is smaller than VCS, mirroring is used. If data unit is larger than VCS it will be wrapped to a new cluster. For example, with reference to
FIG. 14 , if a VSP has 5 data nodes and the NBS is 1400 bytes, the VCS would be 5*1400=7000 bytes. If a client application performs a write I/O operation of 25 kilobytes of data, the NDFS will split it to three clusters (of 7000 bytes) and remaining 4000 bytes will be mirrored among nodes. Another implementation would pad the remaining 4000 bytes with 3000 zeros up to full cluster size and distribute among nodes as a fourth cluster. - Host Block Size (HBS) is the block size used on a host operating system.
- Referring now to the implementation of
FIG. 12 where only Nodes and a single VSP per network are considered. In this implementation, a simple user mode application (u_ndfs.exe) is used for startup, maintenance, recovery, cleanup, VSP forming, LSE operations and the communication protocol, however, it will be seen that separate functionality could equally be implemented in separate applications. - Upon startup, u_ndfs.exe reads config.xml, a configuration file, which defines LSE location and VSP properties i.e. geometry, disk name and IP addresses of nodes. (The configuration file is defined through user interaction with a configuration GUI portion (CONFIG GUI) of U_ndfs.) U_ndfs then spawns a networking protocol thread, NDFS Service. The network protocol used by the thread binds to a local interface on a UDP port and starts network communications with other nodes contributing to the VSP.
- If fewer than a quorum of N of the N+M nodes are detected by the node on start-up, the VSP is suspended for that node until a quorum is reached.
- Where there is N+M redundancy and where N<=M, it is possible for two separate quorums to exist on two detached networks. In such a case, if N<=50% of N+M, but a quorum is reached at a node, the VSP is set to read-only mode at that node.
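- A minimal sketch of this start-up decision is given below, assuming the behaviour just described (suspend below a quorum of N, read-only when N is no more than 50% of N+M, read/write otherwise); the function is illustrative only:

    # Illustrative sketch of the start-up quorum decision described above.
    def vsp_mode(detected, n, m):
        total = n + m
        if detected < n:                  # fewer than a quorum of N nodes seen
            return "suspended"
        if n <= total / 2:                # two disjoint quorums could exist
            return "read-only"
        return "read-write"

    print(vsp_mode(detected=3, n=3, m=3))  # -> read-only (N <= 50% of N+M)
    print(vsp_mode(detected=4, n=4, m=2))  # -> read-write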
- Once a quorum is present, local LSE to VSP directory comparison is performed by recovering directory metadata from another node.
- If the VSP contains any newer files/directories than the local LSE (for instance if the node has been off the network and files/directories have been changed), a recovery procedure is performed by retrieving redundant network parts from one or more other nodes and rebuilding LSE data for the given file/directory. In a simple implementation, for recovery, the node closest to the requesting node based on network latency is used as the source for metadata recovery.
- So, for example, in an N+M redundancy VSP implementation, a file is split into N+M clusters, each cluster containing a data component and a redundant component. Where one or more of the N+M nodes of the VSP were unavailable when the file was written or updated, during recovery the previously unavailable node must obtain at least N of the clusters in order to re-build the cluster which should be stored for the file on the recovering node, so maintaining the overall level of redundancy for all files of the VSP.
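- As a toy illustration of why at least N of the N+M parts suffice for a rebuild, the sketch below uses simple XOR parity (the N+1 case, i.e. M=1); the actual redundancy algorithm is not specified here, and this choice is an assumption made purely for illustration:

    # Toy N+1 sketch: any single missing part can be rebuilt by XOR-ing the
    # remaining N parts; real N+M codes generalize this idea.
    from functools import reduce

    def xor_blocks(blocks):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    data_parts = [b"AAAA", b"BBBB", b"CCCC"]        # N = 3 data parts
    parity = xor_blocks(data_parts)                 # M = 1 redundant part

    # The node holding part 1 was offline; rebuild it from any N remaining parts.
    rebuilt = xor_blocks([data_parts[0], data_parts[2], parity])
    assert rebuilt == data_parts[1]                 # -> b"BBBB"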
- It will also be seen that, after start-up and recovery, the networking protocol should remain aware of network failures and needs to perform an LSE rescan and recovery every time the node is reconnected to the network. The user should be alerted as to when to expect access to the VSP when this happens.
- A transaction log can be employed to speed up the recovery process instead of using a directory scan, and if the number of changes to the VSP exceeds the log size, a full recovery could be performed.
- It can also be useful during recovery to perform a full disk scan in the manner of fsck (“file system check” or “file system consistency check” in UNIX) or chkdsk (Windows) to ensure files have not been corrupted.
- When LSE data is consistent with the VSP, the networking thread begins server operations and u_ndfs.exe loads a VSP disk device kernel driver (ndfs.sys). The disk device driver (NDFS Driver) then listens to requests from the local operating system and applications, while u_ndfs.exe listens to requests from other nodes through the networking thread.
- Referring to
FIG. 11 , in operation, an application (for instance Microsoft Word) running on the host operating system calls the I/O subsystem in the OS kernel and requests a portion of data with an offset (0 to file length) and size. (If the size is bigger than the HBS, the kernel will fragment the request into smaller subsequent requests.) The I/O subsystem then sends an IRP (I/O request packet) message to the responsible device driver module, the NDFS driver. In the case of a request to the VSP, the kernel device driver receives the request and passes it on to the network protocol thread, NDFS Service, for further processing based on the VSP geometry. - At the same time, when the server side of the networking thread receives a request from a client node through the network, an LSE I/O operation is performed on the local storage.
- Both client and server I/Os can be thought of as normal I/O operations with an exception that they are intercepted and passed through the NDFS driver and NDFS service like a proxy. N+M redundancy can thus be implemented with the network protocol transparent to both clients and servers.
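- Purely for illustration, the offset handling driven by the VSP geometry can be sketched as below, assuming a plain striped layout of NBS-sized blocks across N data nodes; this layout is an assumption for the sketch and not the patented geometry handling:

    # Illustrative mapping of a VSP byte offset to (data node, LSE offset),
    # assuming simple striping of NBS-sized blocks across the data nodes.
    def vsp_to_lse(offset, data_nodes, nbs):
        block = offset // nbs             # which NBS-sized block of the VSP
        node = block % data_nodes         # data node holding that block
        lse_block = block // data_nodes   # block index within that node's LSE
        return node, lse_block * nbs + offset % nbs

    print(vsp_to_lse(offset=10000, data_nodes=5, nbs=1400))  # -> (2, 1600)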
- Referring now to
FIG. 13 , in a further refined implementation of the invention, a separate kernel driver, NDFS Net Driver, is implemented for high-speed network communications instead of using Winsock. This driver implements its own layer-3 protocol and only reverts to IP/UDP in case of communication problems. - Also, instead of using the Windows file system for the LSE, a database, NDFS DB, can be used. Such a database-implemented LSE can also prevent users from manipulating the raw data stored in a hidden directory as in the implementation of
FIG. 12 . - For the implementation of
FIG. 12 , a network protocol is used to provide communications between VSP nodes on the network. Preferably, every protocol packet comprises: -
- Protocol ID
- Protocol Version
- Geometry
- Function ID
- Function Data
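- By way of illustration, such a packet could be assembled as sketched below; the field widths, the protocol identifier value and the encoding of the geometry as the N and M of the N+M configuration are all assumptions, since the wire format is not specified here:

    # Illustrative packing of the protocol fields listed above; field widths
    # and the PROTOCOL_ID value are assumptions for the sketch.
    import struct

    PROTOCOL_ID = 0x4E44        # assumed 16-bit identifier
    PROTOCOL_VERSION = 1

    def build_packet(n, m, function_id, function_data):
        # geometry carried here as the N and M of the N+M configuration (assumed)
        header = struct.pack("!HHBBH", PROTOCOL_ID, PROTOCOL_VERSION, n, m, function_id)
        return header + function_data

    pkt = build_packet(4, 1, 0x0108, b"")   # 0x0108 = NDFS_FN_PING_REQUEST
    print(pkt.hex())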
- For the implementations of
FIGS. 12 and 13 , the following functions are defined: -
NDFS_FN_READ_FILE_REQUEST 0x0101
NDFS_FN_READ_FILE_REPLY 0x0201
NDFS_FN_WRITE_FILE 0x0202
NDFS_FN_CREATE_FILE 0x0102
NDFS_FN_DELETE_FILE 0x0103
NDFS_FN_RENAME_FILE 0x0104
NDFS_FN_SET_FILE_SIZE 0x0105
NDFS_FN_SET_FILE_ATTR 0x0106
NDFS_FN_QUERY_DIR_REQUEST 0x0207
NDFS_FN_QUERY_DIR_REPLY 0x0203
NDFS_FN_PING_REQUEST 0x0108
NDFS_FN_PING_REPLY 0x0204
NDFS_FN_WRITE_MIRRORED 0x0109
NDFS_FN_READ_MIRRORED_REQUEST 0x0205
NDFS_FN_READ_MIRRORED_REPLY 0x0206
- As can be seen above, every function has a unique id, and the highest order byte defines whether the given function is BROADCAST (1) or UNICAST (2) based.
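- For instance, a receiver could classify a function from its identifier as follows (a minimal sketch of the convention just described, using values from the table above):

    # The high-order byte of the function id encodes BROADCAST (0x01) or
    # UNICAST (0x02); values taken from the table above.
    NDFS_FN_CREATE_FILE = 0x0102
    NDFS_FN_WRITE_FILE = 0x0202

    def is_broadcast(function_id):
        return (function_id >> 8) == 0x01

    print(is_broadcast(NDFS_FN_CREATE_FILE))   # -> True  (broadcast)
    print(is_broadcast(NDFS_FN_WRITE_FILE))    # -> False (unicast)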
- The functions can be categorized as carrying data or metadata (directory operations). Also defined are control functions such as PING, which do not directly influence the file system or data.
- Functions which carry data are as follows:
-
- READ_REQUEST
- READ_REPLY
- WRITE
- WRITE_MIRRORED
- READ_MIRRORED_REQUEST
- READ_MIRRORED_REPLY
- whereas functions which carry metadata are as follows:
-
- CREATE—creates a file or directory with a given name and attributes
- DELETE—deletes a file or directory with its contents
- RENAME—renames a file or directory or changes its location in the directory structure (MOVE)
- SET_ATTR—changes file attributes
- SET_SIZE—sets file size. Note that the file size doesn't imply how much space the file physically occupies on the disk and is only an attribute.
- QUERY_DIR_REQUEST
- QUERY_DIR_REPLY
- In the present implementations, all metadata (directory information) is available on every participating node. All functions manipulating metadata are therefore BROADCAST based and do not require two-way communications—the modification is sent by the modifying node as a broadcast message to all other nodes to update the metadata. Verification of such operations is performed only on the requesting node.
- The rest of the metadata functions are used to read directory contents and are used in the recovery process. These functions are unicast based, because the implementations assume metadata to be consistent on all available nodes.
- After fragmentation of a file into clusters, the last fragment usually has an arbitrary size smaller than the full cluster size (unless the file size is rounded up to the full cluster size). Such a fragment cannot easily be distributed using N+M redundancy and is instead stored with 1+M redundancy (replication) using the function WRITE_MIRRORED. This is also valid for files that are smaller than the cluster size. (Alternative implementations may have different functionality such as padding or reducing the block size to 1 byte.)
- WRITE_MIRRORED is a BROADCAST function because an identical data portion is replicated to all nodes. It should be noted that for READ_MIRRORED operations, all data is available locally (because it is identical on every node) and no network I/O is required for such small portions of data (except for recovery purposes).
- Note that the mirrored block size has to be smaller than the cluster size; however, it can be larger than the NBS. In such cases more than one WRITE_MIRRORED packet has to be sent, each with a different offset for the data being written.
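- As a minimal sketch of this splitting (assuming, for illustration only, a simple packet structure of an offset plus up to NBS bytes of data):

    # Illustrative: a mirrored fragment larger than NBS is sent as several
    # WRITE_MIRRORED packets, each carrying its own offset.
    def mirrored_packets(fragment, base_offset, nbs):
        return [{"offset": base_offset + i, "data": fragment[i:i + nbs]}
                for i in range(0, len(fragment), nbs)]

    for p in mirrored_packets(b"x" * 4000, base_offset=21000, nbs=1400):
        print(p["offset"], len(p["data"]))   # -> 21000/1400, 22400/1400, 23800/1200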
- In implementing N+M redundancy, clusters are divided into individual packets. To read data from a file, the broadcast function READ_REQUEST is used. The function is sent to all nodes with the cluster offset to be retrieved. Every node replies with the unicast function READ_REPLY containing its own NBS-sized portion of data for the cluster.
- The node performing the READ_REQUEST waits for the first READ_REPLY packets sufficient to recover the data (equal in number to the number of data nodes). Once enough packets have been received, any further reply packets are discarded. The data is then processed by an N+M redundancy function to recover the original file data.
- Functions of the REQUEST/REPLY type carry a 64-bit unique identification number, generated from the computer's system clock, which is inserted when the REQUEST is sent. The packet ID is stored in a queue. When the required number of REPLY packets with the same ID has been received, the REQUEST ID is removed from the queue. Packets with IDs not matching those in the queue are discarded.
- The packet ID is also used in functions other than REQUEST/REPLY to prevent execution of functions on the same node as the sending node. When a node receives a REQUEST packet with an ID matching a REQUEST ID in the REQUEST queue, the REQUEST is removed from the queue. Otherwise the REQUEST function in the packet will be executed.
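- A minimal sketch of this REQUEST/REPLY matching is given below; the structures and method names are assumptions made for illustration only:

    # Illustrative: pending requests keyed by a clock-derived 64-bit id; replies
    # are collected until enough have arrived, stray packets are discarded.
    import time

    class PendingRequests:
        def __init__(self):
            self.pending = {}                      # request id -> replies so far

        def new_request(self):
            req_id = time.time_ns() & 0xFFFFFFFFFFFFFFFF   # 64-bit clock-based id
            self.pending[req_id] = []
            return req_id

        def on_reply(self, req_id, payload, needed):
            if req_id not in self.pending:         # unknown or finished id: discard
                return None
            self.pending[req_id].append(payload)
            if len(self.pending[req_id]) >= needed:
                return self.pending.pop(req_id)    # enough replies to recover data
            return None

    q = PendingRequests()
    rid = q.new_request()
    q.on_reply(rid, b"part-0", needed=2)
    print(q.on_reply(rid, b"part-1", needed=2))    # -> [b'part-0', b'part-1']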
- The broadcast function PING_REQUEST is sent when the networking thread is started on a given node. In response, the node receives a number of unicast PING_REPLY responses from the other nodes and, if these are fewer than required, the VSP is suspended until a quorum is reached.
- Every other node that starts up subsequently sends its own PING_REQUEST packets, and these can be used to indicate to a node that the required number of nodes is now available, so that VSP operations can be resumed for read-only or read/write access.
- The PING functions are used to establish the closest (lowest latency) machine to the requesting node and this is used when recovery is performed. As explained above, re-sync and recovery are initiated when a node starts up and connects to the network that has already reached quorum. This is done to synchronize any changes made to files when the node was off the network. When the recovery process is started, every file in every directory is marked with a special attribute. The attribute is removed after recovery is performed. During the recovery operation the disk is not visible to the local user. However, remote nodes can perform I/O operations on the locally stored files not marked with the recovery attribute. This ensures that data cannot be corrupted by desynchronization.
- The recovering node reads the directory from the lowest-latency node using the QUERY_DIR_REQUEST/QUERY_DIR_REPLY functions. The directory is compared to the locally stored metadata for the VSP. When comparing individual files, the following properties are taken into consideration:
-
- Name—if the file is present on the source machine and not present on the local node, the file will be created using the received metadata and the file recovery process will be performed. If the file exists on the local node and does not exist on the remote node, it will be removed locally. Exactly the same protocol applies to directories (which are accessed recursively).
- Size of file—if the locally stored file size differs from that on the source node, the file is removed and recovered.
- Last modification time—if the modification time is different the file is deleted and recovered.
- File attributes (e.g. read-only, hidden, archive)—unlike the previous parameters, in case of a difference in file attributes the file is not deleted and recovered; instead only the attributes are applied. In more extensive implementations, attributes such as Access Control List (ACL) and security information can be applied. Some implementations may also include additional attributes such as file versioning or snapshots.
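- The comparison rules above can be sketched as follows (an illustrative sketch only; the dictionaries stand in for whatever metadata structures an implementation actually uses):

    # Illustrative: differing size or modification time triggers delete-and-recover,
    # differing attributes are simply re-applied.
    def reconcile(local, remote):
        if remote is None:
            return "delete locally"
        if local is None:
            return "create from metadata and recover data"
        if local["size"] != remote["size"] or local["mtime"] != remote["mtime"]:
            return "delete and recover"
        if local["attrs"] != remote["attrs"]:
            return "apply remote attributes"
        return "in sync"

    local = {"size": 7000, "mtime": 100, "attrs": {"read-only"}}
    remote = {"size": 7000, "mtime": 100, "attrs": {"read-only", "hidden"}}
    print(reconcile(local, remote))                # -> apply remote attributes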
- Note that last modification time recovery would not make sense if local time were used on every machine. Instead, every WRITE and WRITE_MIRRORED request carries a timestamp generated by the requesting node in the packet payload, and this timestamp is assigned to the metadata for the file/directory on every node.
- The per-file data recovery process is performed by first retrieving the file size from the metadata (which, prior to data recovery, has to be “metadata recovered”). The file size is then divided into cluster sizes and standard READ_REQUESTs are performed to retrieve the data. An exception is the last cluster, which is retrieved from the metadata source node (lowest latency) using READ_MIRRORED_REQUEST. The last part of the recovery process comprises setting the proper metadata parameters (size, attributes, last modification time) on the file.
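- Purely as an illustration of this per-file recovery plan (the function names are taken from the protocol above, but the plan structure itself is an assumption):

    # Illustrative: full clusters are fetched with READ_REQUEST, the trailing
    # partial cluster with READ_MIRRORED_REQUEST from the lowest-latency node.
    def recovery_plan(file_size, vcs):
        full_clusters = file_size // vcs
        plan = [("READ_REQUEST", i * vcs) for i in range(full_clusters)]
        if file_size % vcs:
            plan.append(("READ_MIRRORED_REQUEST", full_clusters * vcs))
        return plan

    print(recovery_plan(file_size=25000, vcs=7000))
    # -> [('READ_REQUEST', 0), ('READ_REQUEST', 7000), ('READ_REQUEST', 14000),
    #     ('READ_MIRRORED_REQUEST', 21000)]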
- File and attribute comparison is performed recursively for all files and folders on the disk storage. When recovery is finished all data is in sync and normal operations are resumed.
- Alternative implementations of the invention can have dynamic recovery as opposed to recovery on startup only. For example, the networking thread can detect that the node lost communication with the other nodes and perform recovery each time communication is restored.
- As mentioned above, a live transaction log file (journaling) can assist such recovery, and the node could periodically check the journal or its serial number to detect whether any changes have been made that the node was unaware of. Also, the journal checking and the metadata and last-cluster recovery should be performed in a more distributed manner than simply trusting the node with the lowest latency.
- While the above implementations have been described as implemented on Windows platforms, it will be seen that the invention can equally be implemented with other operating systems since, despite operating system differences, a similar architecture to that shown in
FIGS. 12 and 13 can be used. - In more extensive implementations of the invention, different security models can be applied to a VSP:
-
- Open Access—no additional security mechanisms, anyone with a compatible client can access the VSP. Only collision detection will have to be performed to avoid data corruption. Standard Windows ACLs and Active Directory authentication will apply.
- Symmetric Key Access—a node trying to access the VSP will have to provide a shared pass-phrase. The data on the LSE and/or the protocol messages will be encrypted, and the pass-phrase will be used to decrypt data on the fly when doing N+M redundancy calculations.
- Certificate Security—in this security model, when forming a VSP, every node will have to exchange its public keys with every other node on the network. When a new node tries to access the VSP it will have to be authorized on every existing participating node (very high security).
- While the implementations above have been described in terms of active clients, servers and nodes, it will be seen that the invention can easily be made available to legacy clients, for example, using Windows Share. It may be particularly desirable to allow access only to clients which are more likely to be highly available, for example, a laptop.
- Further variations of the above-described implementations are also possible. So, for example, rather than using an IP or MAC address to identify nodes participating in a VSP, a dedicated NODE_ID could be used. Administration functions could also be expanded to enable one node to be replaced with another node in the VSP, individual nodes to be added to or removed from the VSP, or the VSP geometry to be changed.
- Additionally, the VSP could be implemented in a way that presents a continuous random access device formatted with a native file system such as FAT, NTFS or EXT/UFS on Unix. The VSP could also be used as a virtual magnetic tape device for storing backups using traditional backup software.
- Native file system usage represents a potential problem in that multiple nodes, while updating the same volume, could corrupt the VSP file system metadata due to multi-node locking issues. To mitigate this, either a clustered file system would be used, or each node could access only a separate virtualized partition at a time.
- For example, in a High Availability cluster such as Microsoft Cluster Server, Sun Cluster or HP Serviceguard, an HA Resource Group traditionally comprises a LUN, Disk Volume or partition residing on shared storage (a disk array or SAN) that is used only by this Resource Group and moves between nodes together with other resources. Referring now to
FIG. 15 , such a LUN or partition could be replaced with an NDFS VSP formed out of cluster nodes and internal disks, so removing the HA cluster software's dependency on shared physical storage.
-
FIG. 16 is a block diagram of a machine in the example form of a computer system 1600 within which instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. -
Example computer system 1600 includes a processor 1602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both), a main memory 1604, and a static memory 1606, which communicate with each other via a bus 1608. Computer system 1600 may further include a video display device 1610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). Computer system 1600 also includes an alphanumeric input device 1612 (e.g., a keyboard), a user interface (UI) navigation device 1614 (e.g., a mouse), a disk drive unit 1616, a signal generation device 1618 (e.g., a speaker) and a network interface device 1620. -
Disk drive unit 1616 includes a machine-readable medium 1622 on which is stored one or more sets of instructions and data structures (e.g., software) 1624 embodying or utilized by any one or more of the methodologies or functions described herein. Instructions 1624 may also reside, completely or at least partially, within main memory 1604, within static memory 1606, and/or within processor 1602 during execution thereof by computer system 1600, main memory 1604 and processor 1602 also constituting machine-readable media. - While machine-
readable medium 1622 is shown in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions or data structures. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including by way of example semiconductor memory devices, e.g., Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. -
Instructions 1624 may further be transmitted or received over a communications network 1626 using a transmission medium. Instructions 1624 may be transmitted using network interface device 1620 and any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (“LAN”), a wide area network (“WAN”), the Internet, mobile telephone networks, Plain Old Telephone (POTS) networks, and wireless data networks (e.g., WiFi and WiMAX networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. - Although an embodiment has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. For example, the described systems and methods may provide an educational benefit in other disciplines by providing incentives for users to access the systems and methods. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The accompanying drawings that form a part hereof, show by way of illustration, and not of limitation, specific embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. This Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.
- Such embodiments of the inventive subject matter may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Thus, although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description.
Claims (20)
1. A computer-implemented method comprising:
receiving a first data packet communicated from a sending device, the first data packet having an associated first sequence number;
receiving a second data packet from the sending device, the second data packet having an associated second sequence number;
determining, using one or more processors, whether the second sequence number is a next number in sequence following the first sequence number; and
responsive to determining that the second sequence number is not the next number in sequence following the first sequence number:
identifying at least one intervening sequence number between the first sequence number and the second sequence number;
determining whether at least one data packet associated with the at least one intervening sequence number is stored in a buffer; and
communicating a request to the sending device to re-send the second data packet responsive to determining that the at least one data packet associated with the at least one intervening sequence number is not stored in the buffer.
2. A method as recited in claim 1 , further comprising waiting a predetermined time period for receipt of the at least one data packet associated with the at least one intervening sequence number before communicating the request to the sending device to re-send the second data packet.
3. A method as recited in claim 1 , further comprising, prior to communicating the request to the sending device to re-send the second data packet, re-checking the buffer after a time period for the at least one data packet associated with the at least one intervening sequence number.
4. A method as recited in claim 1 , further comprising receiving the at least one data packet associated with the intervening sequence numbers not stored in the buffer responsive to communicating the request to the sending device to re-send the second data packet.
5. A method as recited in claim 1 , the receiving of the first data packet and the second data packet being performed without acknowledging receipt of the first data packet or the second data packet to the sending device.
6. A computer-implemented method of restoring data on a first data storage node that is part of a data storage system including a plurality of storage nodes, the method comprising:
marking a plurality of data entries stored in the first data storage node as dirty;
receiving a data index associated with the data storage system from a quorum of the data storage nodes in the data storage system;
comparing, using one or more processors, the data index with data entries stored in the first data storage node;
deleting data entries from the first data storage node that are not contained in the data index; and
modifying data entries stored in the first data storage node to match corresponding data entries in the data storage system based on the data index.
7. A method as recited in claim 6 , the modifying of the data entries stored in the first data storage node further comprising marking each modified data entry as clean.
8. A method as recited in claim 6 , the method performed responsive to initializing the first data storage node.
9. A method as recited in claim 6 , the method performed responsive to restarting the first data storage node.
10. A method as recited in claim 6 , the comparing of the data index with data entries stored in the first data storage node further comprising comparing at least one of a file name, a file size, a date of creation, a date of last modification, a data attribute, data content, and a security descriptor.
11. A method as recited in claim 6 , the modifying of the data entries stored in the first data storage node including retrieving data from the quorum of data storage nodes.
12. A method as recited in claim 6 , the quorum of data storage nodes excluding the first data storage node.
13. A method as recited in claim 6 , the modifying of the data entries stored in the first data storage node managed by a virtual storage controller.
14. An apparatus comprising:
a memory to store received data packets; and
one or more processors coupled to the memory, the one or more processors configured to:
mark a plurality of data entries stored in a first data storage node as dirty;
receive a data index associated with a data storage system that includes a plurality of storage nodes, the data index received from a quorum of the data storage nodes in the data storage system;
compare the data index with data entries stored in the first data storage node;
delete data entries from the first data storage node that are not contained in the data index; and
modify data entries stored in the first data storage node to match corresponding data entries in the data storage system based on the data index.
15. The apparatus of claim 14 , the one or more processors further configured to mark each modified data entry as clean.
16. The apparatus of claim 14 , the data entries being modified responsive to initializing the apparatus.
17. The apparatus of claim 14 , the data entries modified responsive to restarting the apparatus after a period of time being disconnected from the data storage system.
18. The apparatus of claim 14 , the data index compared with the data entries by comparing at least one of a file name, a file size, a date of creation, a date of last modification, a data attribute, data content, and a security descriptor.
19. The apparatus of claim 14 , the one or more processors further configured to retrieve current data from the quorum of data storage nodes.
20. The apparatus of claim 14 , the quorum of data storage nodes excluding the first data storage node.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/492,633 US20130151653A1 (en) | 2007-06-22 | 2012-06-08 | Data management systems and methods |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IES2007/0453 | 2007-06-22 | ||
IES20070453 | 2007-06-22 | ||
US12/143,134 US20080320097A1 (en) | 2007-06-22 | 2008-06-20 | Network distributed file system |
US201161520560P | 2011-06-10 | 2011-06-10 | |
US13/492,633 US20130151653A1 (en) | 2007-06-22 | 2012-06-08 | Data management systems and methods |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/143,134 Continuation-In-Part US20080320097A1 (en) | 2007-06-22 | 2008-06-20 | Network distributed file system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130151653A1 true US20130151653A1 (en) | 2013-06-13 |
Family
ID=40137642
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/143,134 Abandoned US20080320097A1 (en) | 2007-06-22 | 2008-06-20 | Network distributed file system |
US13/492,633 Abandoned US20130151653A1 (en) | 2007-06-22 | 2012-06-08 | Data management systems and methods |
US13/492,615 Expired - Fee Related US9880753B2 (en) | 2007-06-22 | 2012-06-08 | Write requests in a distributed storage system |
US14/519,003 Abandoned US20150067093A1 (en) | 2007-06-22 | 2014-10-20 | Network Distributed File System |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/143,134 Abandoned US20080320097A1 (en) | 2007-06-22 | 2008-06-20 | Network distributed file system |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/492,615 Expired - Fee Related US9880753B2 (en) | 2007-06-22 | 2012-06-08 | Write requests in a distributed storage system |
US14/519,003 Abandoned US20150067093A1 (en) | 2007-06-22 | 2014-10-20 | Network Distributed File System |
Country Status (2)
Country | Link |
---|---|
US (4) | US20080320097A1 (en) |
IE (1) | IES20080508A2 (en) |
Cited By (169)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130282994A1 (en) * | 2012-03-14 | 2013-10-24 | Convergent.Io Technologies Inc. | Systems, methods and devices for management of virtual memory systems |
US20140081989A1 (en) * | 2012-07-30 | 2014-03-20 | Steve K Chen | Wavefront muxing and demuxing for cloud data storage and transport |
US20140188803A1 (en) * | 2012-12-31 | 2014-07-03 | Martyn Roland James | Systems and methods for automatic synchronization of recently modified data |
US20140195217A1 (en) * | 2013-01-09 | 2014-07-10 | Apple Inc. | Simulated input/output devices |
US20140337296A1 (en) * | 2013-05-10 | 2014-11-13 | Bryan Knight | Techniques to recover files in a storage network |
US20150117258A1 (en) * | 2013-10-30 | 2015-04-30 | Samsung Sds Co., Ltd. | Apparatus and method for changing status of cluster nodes, and recording medium having the program recorded therein |
US20150156172A1 (en) * | 2012-06-15 | 2015-06-04 | Alcatel Lucent | Architecture of privacy protection system for recommendation services |
US20150236892A1 (en) * | 2014-02-20 | 2015-08-20 | Sercomm Corporation | Network device and method for maintaining network connection |
US20160283141A1 (en) * | 2015-03-27 | 2016-09-29 | Pure Storage, Inc. | Data striping across storage nodes that are assigned to multiple logical arrays |
US9525738B2 (en) | 2014-06-04 | 2016-12-20 | Pure Storage, Inc. | Storage system architecture |
US9563506B2 (en) | 2014-06-04 | 2017-02-07 | Pure Storage, Inc. | Storage cluster |
US9672125B2 (en) | 2015-04-10 | 2017-06-06 | Pure Storage, Inc. | Ability to partition an array into two or more logical arrays with independently running software |
US20170242770A1 (en) * | 2015-02-19 | 2017-08-24 | Netapp, Inc. | Manager election for erasure coding groups |
US9747229B1 (en) | 2014-07-03 | 2017-08-29 | Pure Storage, Inc. | Self-describing data format for DMA in a non-volatile solid-state storage |
US9768953B2 (en) | 2015-09-30 | 2017-09-19 | Pure Storage, Inc. | Resharing of a split secret |
US20170286446A1 (en) * | 2016-04-01 | 2017-10-05 | Tuxera Corporation | Systems and methods for enabling modifications of multiple data objects within a file system volume |
US9798477B2 (en) | 2014-06-04 | 2017-10-24 | Pure Storage, Inc. | Scalable non-uniform storage sizes |
US9817576B2 (en) | 2015-05-27 | 2017-11-14 | Pure Storage, Inc. | Parallel update to NVRAM |
US20170346887A1 (en) * | 2016-05-24 | 2017-11-30 | International Business Machines Corporation | Cooperative download among low-end devices under resource constrained environment |
US9836245B2 (en) | 2014-07-02 | 2017-12-05 | Pure Storage, Inc. | Non-volatile RAM and flash memory in a non-volatile solid-state storage |
US9836234B2 (en) | 2014-06-04 | 2017-12-05 | Pure Storage, Inc. | Storage cluster |
US9843453B2 (en) | 2015-10-23 | 2017-12-12 | Pure Storage, Inc. | Authorizing I/O commands with I/O tokens |
US9940234B2 (en) | 2015-03-26 | 2018-04-10 | Pure Storage, Inc. | Aggressive data deduplication using lazy garbage collection |
US9948615B1 (en) | 2015-03-16 | 2018-04-17 | Pure Storage, Inc. | Increased storage unit encryption based on loss of trust |
US10007457B2 (en) | 2015-12-22 | 2018-06-26 | Pure Storage, Inc. | Distributed transactions with token-associated execution |
US10067841B2 (en) * | 2014-07-08 | 2018-09-04 | Netapp, Inc. | Facilitating n-way high availability storage services |
US10108355B2 (en) | 2015-09-01 | 2018-10-23 | Pure Storage, Inc. | Erase block state detection |
US10114714B2 (en) | 2014-07-02 | 2018-10-30 | Pure Storage, Inc. | Redundant, fault-tolerant, distributed remote procedure call cache in a storage system |
US10114757B2 (en) | 2014-07-02 | 2018-10-30 | Pure Storage, Inc. | Nonrepeating identifiers in an address space of a non-volatile solid-state storage |
US10140149B1 (en) | 2015-05-19 | 2018-11-27 | Pure Storage, Inc. | Transactional commits with hardware assists in remote memory |
US10141050B1 (en) | 2017-04-27 | 2018-11-27 | Pure Storage, Inc. | Page writes for triple level cell flash memory |
US10178169B2 (en) | 2015-04-09 | 2019-01-08 | Pure Storage, Inc. | Point to point based backend communication layer for storage processing |
US10185506B2 (en) | 2014-07-03 | 2019-01-22 | Pure Storage, Inc. | Scheduling policy for queues in a non-volatile solid-state storage |
US10203903B2 (en) | 2016-07-26 | 2019-02-12 | Pure Storage, Inc. | Geometry based, space aware shelf/writegroup evacuation |
US10210926B1 (en) | 2017-09-15 | 2019-02-19 | Pure Storage, Inc. | Tracking of optimum read voltage thresholds in nand flash devices |
US10216420B1 (en) | 2016-07-24 | 2019-02-26 | Pure Storage, Inc. | Calibration of flash channels in SSD |
US10216411B2 (en) | 2014-08-07 | 2019-02-26 | Pure Storage, Inc. | Data rebuild on feedback from a queue in a non-volatile solid-state storage |
US10229010B2 (en) * | 2013-11-22 | 2019-03-12 | Netapp, Inc. | Methods for preserving state across a failure and devices thereof |
US10261690B1 (en) | 2016-05-03 | 2019-04-16 | Pure Storage, Inc. | Systems and methods for operating a storage system |
US10303547B2 (en) | 2014-06-04 | 2019-05-28 | Pure Storage, Inc. | Rebuilding data across storage nodes |
US10324812B2 (en) | 2014-08-07 | 2019-06-18 | Pure Storage, Inc. | Error recovery in a storage cluster |
US10324790B1 (en) * | 2015-12-17 | 2019-06-18 | Amazon Technologies, Inc. | Flexible data storage device mapping for data storage systems |
US10366004B2 (en) | 2016-07-26 | 2019-07-30 | Pure Storage, Inc. | Storage system with elective garbage collection to reduce flash contention |
US10372617B2 (en) | 2014-07-02 | 2019-08-06 | Pure Storage, Inc. | Nonrepeating identifiers in an address space of a non-volatile solid-state storage |
US10394762B1 (en) | 2015-07-01 | 2019-08-27 | Amazon Technologies, Inc. | Determining data redundancy in grid encoded data storage systems |
US10394789B1 (en) | 2015-12-07 | 2019-08-27 | Amazon Technologies, Inc. | Techniques and systems for scalable request handling in data processing systems |
US10430306B2 (en) | 2014-06-04 | 2019-10-01 | Pure Storage, Inc. | Mechanism for persisting messages in a storage system |
US10437790B1 (en) | 2016-09-28 | 2019-10-08 | Amazon Technologies, Inc. | Contextual optimization for data storage systems |
US10454498B1 (en) | 2018-10-18 | 2019-10-22 | Pure Storage, Inc. | Fully pipelined hardware engine design for fast and efficient inline lossless data compression |
US10467527B1 (en) | 2018-01-31 | 2019-11-05 | Pure Storage, Inc. | Method and apparatus for artificial intelligence acceleration |
US10498580B1 (en) | 2014-08-20 | 2019-12-03 | Pure Storage, Inc. | Assigning addresses in a storage system |
US10496327B1 (en) | 2016-09-28 | 2019-12-03 | Amazon Technologies, Inc. | Command parallelization for data storage systems |
US10496330B1 (en) | 2017-10-31 | 2019-12-03 | Pure Storage, Inc. | Using flash storage devices with different sized erase blocks |
US10515701B1 (en) | 2017-10-31 | 2019-12-24 | Pure Storage, Inc. | Overlapping raid groups |
US10528419B2 (en) | 2014-08-07 | 2020-01-07 | Pure Storage, Inc. | Mapping around defective flash memory of a storage array |
US10528488B1 (en) | 2017-03-30 | 2020-01-07 | Pure Storage, Inc. | Efficient name coding |
US10545687B1 (en) | 2017-10-31 | 2020-01-28 | Pure Storage, Inc. | Data rebuild when changing erase block sizes during drive replacement |
US10574754B1 (en) | 2014-06-04 | 2020-02-25 | Pure Storage, Inc. | Multi-chassis array with multi-level load balancing |
US10579474B2 (en) | 2014-08-07 | 2020-03-03 | Pure Storage, Inc. | Die-level monitoring in a storage cluster |
US10592336B1 (en) | 2016-03-24 | 2020-03-17 | Amazon Technologies, Inc. | Layered indexing for asynchronous retrieval of redundancy coded data |
US10614239B2 (en) | 2016-09-30 | 2020-04-07 | Amazon Technologies, Inc. | Immutable cryptographically secured ledger-backed databases |
US10614254B2 (en) * | 2017-12-12 | 2020-04-07 | John Almeida | Virus immune computer system and method |
US10642970B2 (en) * | 2017-12-12 | 2020-05-05 | John Almeida | Virus immune computer system and method |
US10642813B1 (en) | 2015-12-14 | 2020-05-05 | Amazon Technologies, Inc. | Techniques and systems for storage and processing of operational data |
US10650902B2 (en) | 2017-01-13 | 2020-05-12 | Pure Storage, Inc. | Method for processing blocks of flash memory |
US10657097B1 (en) | 2016-09-28 | 2020-05-19 | Amazon Technologies, Inc. | Data payload aggregation for data storage systems |
US10678664B1 (en) | 2016-03-28 | 2020-06-09 | Amazon Technologies, Inc. | Hybridized storage operation for redundancy coded data storage systems |
US10678452B2 (en) | 2016-09-15 | 2020-06-09 | Pure Storage, Inc. | Distributed deletion of a file and directory hierarchy |
US10691812B2 (en) | 2014-07-03 | 2020-06-23 | Pure Storage, Inc. | Secure data replication in a storage grid |
US10705732B1 (en) | 2017-12-08 | 2020-07-07 | Pure Storage, Inc. | Multiple-apartment aware offlining of devices for disruptive and destructive operations |
US10733053B1 (en) | 2018-01-31 | 2020-08-04 | Pure Storage, Inc. | Disaster recovery for high-bandwidth distributed archives |
US10768819B2 (en) | 2016-07-22 | 2020-09-08 | Pure Storage, Inc. | Hardware support for non-disruptive upgrades |
US10810157B1 (en) | 2016-09-28 | 2020-10-20 | Amazon Technologies, Inc. | Command aggregation for data storage operations |
US10831594B2 (en) | 2016-07-22 | 2020-11-10 | Pure Storage, Inc. | Optimize data protection layouts based on distributed flash wear leveling |
US10853266B2 (en) | 2015-09-30 | 2020-12-01 | Pure Storage, Inc. | Hardware assisted data lookup methods |
US10853146B1 (en) | 2018-04-27 | 2020-12-01 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US10860475B1 (en) | 2017-11-17 | 2020-12-08 | Pure Storage, Inc. | Hybrid flash translation layer |
US10866778B2 (en) * | 2018-06-21 | 2020-12-15 | Amadeus S.A.S. | Cross device display synchronization |
US10877827B2 (en) | 2017-09-15 | 2020-12-29 | Pure Storage, Inc. | Read voltage optimization |
US10884919B2 (en) | 2017-10-31 | 2021-01-05 | Pure Storage, Inc. | Memory management in a storage system |
US10929031B2 (en) | 2017-12-21 | 2021-02-23 | Pure Storage, Inc. | Maximizing data reduction in a partially encrypted volume |
US10929053B2 (en) | 2017-12-08 | 2021-02-23 | Pure Storage, Inc. | Safe destructive actions on drives |
US10931450B1 (en) | 2018-04-27 | 2021-02-23 | Pure Storage, Inc. | Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers |
US10944671B2 (en) | 2017-04-27 | 2021-03-09 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US10976948B1 (en) | 2018-01-31 | 2021-04-13 | Pure Storage, Inc. | Cluster expansion mechanism |
US10976947B2 (en) | 2018-10-26 | 2021-04-13 | Pure Storage, Inc. | Dynamically selecting segment heights in a heterogeneous RAID group |
US10979223B2 (en) | 2017-01-31 | 2021-04-13 | Pure Storage, Inc. | Separate encryption for a solid-state drive |
US10977128B1 (en) | 2015-06-16 | 2021-04-13 | Amazon Technologies, Inc. | Adaptive data loss mitigation for redundancy coding systems |
US10983732B2 (en) | 2015-07-13 | 2021-04-20 | Pure Storage, Inc. | Method and system for accessing a file |
US10983866B2 (en) | 2014-08-07 | 2021-04-20 | Pure Storage, Inc. | Mapping defective memory in a storage system |
US10990566B1 (en) | 2017-11-20 | 2021-04-27 | Pure Storage, Inc. | Persistent file locks in a storage system |
US11016667B1 (en) | 2017-04-05 | 2021-05-25 | Pure Storage, Inc. | Efficient mapping for LUNs in storage memory with holes in address space |
US11024390B1 (en) | 2017-10-31 | 2021-06-01 | Pure Storage, Inc. | Overlapping RAID groups |
US11068389B2 (en) | 2017-06-11 | 2021-07-20 | Pure Storage, Inc. | Data resiliency with heterogeneous storage |
US11068363B1 (en) | 2014-06-04 | 2021-07-20 | Pure Storage, Inc. | Proactively rebuilding data in a storage cluster |
US11080155B2 (en) | 2016-07-24 | 2021-08-03 | Pure Storage, Inc. | Identifying error types among flash memory |
US11099986B2 (en) | 2019-04-12 | 2021-08-24 | Pure Storage, Inc. | Efficient transfer of memory contents |
US11113161B2 (en) | 2016-03-28 | 2021-09-07 | Amazon Technologies, Inc. | Local storage clustering for redundancy coded data storage system |
US11137980B1 (en) | 2016-09-27 | 2021-10-05 | Amazon Technologies, Inc. | Monotonic time-based data storage |
US11190580B2 (en) | 2017-07-03 | 2021-11-30 | Pure Storage, Inc. | Stateful connection resets |
US11188432B2 (en) | 2020-02-28 | 2021-11-30 | Pure Storage, Inc. | Data resiliency by partially deallocating data blocks of a storage device |
US11204895B1 (en) | 2016-09-28 | 2021-12-21 | Amazon Technologies, Inc. | Data payload clustering for data storage systems |
US11232079B2 (en) | 2015-07-16 | 2022-01-25 | Pure Storage, Inc. | Efficient distribution of large directories |
US11256587B2 (en) | 2020-04-17 | 2022-02-22 | Pure Storage, Inc. | Intelligent access to a storage device |
US11269888B1 (en) | 2016-11-28 | 2022-03-08 | Amazon Technologies, Inc. | Archival data storage for structured data |
US11281394B2 (en) | 2019-06-24 | 2022-03-22 | Pure Storage, Inc. | Replication across partitioning schemes in a distributed storage system |
US11281624B1 (en) | 2016-09-28 | 2022-03-22 | Amazon Technologies, Inc. | Client-based batching of data payload |
US11294893B2 (en) | 2015-03-20 | 2022-04-05 | Pure Storage, Inc. | Aggregation of queries |
US11307998B2 (en) | 2017-01-09 | 2022-04-19 | Pure Storage, Inc. | Storage efficiency of encrypted host system data |
US11334254B2 (en) | 2019-03-29 | 2022-05-17 | Pure Storage, Inc. | Reliability based flash page sizing |
US11354058B2 (en) | 2018-09-06 | 2022-06-07 | Pure Storage, Inc. | Local relocation of data stored at a storage device of a storage system |
US11386060B1 (en) | 2015-09-23 | 2022-07-12 | Amazon Technologies, Inc. | Techniques for verifiably processing data in distributed computing systems |
US11399063B2 (en) | 2014-06-04 | 2022-07-26 | Pure Storage, Inc. | Network authentication for a storage system |
US11416144B2 (en) | 2019-12-12 | 2022-08-16 | Pure Storage, Inc. | Dynamic use of segment or zone power loss protection in a flash device |
US11416338B2 (en) | 2020-04-24 | 2022-08-16 | Pure Storage, Inc. | Resiliency scheme to enhance storage performance |
US11436023B2 (en) | 2018-05-31 | 2022-09-06 | Pure Storage, Inc. | Mechanism for updating host file system and flash translation layer based on underlying NAND technology |
US11438279B2 (en) | 2018-07-23 | 2022-09-06 | Pure Storage, Inc. | Non-disruptive conversion of a clustered service from single-chassis to multi-chassis |
US11449232B1 (en) | 2016-07-22 | 2022-09-20 | Pure Storage, Inc. | Optimal scheduling of flash operations |
US11467913B1 (en) | 2017-06-07 | 2022-10-11 | Pure Storage, Inc. | Snapshots with crash consistency in a storage system |
US11474986B2 (en) | 2020-04-24 | 2022-10-18 | Pure Storage, Inc. | Utilizing machine learning to streamline telemetry processing of storage media |
US11487455B2 (en) | 2020-12-17 | 2022-11-01 | Pure Storage, Inc. | Dynamic block allocation to optimize storage system performance |
US11494109B1 (en) | 2018-02-22 | 2022-11-08 | Pure Storage, Inc. | Erase block trimming for heterogenous flash memory storage devices |
US11500570B2 (en) | 2018-09-06 | 2022-11-15 | Pure Storage, Inc. | Efficient relocation of data utilizing different programming modes |
US11507297B2 (en) | 2020-04-15 | 2022-11-22 | Pure Storage, Inc. | Efficient management of optimal read levels for flash storage systems |
US11507597B2 (en) | 2021-03-31 | 2022-11-22 | Pure Storage, Inc. | Data replication to meet a recovery point objective |
US11513974B2 (en) | 2020-09-08 | 2022-11-29 | Pure Storage, Inc. | Using nonce to control erasure of data blocks of a multi-controller storage system |
US11520514B2 (en) | 2018-09-06 | 2022-12-06 | Pure Storage, Inc. | Optimized relocation of data based on data characteristics |
US11544143B2 (en) | 2014-08-07 | 2023-01-03 | Pure Storage, Inc. | Increased data reliability |
US11550752B2 (en) | 2014-07-03 | 2023-01-10 | Pure Storage, Inc. | Administrative actions via a reserved filename |
US11567917B2 (en) | 2015-09-30 | 2023-01-31 | Pure Storage, Inc. | Writing data and metadata into storage |
US11581943B2 (en) | 2016-10-04 | 2023-02-14 | Pure Storage, Inc. | Queues reserved for direct access via a user application |
US11604690B2 (en) | 2016-07-24 | 2023-03-14 | Pure Storage, Inc. | Online failure span determination |
US11604598B2 (en) | 2014-07-02 | 2023-03-14 | Pure Storage, Inc. | Storage cluster with zoned drives |
US11614880B2 (en) | 2020-12-31 | 2023-03-28 | Pure Storage, Inc. | Storage system with selectable write paths |
US11614893B2 (en) | 2010-09-15 | 2023-03-28 | Pure Storage, Inc. | Optimizing storage device access based on latency |
US11630593B2 (en) | 2021-03-12 | 2023-04-18 | Pure Storage, Inc. | Inline flash memory qualification in a storage system |
US11652884B2 (en) | 2014-06-04 | 2023-05-16 | Pure Storage, Inc. | Customized hash algorithms |
US11650976B2 (en) | 2011-10-14 | 2023-05-16 | Pure Storage, Inc. | Pattern matching using hash tables in storage system |
US11675762B2 (en) | 2015-06-26 | 2023-06-13 | Pure Storage, Inc. | Data structures for key management |
US11681448B2 (en) | 2020-09-08 | 2023-06-20 | Pure Storage, Inc. | Multiple device IDs in a multi-fabric module storage system |
US11704192B2 (en) | 2019-12-12 | 2023-07-18 | Pure Storage, Inc. | Budgeting open blocks based on power loss protection |
US11714572B2 (en) | 2019-06-19 | 2023-08-01 | Pure Storage, Inc. | Optimized data resiliency in a modular storage system |
US11714708B2 (en) | 2017-07-31 | 2023-08-01 | Pure Storage, Inc. | Intra-device redundancy scheme |
US11722455B2 (en) | 2017-04-27 | 2023-08-08 | Pure Storage, Inc. | Storage cluster address resolution |
US11734169B2 (en) | 2016-07-26 | 2023-08-22 | Pure Storage, Inc. | Optimizing spool and memory space management |
US11768763B2 (en) | 2020-07-08 | 2023-09-26 | Pure Storage, Inc. | Flash secure erase |
US11775189B2 (en) | 2019-04-03 | 2023-10-03 | Pure Storage, Inc. | Segment level heterogeneity |
US11782625B2 (en) | 2017-06-11 | 2023-10-10 | Pure Storage, Inc. | Heterogeneity supportive resiliency groups |
US11797212B2 (en) | 2016-07-26 | 2023-10-24 | Pure Storage, Inc. | Data migration for zoned drives |
US11832410B2 (en) | 2021-09-14 | 2023-11-28 | Pure Storage, Inc. | Mechanical energy absorbing bracket apparatus |
US11836348B2 (en) | 2018-04-27 | 2023-12-05 | Pure Storage, Inc. | Upgrade for system with differing capacities |
US11842053B2 (en) | 2016-12-19 | 2023-12-12 | Pure Storage, Inc. | Zone namespace |
US11847324B2 (en) | 2020-12-31 | 2023-12-19 | Pure Storage, Inc. | Optimizing resiliency groups for data regions of a storage system |
US11847013B2 (en) | 2018-02-18 | 2023-12-19 | Pure Storage, Inc. | Readable data determination |
US11847331B2 (en) | 2019-12-12 | 2023-12-19 | Pure Storage, Inc. | Budgeting open blocks of a storage unit based on power loss prevention |
US11861188B2 (en) | 2016-07-19 | 2024-01-02 | Pure Storage, Inc. | System having modular accelerators |
US11868309B2 (en) | 2018-09-06 | 2024-01-09 | Pure Storage, Inc. | Queue management for data relocation |
US11886308B2 (en) | 2014-07-02 | 2024-01-30 | Pure Storage, Inc. | Dual class of service for unified file and object messaging |
US11886334B2 (en) | 2016-07-26 | 2024-01-30 | Pure Storage, Inc. | Optimizing spool and memory space management |
US11893126B2 (en) | 2019-10-14 | 2024-02-06 | Pure Storage, Inc. | Data deletion for a multi-tenant environment |
US11893023B2 (en) | 2015-09-04 | 2024-02-06 | Pure Storage, Inc. | Deterministic searching using compressed indexes |
US11922070B2 (en) | 2016-10-04 | 2024-03-05 | Pure Storage, Inc. | Granting access to a storage device based on reservations |
US11947814B2 (en) | 2017-06-11 | 2024-04-02 | Pure Storage, Inc. | Optimizing resiliency group formation stability |
US11955187B2 (en) | 2017-01-13 | 2024-04-09 | Pure Storage, Inc. | Refresh of differing capacity NAND |
US11960371B2 (en) | 2014-06-04 | 2024-04-16 | Pure Storage, Inc. | Message persistence in a zoned system |
US11995318B2 (en) | 2016-10-28 | 2024-05-28 | Pure Storage, Inc. | Deallocated block determination |
US11994723B2 (en) | 2021-12-30 | 2024-05-28 | Pure Storage, Inc. | Ribbon cable alignment apparatus |
US11995336B2 (en) | 2018-04-25 | 2024-05-28 | Pure Storage, Inc. | Bucket views |
US12001700B2 (en) | 2021-03-18 | 2024-06-04 | Pure Storage, Inc. | Dynamically selecting segment heights in a heterogeneous RAID group |
Families Citing this family (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8990340B1 (en) * | 2006-06-27 | 2015-03-24 | Fingerprint Cards Ab | Aggregation system |
US9323681B2 (en) | 2008-09-18 | 2016-04-26 | Avere Systems, Inc. | File storage system, cache appliance, and method |
US8214404B2 (en) | 2008-07-11 | 2012-07-03 | Avere Systems, Inc. | Media aware distributed data layout |
US9342528B2 (en) * | 2010-04-01 | 2016-05-17 | Avere Systems, Inc. | Method and apparatus for tiered storage |
US9087066B2 (en) * | 2009-04-24 | 2015-07-21 | Swish Data Corporation | Virtual disk from network shares and file servers |
US20100274886A1 (en) * | 2009-04-24 | 2010-10-28 | Nelson Nahum | Virtualized data storage in a virtualized server environment |
US9239840B1 (en) | 2009-04-24 | 2016-01-19 | Swish Data Corporation | Backup media conversion via intelligent virtual appliance adapter |
US9389926B2 (en) | 2010-05-05 | 2016-07-12 | Red Hat, Inc. | Distributed resource contention detection |
US8229961B2 (en) | 2010-05-05 | 2012-07-24 | Red Hat, Inc. | Management of latency and throughput in a cluster file system |
TWI592805B (en) * | 2010-10-01 | 2017-07-21 | 傅冠彰 | System and method for sharing network storage and computing resource |
US11418580B2 (en) * | 2011-04-01 | 2022-08-16 | Pure Storage, Inc. | Selective generation of secure signatures in a distributed storage network |
US9203900B2 (en) | 2011-09-23 | 2015-12-01 | Netapp, Inc. | Storage area network attached clustered storage system |
US8706935B1 (en) * | 2012-04-06 | 2014-04-22 | Datacore Software Corporation | Data consolidation using a common portion accessible by multiple devices |
CN103793425B (en) | 2012-10-31 | 2017-07-14 | 国际商业机器公司 | Data processing method and device for distributed system |
CN106104502B (en) * | 2014-03-20 | 2019-03-22 | 慧与发展有限责任合伙企业 | System, method and medium for storage system affairs |
US11531495B2 (en) * | 2014-04-21 | 2022-12-20 | David Lane Smith | Distributed storage system for long term data storage |
US10146475B2 (en) * | 2014-09-09 | 2018-12-04 | Toshiba Memory Corporation | Memory device performing control of discarding packet |
US10409769B1 (en) * | 2014-09-29 | 2019-09-10 | EMC IP Holding Company LLC | Data archiving in data storage system environments |
US10282112B2 (en) | 2014-11-04 | 2019-05-07 | Rubrik, Inc. | Network optimized deduplication of virtual machine snapshots |
US10795726B2 (en) * | 2014-11-17 | 2020-10-06 | Hitachi, Ltd. | Processing requests received online and dividing processing requests for batch processing |
US10469580B2 (en) * | 2014-12-12 | 2019-11-05 | International Business Machines Corporation | Clientless software defined grid |
US10554749B2 (en) | 2014-12-12 | 2020-02-04 | International Business Machines Corporation | Clientless software defined grid |
US9923965B2 (en) | 2015-06-05 | 2018-03-20 | International Business Machines Corporation | Storage mirroring over wide area network circuits with dynamic on-demand capacity |
US10929431B2 (en) * | 2015-08-28 | 2021-02-23 | Hewlett Packard Enterprise Development Lp | Collision handling during an asynchronous replication |
US10581680B2 (en) | 2015-11-25 | 2020-03-03 | International Business Machines Corporation | Dynamic configuration of network features |
US10057327B2 (en) | 2015-11-25 | 2018-08-21 | International Business Machines Corporation | Controlled transfer of data over an elastic network |
US9923839B2 (en) | 2015-11-25 | 2018-03-20 | International Business Machines Corporation | Configuring resources to exploit elastic network capability |
US10216441B2 (en) | 2015-11-25 | 2019-02-26 | International Business Machines Corporation | Dynamic quality of service for storage I/O port allocation |
US10177993B2 (en) | 2015-11-25 | 2019-01-08 | International Business Machines Corporation | Event-based data transfer scheduling using elastic network optimization criteria |
US9923784B2 (en) | 2015-11-25 | 2018-03-20 | International Business Machines Corporation | Data transfer using flexible dynamic elastic network service provider relationships |
US20170262191A1 (en) * | 2016-03-08 | 2017-09-14 | Netapp, Inc. | Reducing write tail latency in storage systems |
KR101758558B1 (en) * | 2016-03-29 | 2017-07-26 | 엘에스산전 주식회사 | Energy managemnet server and energy managemnet system having thereof |
US10122795B2 (en) * | 2016-05-31 | 2018-11-06 | International Business Machines Corporation | Consistency level driven data storage in a dispersed storage network |
US11128740B2 (en) | 2017-05-31 | 2021-09-21 | Fmad Engineering Kabushiki Gaisha | High-speed data packet generator |
US10423358B1 (en) | 2017-05-31 | 2019-09-24 | FMAD Engineering GK | High-speed data packet capture and storage with playback capabilities |
US11222076B2 (en) * | 2017-05-31 | 2022-01-11 | Microsoft Technology Licensing, Llc | Data set state visualization comparison lock |
US10990326B2 (en) | 2017-05-31 | 2021-04-27 | Fmad Engineering Kabushiki Gaisha | High-speed replay of captured data packets |
US11392317B2 (en) | 2017-05-31 | 2022-07-19 | Fmad Engineering Kabushiki Gaisha | High speed data packet flow processing |
US11036438B2 (en) | 2017-05-31 | 2021-06-15 | Fmad Engineering Kabushiki Gaisha | Efficient storage architecture for high speed packet capture |
US10754557B2 (en) * | 2017-09-26 | 2020-08-25 | Seagate Technology Llc | Data storage system with asynchronous data replication |
US11334438B2 (en) | 2017-10-10 | 2022-05-17 | Rubrik, Inc. | Incremental file system backup using a pseudo-virtual disk |
US11372729B2 (en) | 2017-11-29 | 2022-06-28 | Rubrik, Inc. | In-place cloud instance restore |
US10768844B2 (en) * | 2018-05-15 | 2020-09-08 | International Business Machines Corporation | Internal striping inside a single device |
US10915381B2 (en) * | 2018-10-16 | 2021-02-09 | Ngd Systems, Inc. | System and method for computational storage device intercommunication |
US11520742B2 (en) | 2018-12-24 | 2022-12-06 | Cloudbrink, Inc. | Data mesh parallel file system caching |
US11086821B2 (en) * | 2019-06-11 | 2021-08-10 | Dell Products L.P. | Identifying file exclusions for write filter overlays |
CN113986124B (en) * | 2021-10-25 | 2024-02-23 | Sangfor Technologies Inc. | User configuration file access method, device, equipment and medium |
US20230125556A1 (en) * | 2021-10-25 | 2023-04-27 | Whitestar Communications, Inc. | Secure autonomic recovery from unusable data structure via a trusted device in a secure peer-to-peer data network |
US11438224B1 (en) | 2022-01-14 | 2022-09-06 | Bank Of America Corporation | Systems and methods for synchronizing configurations across multiple computing clusters |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5931935A (en) * | 1997-04-15 | 1999-08-03 | Microsoft Corporation | File system primitive allowing reprocessing of I/O requests by multiple drivers in a layered driver I/O system |
US20050015461A1 (en) * | 2003-07-17 | 2005-01-20 | Bruno Richard | Distributed file system |
US8090880B2 (en) * | 2006-11-09 | 2012-01-03 | Microsoft Corporation | Data consistency within a federation infrastructure |
US7363444B2 (en) * | 2005-01-10 | 2008-04-22 | Hewlett-Packard Development Company, L.P. | Method for taking snapshots of data |
US20070067332A1 (en) * | 2005-03-14 | 2007-03-22 | Gridiron Software, Inc. | Distributed, secure digital file storage and retrieval |
US7546427B2 (en) * | 2005-09-30 | 2009-06-09 | Cleversafe, Inc. | System for rebuilding dispersed data |
US7979867B2 (en) * | 2006-05-28 | 2011-07-12 | International Business Machines Corporation | Managing a device in a distributed file system, using plug and play |
- 2008-06-20 US US12/143,134 patent/US20080320097A1/en not_active Abandoned
- 2008-06-20 IE IE20080508A patent/IES20080508A2/en not_active IP Right Cessation
- 2012-06-08 US US13/492,633 patent/US20130151653A1/en not_active Abandoned
- 2012-06-08 US US13/492,615 patent/US9880753B2/en not_active Expired - Fee Related
- 2014-10-20 US US14/519,003 patent/US20150067093A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050237994A1 (en) * | 2000-04-17 | 2005-10-27 | Mo-Han Fong | Dual protocol layer automatic retransmission request scheme for wireless air interface |
US7042869B1 (en) * | 2000-09-01 | 2006-05-09 | Qualcomm, Inc. | Method and apparatus for gated ACK/NAK channel in a communication system |
Cited By (295)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11614893B2 (en) | 2010-09-15 | 2023-03-28 | Pure Storage, Inc. | Optimizing storage device access based on latency |
US11650976B2 (en) | 2011-10-14 | 2023-05-16 | Pure Storage, Inc. | Pattern matching using hash tables in storage system |
US10019159B2 (en) * | 2012-03-14 | 2018-07-10 | Open Invention Network Llc | Systems, methods and devices for management of virtual memory systems |
US20130282994A1 (en) * | 2012-03-14 | 2013-10-24 | Convergent.Io Technologies Inc. | Systems, methods and devices for management of virtual memory systems |
US9602472B2 (en) * | 2012-06-15 | 2017-03-21 | Alcatel Lucent | Methods and systems for privacy protection of network end users including profile slicing |
US20150156172A1 (en) * | 2012-06-15 | 2015-06-04 | Alcatel Lucent | Architecture of privacy protection system for recommendation services |
US20140081989A1 (en) * | 2012-07-30 | 2014-03-20 | Steve K Chen | Wavefront muxing and demuxing for cloud data storage and transport |
US10152524B2 (en) * | 2012-07-30 | 2018-12-11 | Spatial Digital Systems, Inc. | Wavefront muxing and demuxing for cloud data storage and transport |
US9678978B2 (en) * | 2012-12-31 | 2017-06-13 | Carbonite, Inc. | Systems and methods for automatic synchronization of recently modified data |
US10496609B2 (en) | 2012-12-31 | 2019-12-03 | Carbonite, Inc. | Systems and methods for automatic synchronization of recently modified data |
US20140188803A1 (en) * | 2012-12-31 | 2014-07-03 | Martyn Roland James | Systems and methods for automatic synchronization of recently modified data |
US9418181B2 (en) * | 2013-01-09 | 2016-08-16 | Apple Inc. | Simulated input/output devices |
US20140195217A1 (en) * | 2013-01-09 | 2014-07-10 | Apple Inc. | Simulated input/output devices |
US20140337296A1 (en) * | 2013-05-10 | 2014-11-13 | Bryan Knight | Techniques to recover files in a storage network |
US9736023B2 (en) * | 2013-10-30 | 2017-08-15 | Samsung Sds Co., Ltd. | Apparatus and method for changing status of cluster nodes, and recording medium having the program recorded therein |
US20150117258A1 (en) * | 2013-10-30 | 2015-04-30 | Samsung Sds Co., Ltd. | Apparatus and method for changing status of cluster nodes, and recording medium having the program recorded therein |
US10229010B2 (en) * | 2013-11-22 | 2019-03-12 | Netapp, Inc. | Methods for preserving state across a failure and devices thereof |
US9571333B2 (en) * | 2014-02-20 | 2017-02-14 | Sercomm Corporation | Network device and method for maintaining network connection |
US20150236892A1 (en) * | 2014-02-20 | 2015-08-20 | Sercomm Corporation | Network device and method for maintaining network connection |
US9563506B2 (en) | 2014-06-04 | 2017-02-07 | Pure Storage, Inc. | Storage cluster |
US9967342B2 (en) | 2014-06-04 | 2018-05-08 | Pure Storage, Inc. | Storage system architecture |
US10574754B1 (en) | 2014-06-04 | 2020-02-25 | Pure Storage, Inc. | Multi-chassis array with multi-level load balancing |
US9798477B2 (en) | 2014-06-04 | 2017-10-24 | Pure Storage, Inc. | Scalable non-uniform storage sizes |
US10809919B2 (en) | 2014-06-04 | 2020-10-20 | Pure Storage, Inc. | Scalable storage capacities |
US11310317B1 (en) | 2014-06-04 | 2022-04-19 | Pure Storage, Inc. | Efficient load balancing |
US11822444B2 (en) | 2014-06-04 | 2023-11-21 | Pure Storage, Inc. | Data rebuild independent of error detection |
US9836234B2 (en) | 2014-06-04 | 2017-12-05 | Pure Storage, Inc. | Storage cluster |
US11068363B1 (en) | 2014-06-04 | 2021-07-20 | Pure Storage, Inc. | Proactively rebuilding data in a storage cluster |
US9934089B2 (en) | 2014-06-04 | 2018-04-03 | Pure Storage, Inc. | Storage cluster |
US11057468B1 (en) | 2014-06-04 | 2021-07-06 | Pure Storage, Inc. | Vast data storage system |
US11714715B2 (en) | 2014-06-04 | 2023-08-01 | Pure Storage, Inc. | Storage system accommodating varying storage capacities |
US11036583B2 (en) | 2014-06-04 | 2021-06-15 | Pure Storage, Inc. | Rebuilding data across storage nodes |
US11652884B2 (en) | 2014-06-04 | 2023-05-16 | Pure Storage, Inc. | Customized hash algorithms |
US11671496B2 (en) | 2014-06-04 | 2023-06-06 | Pure Storage, Inc. | Load balancing for distributed computing |
US11677825B2 (en) | 2014-06-04 | 2023-06-13 | Pure Storage, Inc. | Optimized communication pathways in a vast storage system |
US11385799B2 (en) | 2014-06-04 | 2022-07-12 | Pure Storage, Inc. | Storage nodes supporting multiple erasure coding schemes |
US10838633B2 (en) | 2014-06-04 | 2020-11-17 | Pure Storage, Inc. | Configurable hyperconverged multi-tenant storage system |
US10671480B2 (en) | 2014-06-04 | 2020-06-02 | Pure Storage, Inc. | Utilization of erasure codes in a storage system |
US10430306B2 (en) | 2014-06-04 | 2019-10-01 | Pure Storage, Inc. | Mechanism for persisting messages in a storage system |
US11960371B2 (en) | 2014-06-04 | 2024-04-16 | Pure Storage, Inc. | Message persistence in a zoned system |
US11593203B2 (en) | 2014-06-04 | 2023-02-28 | Pure Storage, Inc. | Coexisting differing erasure codes |
US10379763B2 (en) | 2014-06-04 | 2019-08-13 | Pure Storage, Inc. | Hyperconverged storage system with distributable processing power |
US11399063B2 (en) | 2014-06-04 | 2022-07-26 | Pure Storage, Inc. | Network authentication for a storage system |
US9525738B2 (en) | 2014-06-04 | 2016-12-20 | Pure Storage, Inc. | Storage system architecture |
US10303547B2 (en) | 2014-06-04 | 2019-05-28 | Pure Storage, Inc. | Rebuilding data across storage nodes |
US11500552B2 (en) | 2014-06-04 | 2022-11-15 | Pure Storage, Inc. | Configurable hyperconverged multi-tenant storage system |
US11138082B2 (en) | 2014-06-04 | 2021-10-05 | Pure Storage, Inc. | Action determination based on redundancy level |
US11886308B2 (en) | 2014-07-02 | 2024-01-30 | Pure Storage, Inc. | Dual class of service for unified file and object messaging |
US10877861B2 (en) | 2014-07-02 | 2020-12-29 | Pure Storage, Inc. | Remote procedure call cache for distributed system |
US11079962B2 (en) | 2014-07-02 | 2021-08-03 | Pure Storage, Inc. | Addressable non-volatile random access memory |
US11922046B2 (en) | 2014-07-02 | 2024-03-05 | Pure Storage, Inc. | Erasure coded data within zoned drives |
US10572176B2 (en) | 2014-07-02 | 2020-02-25 | Pure Storage, Inc. | Storage cluster operation using erasure coded data |
US9836245B2 (en) | 2014-07-02 | 2017-12-05 | Pure Storage, Inc. | Non-volatile RAM and flash memory in a non-volatile solid-state storage |
US10817431B2 (en) | 2014-07-02 | 2020-10-27 | Pure Storage, Inc. | Distributed storage addressing |
US11385979B2 (en) | 2014-07-02 | 2022-07-12 | Pure Storage, Inc. | Mirrored remote procedure call cache |
US11604598B2 (en) | 2014-07-02 | 2023-03-14 | Pure Storage, Inc. | Storage cluster with zoned drives |
US10114714B2 (en) | 2014-07-02 | 2018-10-30 | Pure Storage, Inc. | Redundant, fault-tolerant, distributed remote procedure call cache in a storage system |
US10114757B2 (en) | 2014-07-02 | 2018-10-30 | Pure Storage, Inc. | Nonrepeating identifiers in an address space of a non-volatile solid-state storage |
US10372617B2 (en) | 2014-07-02 | 2019-08-06 | Pure Storage, Inc. | Nonrepeating identifiers in an address space of a non-volatile solid-state storage |
US11550752B2 (en) | 2014-07-03 | 2023-01-10 | Pure Storage, Inc. | Administrative actions via a reserved filename |
US10185506B2 (en) | 2014-07-03 | 2019-01-22 | Pure Storage, Inc. | Scheduling policy for queues in a non-volatile solid-state storage |
US10198380B1 (en) | 2014-07-03 | 2019-02-05 | Pure Storage, Inc. | Direct memory access data movement |
US11928076B2 (en) | 2014-07-03 | 2024-03-12 | Pure Storage, Inc. | Actions for reserved filenames |
US10691812B2 (en) | 2014-07-03 | 2020-06-23 | Pure Storage, Inc. | Secure data replication in a storage grid |
US9747229B1 (en) | 2014-07-03 | 2017-08-29 | Pure Storage, Inc. | Self-describing data format for DMA in a non-volatile solid-state storage |
US11494498B2 (en) | 2014-07-03 | 2022-11-08 | Pure Storage, Inc. | Storage data decryption |
US11392522B2 (en) | 2014-07-03 | 2022-07-19 | Pure Storage, Inc. | Transfer of segmented data |
US10853285B2 (en) | 2014-07-03 | 2020-12-01 | Pure Storage, Inc. | Direct memory access data format |
US10067841B2 (en) * | 2014-07-08 | 2018-09-04 | Netapp, Inc. | Facilitating n-way high availability storage services |
US10990283B2 (en) | 2014-08-07 | 2021-04-27 | Pure Storage, Inc. | Proactive data rebuild based on queue feedback |
US11204830B2 (en) | 2014-08-07 | 2021-12-21 | Pure Storage, Inc. | Die-level monitoring in a storage cluster |
US10579474B2 (en) | 2014-08-07 | 2020-03-03 | Pure Storage, Inc. | Die-level monitoring in a storage cluster |
US11620197B2 (en) | 2014-08-07 | 2023-04-04 | Pure Storage, Inc. | Recovering error corrected data |
US10324812B2 (en) | 2014-08-07 | 2019-06-18 | Pure Storage, Inc. | Error recovery in a storage cluster |
US11656939B2 (en) | 2014-08-07 | 2023-05-23 | Pure Storage, Inc. | Storage cluster memory characterization |
US10983866B2 (en) | 2014-08-07 | 2021-04-20 | Pure Storage, Inc. | Mapping defective memory in a storage system |
US11442625B2 (en) | 2014-08-07 | 2022-09-13 | Pure Storage, Inc. | Multiple read data paths in a storage system |
US11544143B2 (en) | 2014-08-07 | 2023-01-03 | Pure Storage, Inc. | Increased data reliability |
US10528419B2 (en) | 2014-08-07 | 2020-01-07 | Pure Storage, Inc. | Mapping around defective flash memory of a storage array |
US10216411B2 (en) | 2014-08-07 | 2019-02-26 | Pure Storage, Inc. | Data rebuild on feedback from a queue in a non-volatile solid-state storage |
US11080154B2 (en) | 2014-08-07 | 2021-08-03 | Pure Storage, Inc. | Recovering error corrected data |
US11188476B1 (en) | 2014-08-20 | 2021-11-30 | Pure Storage, Inc. | Virtual addressing in a storage system |
US11734186B2 (en) | 2014-08-20 | 2023-08-22 | Pure Storage, Inc. | Heterogeneous storage with preserved addressing |
US10498580B1 (en) | 2014-08-20 | 2019-12-03 | Pure Storage, Inc. | Assigning addresses in a storage system |
US11023340B2 (en) * | 2015-02-19 | 2021-06-01 | Netapp, Inc. | Layering a distributed storage system into storage groups and virtual chunk spaces for efficient data recovery |
US10795789B2 (en) * | 2015-02-19 | 2020-10-06 | Netapp, Inc. | Efficient recovery of erasure coded data |
US10503621B2 (en) * | 2015-02-19 | 2019-12-10 | Netapp, Inc. | Manager election for erasure coding groups |
US20170242732A1 (en) * | 2015-02-19 | 2017-08-24 | Netapp, Inc. | Efficient recovery of erasure coded data |
US10817393B2 (en) * | 2015-02-19 | 2020-10-27 | Netapp, Inc. | Manager election for erasure coding groups |
US20170242770A1 (en) * | 2015-02-19 | 2017-08-24 | Netapp, Inc. | Manager election for erasure coding groups |
US20190251009A1 (en) * | 2015-02-19 | 2019-08-15 | Netapp, Inc. | Manager election for erasure coding groups |
US10353740B2 (en) * | 2015-02-19 | 2019-07-16 | Netapp, Inc. | Efficient recovery of erasure coded data |
US9948615B1 (en) | 2015-03-16 | 2018-04-17 | Pure Storage, Inc. | Increased storage unit encryption based on loss of trust |
US11294893B2 (en) | 2015-03-20 | 2022-04-05 | Pure Storage, Inc. | Aggregation of queries |
US11775428B2 (en) | 2015-03-26 | 2023-10-03 | Pure Storage, Inc. | Deletion immunity for unreferenced data |
US9940234B2 (en) | 2015-03-26 | 2018-04-10 | Pure Storage, Inc. | Aggressive data deduplication using lazy garbage collection |
US10853243B2 (en) | 2015-03-26 | 2020-12-01 | Pure Storage, Inc. | Aggressive data deduplication using lazy garbage collection |
US10353635B2 (en) | 2015-03-27 | 2019-07-16 | Pure Storage, Inc. | Data control across multiple logical arrays |
US11188269B2 (en) | 2015-03-27 | 2021-11-30 | Pure Storage, Inc. | Configuration for multiple logical storage arrays |
US20160283141A1 (en) * | 2015-03-27 | 2016-09-29 | Pure Storage, Inc. | Data striping across storage nodes that are assigned to multiple logical arrays |
US10082985B2 (en) * | 2015-03-27 | 2018-09-25 | Pure Storage, Inc. | Data striping across storage nodes that are assigned to multiple logical arrays |
US11240307B2 (en) | 2015-04-09 | 2022-02-01 | Pure Storage, Inc. | Multiple communication paths in a storage system |
US10178169B2 (en) | 2015-04-09 | 2019-01-08 | Pure Storage, Inc. | Point to point based backend communication layer for storage processing |
US11722567B2 (en) | 2015-04-09 | 2023-08-08 | Pure Storage, Inc. | Communication paths for storage devices having differing capacities |
US10693964B2 (en) | 2015-04-09 | 2020-06-23 | Pure Storage, Inc. | Storage unit communication within a storage system |
US11144212B2 (en) | 2015-04-10 | 2021-10-12 | Pure Storage, Inc. | Independent partitions within an array |
US9672125B2 (en) | 2015-04-10 | 2017-06-06 | Pure Storage, Inc. | Ability to partition an array into two or more logical arrays with independently running software |
US10496295B2 (en) | 2015-04-10 | 2019-12-03 | Pure Storage, Inc. | Representing a storage array as two or more logical arrays with respective virtual local area networks (VLANS) |
US11231956B2 (en) | 2015-05-19 | 2022-01-25 | Pure Storage, Inc. | Committed transactions in a storage system |
US10140149B1 (en) | 2015-05-19 | 2018-11-27 | Pure Storage, Inc. | Transactional commits with hardware assists in remote memory |
US10712942B2 (en) | 2015-05-27 | 2020-07-14 | Pure Storage, Inc. | Parallel update to maintain coherency |
US9817576B2 (en) | 2015-05-27 | 2017-11-14 | Pure Storage, Inc. | Parallel update to NVRAM |
US10977128B1 (en) | 2015-06-16 | 2021-04-13 | Amazon Technologies, Inc. | Adaptive data loss mitigation for redundancy coding systems |
US11675762B2 (en) | 2015-06-26 | 2023-06-13 | Pure Storage, Inc. | Data structures for key management |
US10394762B1 (en) | 2015-07-01 | 2019-08-27 | Amazon Technologies, Inc. | Determining data redundancy in grid encoded data storage systems |
US11704073B2 (en) | 2015-07-13 | 2023-07-18 | Pure Storage, Inc. | Ownership determination for accessing a file |
US10983732B2 (en) | 2015-07-13 | 2021-04-20 | Pure Storage, Inc. | Method and system for accessing a file |
US11232079B2 (en) | 2015-07-16 | 2022-01-25 | Pure Storage, Inc. | Efficient distribution of large directories |
US10108355B2 (en) | 2015-09-01 | 2018-10-23 | Pure Storage, Inc. | Erase block state detection |
US11099749B2 (en) | 2015-09-01 | 2021-08-24 | Pure Storage, Inc. | Erase detection logic for a storage system |
US11740802B2 (en) | 2015-09-01 | 2023-08-29 | Pure Storage, Inc. | Error correction bypass for erased pages |
US11893023B2 (en) | 2015-09-04 | 2024-02-06 | Pure Storage, Inc. | Deterministic searching using compressed indexes |
US11386060B1 (en) | 2015-09-23 | 2022-07-12 | Amazon Technologies, Inc. | Techniques for verifiably processing data in distributed computing systems |
US11489668B2 (en) | 2015-09-30 | 2022-11-01 | Pure Storage, Inc. | Secret regeneration in a storage system |
US11971828B2 (en) | 2015-09-30 | 2024-04-30 | Pure Storage, Inc. | Logic module for use with encoded instructions |
US11567917B2 (en) | 2015-09-30 | 2023-01-31 | Pure Storage, Inc. | Writing data and metadata into storage |
US10887099B2 (en) | 2015-09-30 | 2021-01-05 | Pure Storage, Inc. | Data encryption in a distributed system |
US10211983B2 (en) | 2015-09-30 | 2019-02-19 | Pure Storage, Inc. | Resharing of a split secret |
US10853266B2 (en) | 2015-09-30 | 2020-12-01 | Pure Storage, Inc. | Hardware assisted data lookup methods |
US9768953B2 (en) | 2015-09-30 | 2017-09-19 | Pure Storage, Inc. | Resharing of a split secret |
US11838412B2 (en) | 2015-09-30 | 2023-12-05 | Pure Storage, Inc. | Secret regeneration from distributed shares |
US11070382B2 (en) | 2015-10-23 | 2021-07-20 | Pure Storage, Inc. | Communication in a distributed architecture |
US10277408B2 (en) | 2015-10-23 | 2019-04-30 | Pure Storage, Inc. | Token based communication |
US11582046B2 (en) | 2015-10-23 | 2023-02-14 | Pure Storage, Inc. | Storage system communication |
US9843453B2 (en) | 2015-10-23 | 2017-12-12 | Pure Storage, Inc. | Authorizing I/O commands with I/O tokens |
US10394789B1 (en) | 2015-12-07 | 2019-08-27 | Amazon Technologies, Inc. | Techniques and systems for scalable request handling in data processing systems |
US11537587B2 (en) | 2015-12-14 | 2022-12-27 | Amazon Technologies, Inc. | Techniques and systems for storage and processing of operational data |
US10642813B1 (en) | 2015-12-14 | 2020-05-05 | Amazon Technologies, Inc. | Techniques and systems for storage and processing of operational data |
US10324790B1 (en) * | 2015-12-17 | 2019-06-18 | Amazon Technologies, Inc. | Flexible data storage device mapping for data storage systems |
US11204701B2 (en) | 2015-12-22 | 2021-12-21 | Pure Storage, Inc. | Token based transactions |
US10599348B2 (en) | 2015-12-22 | 2020-03-24 | Pure Storage, Inc. | Distributed transactions with token-associated execution |
US10007457B2 (en) | 2015-12-22 | 2018-06-26 | Pure Storage, Inc. | Distributed transactions with token-associated execution |
US10592336B1 (en) | 2016-03-24 | 2020-03-17 | Amazon Technologies, Inc. | Layered indexing for asynchronous retrieval of redundancy coded data |
US10678664B1 (en) | 2016-03-28 | 2020-06-09 | Amazon Technologies, Inc. | Hybridized storage operation for redundancy coded data storage systems |
US11113161B2 (en) | 2016-03-28 | 2021-09-07 | Amazon Technologies, Inc. | Local storage clustering for redundancy coded data storage system |
US20170286446A1 (en) * | 2016-04-01 | 2017-10-05 | Tuxera Corporation | Systems and methods for enabling modifications of multiple data objects within a file system volume |
US10496607B2 (en) * | 2016-04-01 | 2019-12-03 | Tuxera Inc. | Systems and methods for enabling modifications of multiple data objects within a file system volume |
US11550473B2 (en) | 2016-05-03 | 2023-01-10 | Pure Storage, Inc. | High-availability storage array |
US10649659B2 (en) | 2016-05-03 | 2020-05-12 | Pure Storage, Inc. | Scaleable storage array |
US10261690B1 (en) | 2016-05-03 | 2019-04-16 | Pure Storage, Inc. | Systems and methods for operating a storage system |
US11847320B2 (en) | 2016-05-03 | 2023-12-19 | Pure Storage, Inc. | Reassignment of requests for high availability |
US20180248938A1 (en) * | 2016-05-24 | 2018-08-30 | International Business Machines Corporation | Cooperative download among low-end devices under resource constrained environment |
US20170346887A1 (en) * | 2016-05-24 | 2017-11-30 | International Business Machines Corporation | Cooperative download among low-end devices under resource constrained environment |
US10652324B2 (en) * | 2016-05-24 | 2020-05-12 | International Business Machines Corporation | Cooperative download among low-end devices under resource constrained environment |
US9961139B2 (en) * | 2016-05-24 | 2018-05-01 | International Business Machines Corporation | Cooperative download among low-end devices under resource constrained environment |
US11861188B2 (en) | 2016-07-19 | 2024-01-02 | Pure Storage, Inc. | System having modular accelerators |
US10768819B2 (en) | 2016-07-22 | 2020-09-08 | Pure Storage, Inc. | Hardware support for non-disruptive upgrades |
US10831594B2 (en) | 2016-07-22 | 2020-11-10 | Pure Storage, Inc. | Optimize data protection layouts based on distributed flash wear leveling |
US11449232B1 (en) | 2016-07-22 | 2022-09-20 | Pure Storage, Inc. | Optimal scheduling of flash operations |
US11409437B2 (en) | 2016-07-22 | 2022-08-09 | Pure Storage, Inc. | Persisting configuration information |
US11886288B2 (en) | 2016-07-22 | 2024-01-30 | Pure Storage, Inc. | Optimize data protection layouts based on distributed flash wear leveling |
US11604690B2 (en) | 2016-07-24 | 2023-03-14 | Pure Storage, Inc. | Online failure span determination |
US10216420B1 (en) | 2016-07-24 | 2019-02-26 | Pure Storage, Inc. | Calibration of flash channels in SSD |
US11080155B2 (en) | 2016-07-24 | 2021-08-03 | Pure Storage, Inc. | Identifying error types among flash memory |
US10366004B2 (en) | 2016-07-26 | 2019-07-30 | Pure Storage, Inc. | Storage system with elective garbage collection to reduce flash contention |
US11886334B2 (en) | 2016-07-26 | 2024-01-30 | Pure Storage, Inc. | Optimizing spool and memory space management |
US11797212B2 (en) | 2016-07-26 | 2023-10-24 | Pure Storage, Inc. | Data migration for zoned drives |
US11340821B2 (en) | 2016-07-26 | 2022-05-24 | Pure Storage, Inc. | Adjustable migration utilization |
US11734169B2 (en) | 2016-07-26 | 2023-08-22 | Pure Storage, Inc. | Optimizing spool and memory space management |
US11030090B2 (en) | 2016-07-26 | 2021-06-08 | Pure Storage, Inc. | Adaptive data migration |
US10776034B2 (en) | 2016-07-26 | 2020-09-15 | Pure Storage, Inc. | Adaptive data migration |
US10203903B2 (en) | 2016-07-26 | 2019-02-12 | Pure Storage, Inc. | Geometry based, space aware shelf/writegroup evacuation |
US11301147B2 (en) | 2016-09-15 | 2022-04-12 | Pure Storage, Inc. | Adaptive concurrency for write persistence |
US11422719B2 (en) | 2016-09-15 | 2022-08-23 | Pure Storage, Inc. | Distributed file deletion and truncation |
US11656768B2 (en) | 2016-09-15 | 2023-05-23 | Pure Storage, Inc. | File deletion in a distributed system |
US10678452B2 (en) | 2016-09-15 | 2020-06-09 | Pure Storage, Inc. | Distributed deletion of a file and directory hierarchy |
US11922033B2 (en) | 2016-09-15 | 2024-03-05 | Pure Storage, Inc. | Batch data deletion |
US11137980B1 (en) | 2016-09-27 | 2021-10-05 | Amazon Technologies, Inc. | Monotonic time-based data storage |
US10810157B1 (en) | 2016-09-28 | 2020-10-20 | Amazon Technologies, Inc. | Command aggregation for data storage operations |
US11281624B1 (en) | 2016-09-28 | 2022-03-22 | Amazon Technologies, Inc. | Client-based batching of data payload |
US10437790B1 (en) | 2016-09-28 | 2019-10-08 | Amazon Technologies, Inc. | Contextual optimization for data storage systems |
US11204895B1 (en) | 2016-09-28 | 2021-12-21 | Amazon Technologies, Inc. | Data payload clustering for data storage systems |
US10496327B1 (en) | 2016-09-28 | 2019-12-03 | Amazon Technologies, Inc. | Command parallelization for data storage systems |
US10657097B1 (en) | 2016-09-28 | 2020-05-19 | Amazon Technologies, Inc. | Data payload aggregation for data storage systems |
US10614239B2 (en) | 2016-09-30 | 2020-04-07 | Amazon Technologies, Inc. | Immutable cryptographically secured ledger-backed databases |
US11922070B2 (en) | 2016-10-04 | 2024-03-05 | Pure Storage, Inc. | Granting access to a storage device based on reservations |
US11581943B2 (en) | 2016-10-04 | 2023-02-14 | Pure Storage, Inc. | Queues reserved for direct access via a user application |
US11995318B2 (en) | 2016-10-28 | 2024-05-28 | Pure Storage, Inc. | Deallocated block determination |
US11269888B1 (en) | 2016-11-28 | 2022-03-08 | Amazon Technologies, Inc. | Archival data storage for structured data |
US11842053B2 (en) | 2016-12-19 | 2023-12-12 | Pure Storage, Inc. | Zone namespace |
US11307998B2 (en) | 2017-01-09 | 2022-04-19 | Pure Storage, Inc. | Storage efficiency of encrypted host system data |
US11762781B2 (en) | 2017-01-09 | 2023-09-19 | Pure Storage, Inc. | Providing end-to-end encryption for data stored in a storage system |
US11289169B2 (en) | 2017-01-13 | 2022-03-29 | Pure Storage, Inc. | Cycled background reads |
US11955187B2 (en) | 2017-01-13 | 2024-04-09 | Pure Storage, Inc. | Refresh of differing capacity NAND |
US10650902B2 (en) | 2017-01-13 | 2020-05-12 | Pure Storage, Inc. | Method for processing blocks of flash memory |
US10979223B2 (en) | 2017-01-31 | 2021-04-13 | Pure Storage, Inc. | Separate encryption for a solid-state drive |
US10942869B2 (en) | 2017-03-30 | 2021-03-09 | Pure Storage, Inc. | Efficient coding in a storage system |
US11449485B1 (en) | 2017-03-30 | 2022-09-20 | Pure Storage, Inc. | Sequence invalidation consolidation in a storage system |
US10528488B1 (en) | 2017-03-30 | 2020-01-07 | Pure Storage, Inc. | Efficient name coding |
US11592985B2 (en) | 2017-04-05 | 2023-02-28 | Pure Storage, Inc. | Mapping LUNs in a storage memory |
US11016667B1 (en) | 2017-04-05 | 2021-05-25 | Pure Storage, Inc. | Efficient mapping for LUNs in storage memory with holes in address space |
US11869583B2 (en) | 2017-04-27 | 2024-01-09 | Pure Storage, Inc. | Page write requirements for differing types of flash memory |
US10141050B1 (en) | 2017-04-27 | 2018-11-27 | Pure Storage, Inc. | Page writes for triple level cell flash memory |
US10944671B2 (en) | 2017-04-27 | 2021-03-09 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US11722455B2 (en) | 2017-04-27 | 2023-08-08 | Pure Storage, Inc. | Storage cluster address resolution |
US11467913B1 (en) | 2017-06-07 | 2022-10-11 | Pure Storage, Inc. | Snapshots with crash consistency in a storage system |
US11068389B2 (en) | 2017-06-11 | 2021-07-20 | Pure Storage, Inc. | Data resiliency with heterogeneous storage |
US11138103B1 (en) | 2017-06-11 | 2021-10-05 | Pure Storage, Inc. | Resiliency groups |
US11782625B2 (en) | 2017-06-11 | 2023-10-10 | Pure Storage, Inc. | Heterogeneity supportive resiliency groups |
US11947814B2 (en) | 2017-06-11 | 2024-04-02 | Pure Storage, Inc. | Optimizing resiliency group formation stability |
US11190580B2 (en) | 2017-07-03 | 2021-11-30 | Pure Storage, Inc. | Stateful connection resets |
US11689610B2 (en) | 2017-07-03 | 2023-06-27 | Pure Storage, Inc. | Load balancing reset packets |
US11714708B2 (en) | 2017-07-31 | 2023-08-01 | Pure Storage, Inc. | Intra-device redundancy scheme |
US10877827B2 (en) | 2017-09-15 | 2020-12-29 | Pure Storage, Inc. | Read voltage optimization |
US10210926B1 (en) | 2017-09-15 | 2019-02-19 | Pure Storage, Inc. | Tracking of optimum read voltage thresholds in nand flash devices |
US11704066B2 (en) | 2017-10-31 | 2023-07-18 | Pure Storage, Inc. | Heterogeneous erase blocks |
US11024390B1 (en) | 2017-10-31 | 2021-06-01 | Pure Storage, Inc. | Overlapping RAID groups |
US10545687B1 (en) | 2017-10-31 | 2020-01-28 | Pure Storage, Inc. | Data rebuild when changing erase block sizes during drive replacement |
US10496330B1 (en) | 2017-10-31 | 2019-12-03 | Pure Storage, Inc. | Using flash storage devices with different sized erase blocks |
US11604585B2 (en) | 2017-10-31 | 2023-03-14 | Pure Storage, Inc. | Data rebuild when changing erase block sizes during drive replacement |
US11074016B2 (en) | 2017-10-31 | 2021-07-27 | Pure Storage, Inc. | Using flash storage devices with different sized erase blocks |
US10884919B2 (en) | 2017-10-31 | 2021-01-05 | Pure Storage, Inc. | Memory management in a storage system |
US11086532B2 (en) | 2017-10-31 | 2021-08-10 | Pure Storage, Inc. | Data rebuild with changing erase block sizes |
US10515701B1 (en) | 2017-10-31 | 2019-12-24 | Pure Storage, Inc. | Overlapping raid groups |
US10860475B1 (en) | 2017-11-17 | 2020-12-08 | Pure Storage, Inc. | Hybrid flash translation layer |
US11741003B2 (en) | 2017-11-17 | 2023-08-29 | Pure Storage, Inc. | Write granularity for storage system |
US11275681B1 (en) | 2017-11-17 | 2022-03-15 | Pure Storage, Inc. | Segmented write requests |
US10990566B1 (en) | 2017-11-20 | 2021-04-27 | Pure Storage, Inc. | Persistent file locks in a storage system |
US10719265B1 (en) | 2017-12-08 | 2020-07-21 | Pure Storage, Inc. | Centralized, quorum-aware handling of device reservation requests in a storage system |
US10929053B2 (en) | 2017-12-08 | 2021-02-23 | Pure Storage, Inc. | Safe destructive actions on drives |
US10705732B1 (en) | 2017-12-08 | 2020-07-07 | Pure Storage, Inc. | Multiple-apartment aware offlining of devices for disruptive and destructive operations |
US10614254B2 (en) * | 2017-12-12 | 2020-04-07 | John Almeida | Virus immune computer system and method |
US11132438B2 (en) * | 2017-12-12 | 2021-09-28 | Atense, Inc. | Virus immune computer system and method |
US10970421B2 (en) * | 2017-12-12 | 2021-04-06 | John Almeida | Virus immune computer system and method |
US10817623B2 (en) * | 2017-12-12 | 2020-10-27 | John Almeida | Virus immune computer system and method |
US10642970B2 (en) * | 2017-12-12 | 2020-05-05 | John Almeida | Virus immune computer system and method |
US11782614B1 (en) | 2017-12-21 | 2023-10-10 | Pure Storage, Inc. | Encrypting data to optimize data reduction |
US10929031B2 (en) | 2017-12-21 | 2021-02-23 | Pure Storage, Inc. | Maximizing data reduction in a partially encrypted volume |
US10733053B1 (en) | 2018-01-31 | 2020-08-04 | Pure Storage, Inc. | Disaster recovery for high-bandwidth distributed archives |
US11966841B2 (en) | 2018-01-31 | 2024-04-23 | Pure Storage, Inc. | Search acceleration for artificial intelligence |
US10976948B1 (en) | 2018-01-31 | 2021-04-13 | Pure Storage, Inc. | Cluster expansion mechanism |
US11442645B2 (en) | 2018-01-31 | 2022-09-13 | Pure Storage, Inc. | Distributed storage system expansion mechanism |
US11797211B2 (en) | 2018-01-31 | 2023-10-24 | Pure Storage, Inc. | Expanding data structures in a storage system |
US10467527B1 (en) | 2018-01-31 | 2019-11-05 | Pure Storage, Inc. | Method and apparatus for artificial intelligence acceleration |
US10915813B2 (en) | 2018-01-31 | 2021-02-09 | Pure Storage, Inc. | Search acceleration for artificial intelligence |
US11847013B2 (en) | 2018-02-18 | 2023-12-19 | Pure Storage, Inc. | Readable data determination |
US11494109B1 (en) | 2018-02-22 | 2022-11-08 | Pure Storage, Inc. | Erase block trimming for heterogenous flash memory storage devices |
US11995336B2 (en) | 2018-04-25 | 2024-05-28 | Pure Storage, Inc. | Bucket views |
US10853146B1 (en) | 2018-04-27 | 2020-12-01 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US11836348B2 (en) | 2018-04-27 | 2023-12-05 | Pure Storage, Inc. | Upgrade for system with differing capacities |
US10931450B1 (en) | 2018-04-27 | 2021-02-23 | Pure Storage, Inc. | Distributed, lock-free 2-phase commit of secret shares using multiple stateless controllers |
US11436023B2 (en) | 2018-05-31 | 2022-09-06 | Pure Storage, Inc. | Mechanism for updating host file system and flash translation layer based on underlying NAND technology |
US10866778B2 (en) * | 2018-06-21 | 2020-12-15 | Amadeus S.A.S. | Cross device display synchronization |
US11438279B2 (en) | 2018-07-23 | 2022-09-06 | Pure Storage, Inc. | Non-disruptive conversion of a clustered service from single-chassis to multi-chassis |
US11868309B2 (en) | 2018-09-06 | 2024-01-09 | Pure Storage, Inc. | Queue management for data relocation |
US11500570B2 (en) | 2018-09-06 | 2022-11-15 | Pure Storage, Inc. | Efficient relocation of data utilizing different programming modes |
US11846968B2 (en) | 2018-09-06 | 2023-12-19 | Pure Storage, Inc. | Relocation of data for heterogeneous storage systems |
US11520514B2 (en) | 2018-09-06 | 2022-12-06 | Pure Storage, Inc. | Optimized relocation of data based on data characteristics |
US11354058B2 (en) | 2018-09-06 | 2022-06-07 | Pure Storage, Inc. | Local relocation of data stored at a storage device of a storage system |
US10454498B1 (en) | 2018-10-18 | 2019-10-22 | Pure Storage, Inc. | Fully pipelined hardware engine design for fast and efficient inline lossless data compression |
US10976947B2 (en) | 2018-10-26 | 2021-04-13 | Pure Storage, Inc. | Dynamically selecting segment heights in a heterogeneous RAID group |
US11334254B2 (en) | 2019-03-29 | 2022-05-17 | Pure Storage, Inc. | Reliability based flash page sizing |
US11775189B2 (en) | 2019-04-03 | 2023-10-03 | Pure Storage, Inc. | Segment level heterogeneity |
US11099986B2 (en) | 2019-04-12 | 2021-08-24 | Pure Storage, Inc. | Efficient transfer of memory contents |
US11899582B2 (en) | 2019-04-12 | 2024-02-13 | Pure Storage, Inc. | Efficient memory dump |
US11714572B2 (en) | 2019-06-19 | 2023-08-01 | Pure Storage, Inc. | Optimized data resiliency in a modular storage system |
US11281394B2 (en) | 2019-06-24 | 2022-03-22 | Pure Storage, Inc. | Replication across partitioning schemes in a distributed storage system |
US11822807B2 (en) | 2019-06-24 | 2023-11-21 | Pure Storage, Inc. | Data replication in a storage system |
US11893126B2 (en) | 2019-10-14 | 2024-02-06 | Pure Storage, Inc. | Data deletion for a multi-tenant environment |
US11416144B2 (en) | 2019-12-12 | 2022-08-16 | Pure Storage, Inc. | Dynamic use of segment or zone power loss protection in a flash device |
US11947795B2 (en) | 2019-12-12 | 2024-04-02 | Pure Storage, Inc. | Power loss protection based on write requirements |
US11704192B2 (en) | 2019-12-12 | 2023-07-18 | Pure Storage, Inc. | Budgeting open blocks based on power loss protection |
US11847331B2 (en) | 2019-12-12 | 2023-12-19 | Pure Storage, Inc. | Budgeting open blocks of a storage unit based on power loss prevention |
US11188432B2 (en) | 2020-02-28 | 2021-11-30 | Pure Storage, Inc. | Data resiliency by partially deallocating data blocks of a storage device |
US11656961B2 (en) | 2020-02-28 | 2023-05-23 | Pure Storage, Inc. | Deallocation within a storage system |
US11507297B2 (en) | 2020-04-15 | 2022-11-22 | Pure Storage, Inc. | Efficient management of optimal read levels for flash storage systems |
US11256587B2 (en) | 2020-04-17 | 2022-02-22 | Pure Storage, Inc. | Intelligent access to a storage device |
US11474986B2 (en) | 2020-04-24 | 2022-10-18 | Pure Storage, Inc. | Utilizing machine learning to streamline telemetry processing of storage media |
US11416338B2 (en) | 2020-04-24 | 2022-08-16 | Pure Storage, Inc. | Resiliency scheme to enhance storage performance |
US11775491B2 (en) | 2020-04-24 | 2023-10-03 | Pure Storage, Inc. | Machine learning model for storage system |
US11768763B2 (en) | 2020-07-08 | 2023-09-26 | Pure Storage, Inc. | Flash secure erase |
US11681448B2 (en) | 2020-09-08 | 2023-06-20 | Pure Storage, Inc. | Multiple device IDs in a multi-fabric module storage system |
US11513974B2 (en) | 2020-09-08 | 2022-11-29 | Pure Storage, Inc. | Using nonce to control erasure of data blocks of a multi-controller storage system |
US12001688B2 (en) | 2020-09-28 | 2024-06-04 | Pure Storage, Inc. | Utilizing data views to optimize secure data access in a storage system |
US11487455B2 (en) | 2020-12-17 | 2022-11-01 | Pure Storage, Inc. | Dynamic block allocation to optimize storage system performance |
US11789626B2 (en) | 2020-12-17 | 2023-10-17 | Pure Storage, Inc. | Optimizing block allocation in a data storage system |
US11614880B2 (en) | 2020-12-31 | 2023-03-28 | Pure Storage, Inc. | Storage system with selectable write paths |
US11847324B2 (en) | 2020-12-31 | 2023-12-19 | Pure Storage, Inc. | Optimizing resiliency groups for data regions of a storage system |
US11630593B2 (en) | 2021-03-12 | 2023-04-18 | Pure Storage, Inc. | Inline flash memory qualification in a storage system |
US12001700B2 (en) | 2021-03-18 | 2024-06-04 | Pure Storage, Inc. | Dynamically selecting segment heights in a heterogeneous RAID group |
US11507597B2 (en) | 2021-03-31 | 2022-11-22 | Pure Storage, Inc. | Data replication to meet a recovery point objective |
US11832410B2 (en) | 2021-09-14 | 2023-11-28 | Pure Storage, Inc. | Mechanical energy absorbing bracket apparatus |
US12001684B2 (en) | 2021-09-28 | 2024-06-04 | Pure Storage, Inc. | Optimizing dynamic power loss protection adjustment in a storage system |
US11994723B2 (en) | 2021-12-30 | 2024-05-28 | Pure Storage, Inc. | Ribbon cable alignment apparatus |
US12008266B2 (en) | 2022-04-19 | 2024-06-11 | Pure Storage, Inc. | Efficient read by reconstruction |
Also Published As
Publication number | Publication date |
---|---|
IES20080508A2 (en) | 2008-12-10 |
US20150067093A1 (en) | 2015-03-05 |
US9880753B2 (en) | 2018-01-30 |
US20130145105A1 (en) | 2013-06-06 |
US20080320097A1 (en) | 2008-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9880753B2 (en) | Write requests in a distributed storage system | |
US10289338B2 (en) | Multi-class heterogeneous clients in a filesystem | |
US10534681B2 (en) | Clustered filesystems for mix of trusted and untrusted nodes | |
US9519657B2 (en) | Clustered filesystem with membership version support | |
US6950833B2 (en) | Clustered filesystem | |
US8010558B2 (en) | Relocation of metadata server with outstanding DMAPI requests | |
US20030028514A1 (en) | Extended attribute caching in clustered filesystem | |
US7765329B2 (en) | Messaging between heterogeneous clients of a storage area network | |
US20040210656A1 (en) | Failsafe operation of storage area network | |
US7593968B2 (en) | Recovery and relocation of a distributed name service in a cluster filesystem | |
US20040153841A1 (en) | Failure hierarchy in a cluster filesystem | |
IES85057Y1 (en) | Network distributed file system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: TENOWARE R&D LTD., IRELAND; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; Assignors: SAWICKI, ANTONI; NOWAK, TOMASZ; MURPHY, KELLY; Reel/Frame: 028831/0681; Effective date: 2012-08-21 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |