US9436722B1 - Parallel checksumming of data chunks of a shared data object using a log-structured file system - Google Patents
- Publication number
- US9436722B1 (application US13/799,264 / US201313799264A)
- Authority
- US
- United States
- Prior art keywords
- log
- file system
- data
- data chunk
- structured file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G06F16/1858—Parallel file systems, i.e. file systems supporting multiple processors
- G06F17/30371
- G06F16/2365—Ensuring data consistency and integrity
- H04L67/01—Protocols
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
- H04L69/04—Protocols for data compression, e.g. ROHC
Definitions
- the present invention relates to parallel storage in high performance computing environments.
- Parallel storage systems are widely used in many computing environments. Parallel storage systems provide high degrees of concurrency in which many distributed processes within a parallel application simultaneously access a shared file namespace.
- Parallel computing techniques are used in many industries and applications for implementing computationally intensive models or simulations.
- the Department of Energy uses a large number of distributed compute nodes tightly coupled into a supercomputer to model physics experiments.
- parallel computing techniques are often used for computing geological models that help predict the location of natural resources.
- each parallel process generates a portion, referred to as a data chunk, of a shared data object.
- Checksumming is a common technique to ensure data integrity.
- a checksum or hash sum is a fixed-size value computed from a block of digital data to detect errors that may have been introduced during transmission or storage. The integrity of the data can be checked at any later time by recomputing the checksum and comparing the recomputed checksum with the stored checksum. If the two checksum values match, then the data was likely not altered.
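As an illustration, the following is a minimal sketch of this compute-then-verify cycle, using CRC32 as a stand-in for whatever fixed-size checksum function a given system employs (the patent does not mandate a particular function):

```python
import zlib

def compute_checksum(data: bytes) -> int:
    # A fixed-size value computed from a block of digital data;
    # CRC32 is an illustrative choice, not one specified by the patent.
    return zlib.crc32(data)

def verify_integrity(data: bytes, stored_checksum: int) -> bool:
    # Recompute the checksum and compare it with the stored checksum.
    return compute_checksum(data) == stored_checksum

block = b"a block of digital data"
stored = compute_checksum(block)
assert verify_integrity(block, stored)                      # unaltered data matches
assert not verify_integrity(b"corrupted" + block, stored)   # altered data does not
```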
- Embodiments of the present invention provide improved techniques for generating checksum values and for verifying the integrity of data.
- a method is provided for a client executing on one or more of a compute node and a burst buffer node in a parallel computing system to store a data chunk generated by the parallel computing system to a shared data object on a storage node in the parallel computing system.
- the client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object.
- the data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object.
- the storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client executing on a compute node or a burst buffer node.
- the checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
- illustrative embodiments of the invention provide techniques for parallel checksumming of data being written to a shared object.
- FIG. 1 illustrates an exemplary conventional technique for generating checksums of data being stored to a shared object by a plurality of processes in a storage system
- FIG. 2 illustrates an exemplary distributed technique for generating checksums of data being stored to a shared object by a plurality of processes in a storage system in accordance with aspects of the present invention
- FIG. 3 illustrates an exemplary alternate distributed technique for generating checksums of data being stored to a shared object by a plurality of processes in a storage system in accordance with an alternate embodiment of the present invention
- FIG. 4 is a flow chart describing an exemplary LSFS checksum process incorporating aspects of the present invention.
- the present invention provides improved techniques for cooperative parallel writing of data to a shared object.
- one aspect of the present invention leverages the parallelism of concurrent writes to a shared object and the high interconnect speed of parallel supercomputer networks to generate the checksum values for the data in parallel as it is written.
- a further aspect of the invention leverages the parallel supercomputer networks to provide improved techniques for verifying the integrity of the checksummed data.
- Embodiments of the present invention will be described herein with reference to exemplary computing systems and data storage systems and associated servers, computers, storage units and devices and other processing devices. It is to be appreciated, however, that embodiments of the invention are not restricted to use with the particular illustrative system and device configurations shown. Moreover, the phrases “computing system” and “data storage system” as used herein are intended to be broadly construed, so as to encompass, for example, private or public cloud computing or storage systems, as well as other types of systems comprising distributed virtual infrastructure. However, a given embodiment may more generally comprise any arrangement of one or more processing devices. As used herein, the term “files” shall include complete files and portions of files, such as sub-files or shards.
- FIG. 1 illustrates an exemplary conventional storage system 100 that employs a conventional technique for generating checksums of data being stored to a shared object 150 by a plurality of processes.
- the exemplary storage system 100 may be implemented, for example, as a Parallel Log-Structured File System (PLFS) to make placement decisions automatically, as described in U.S. patent application Ser. No. 13/536,331, filed Jun. 28, 2012, entitled “Storing Files in a Parallel Computing System Using List-Based Index to Identify Replica Files,” (now U.S. Pat. No. 9,087,075), incorporated by reference herein, or it can be explicitly controlled by the application and administered by a storage daemon.
- the exemplary storage system 100 comprises a plurality of compute nodes 110 - 1 through 110 -N (collectively, compute nodes 110 ) where a distributed application process generates a corresponding portion 120 - 1 through 120 -N of a distributed shared data structure 150 or other information to store.
- the compute nodes 110 optionally store the portions 120 of the distributed data structure 150 in one or more nodes of the exemplary storage system 100 , such as an exemplary flash based storage node 140 .
- the exemplary hierarchical storage tiering system 100 optionally comprises one or more hard disk drives (not shown).
- the compute nodes 110 send their distributed data chunks 120 into a single file 150 .
- the single file 150 is striped into file system defined blocks, and then each block is checksummed.
- existing checksum approaches apply checksums on the shared data structure 150 only after it has been sent to the storage node 140 of the storage system 100 .
- the checksums 160 are applied to offset ranges on the data in sizes that are pre-defined by the file system 100 .
- the offset size of the checksums 160 does not typically align with the size of the data portions 120 (i.e., the file system defined blocks will typically not match the original memory layout).
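For illustration only, here is a minimal sketch of this conventional server-side approach, assuming a hypothetical 4096-byte file system defined block size and CRC32 checksums; the checksum boundaries fall at fixed offsets rather than at the boundaries of the original data portions 120:

```python
import zlib

BLOCK_SIZE = 4096  # hypothetical file system defined block size

def checksum_blocks(shared_file: bytes) -> list:
    # Checksums are applied to fixed offset ranges of the assembled file,
    # regardless of where the original data portions began and ended.
    return [zlib.crc32(shared_file[off:off + BLOCK_SIZE])
            for off in range(0, len(shared_file), BLOCK_SIZE)]

# Three 3000-byte chunks land in one shared file; the 4096-byte checksum
# blocks straddle the chunk boundaries instead of aligning with them.
chunks = [bytes([i]) * 3000 for i in range(3)]
print(checksum_blocks(b"".join(chunks)))  # three checksums for a 9000-byte file
```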
- FIG. 2 illustrates an exemplary storage system 200 that generates checksums of data chunks 220 being stored to a shared object 250 by a plurality of processes in accordance with aspects of the present invention.
- the exemplary storage system 200 may be implemented, for example, as a Parallel Log-Structured File System.
- the exemplary storage system 200 comprises a plurality of compute nodes 210 - 1 through 210 -N (collectively, compute nodes 210 ) where a distributed application process generates a corresponding data chunk portion 220 - 1 through 220 -N (collectively, data chunks 220 ) of a distributed shared data object 250 to store.
- the distributed application executing on a given compute node 210 in the parallel computing system 200 writes and reads the data chunks 220 that are part of the shared data object 250 using a log-structured file system (LSFS) client 205 - 1 through 205 -N executing on the given compute node 210 .
- the compute nodes 210 store the data chunk portions 220 of the distributed data structure 250 in one or more storage nodes of the exemplary storage system 200 , such as an exemplary LSFS server 240 .
- the LSFS server 240 may be implemented, for example, as a flash based storage node.
- the exemplary hierarchical storage tiering system 200 optionally comprises one or more hard disk drives (not shown).
- each LSFS client 205 applies a checksum function to each data chunk 220 to generate a corresponding checksum value 260 - 1 through 260 -N.
- Each data chunk 220 is then stored by the corresponding LSFS client 205 with the corresponding computed checksum 260 on the LSFS server 240 .
- the LSFS client 205 performs a data integrity check on the read operation, where the data chunk 220 and the corresponding checksum 260 are read from the LSFS server 240 and are provided to the corresponding LSFS client 205 on the compute node 210 for the data integrity check before being sent to the application.
- the data integrity check comprises recomputing the checksum (260 recompute ) and comparing the recomputed checksum with the stored checksum (260 stored ). If the two checksum values match, then the data integrity is verified.
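For concreteness, the following is a minimal sketch of this write and read path, with hypothetical LSFSServer and LSFSClient classes standing in for the LSFS server 240 and LSFS clients 205, and CRC32 standing in for the otherwise unspecified checksum function:

```python
import zlib

class LSFSServer:
    """Stands in for the LSFS server 240: logs each chunk with its checksum."""
    def __init__(self):
        self.log = {}  # offset -> (chunk, checksum)

    def store(self, offset, chunk, checksum):
        self.log[offset] = (chunk, checksum)

    def load(self, offset):
        return self.log[offset]

class LSFSClient:
    """Stands in for an LSFS client 205 executing on a compute node 210."""
    def __init__(self, server):
        self.server = server

    def write(self, offset, chunk):
        # The checksum is computed on the compute node, in parallel across
        # the clients, before the chunk is sent to the storage node.
        self.server.store(offset, chunk, zlib.crc32(chunk))

    def read(self, offset):
        # Data integrity check on the read path: recompute and compare
        # before the data is handed to the application.
        chunk, stored = self.server.load(offset)
        if zlib.crc32(chunk) != stored:
            raise IOError("data integrity check failed")
        return chunk

server = LSFSServer()
client = LSFSClient(server)
client.write(0, b"data chunk 220-1")
assert client.read(0) == b"data chunk 220-1"
```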
- FIG. 3 illustrates an exemplary storage system 300 that generates checksums of data chunks 220 being stored to a shared object 250 by a plurality of processes in accordance with an alternate embodiment of the present invention.
- the exemplary storage system 300 may be implemented, for example, as a Parallel Log-Structured File System.
- the exemplary storage system 300 comprises a plurality of compute nodes 210 - 1 through 210 -N (collectively, compute nodes 210 ) where a distributed application process generates a corresponding data chunk portion 220 - 1 through 220 -N (collectively, data chunks 220 ) of a distributed shared data object 250 to store, in a similar manner to FIG. 2 .
- the distributed application executing on a given compute node 210 in the parallel computing system 300 writes and reads the data chunks 220 that are part of the shared data object 250 using a log-structured file system (LSFS) client 205 - 1 through 205 -N executing on the given compute node 210 , in a similar manner to FIG. 2 .
- the compute nodes 210 store the data chunk portions 220 of the distributed data structure 250 in one or more storage nodes of the exemplary storage system 300 , such as an exemplary LSFS server 240 .
- the LSFS server 240 may be implemented, for example, as a flash based storage node.
- the exemplary hierarchical storage tiering system 300 optionally comprises one or more hard disk drives (not shown).
- the exemplary storage system 300 also comprises one or more flash-based burst buffer nodes 310 - 1 through 310 - k that process the data chunks 220 that are written by the LSFS clients 205 to the LSFS server 240 , and are read by the LSFS clients 205 from the LSFS server 240 .
- the exemplary flash-based burst buffer nodes 310 comprise LSFS clients 305 in a similar manner to the LSFS clients 205 of FIG. 2 .
- each burst buffer node 310 applies a checksum function to each data chunk 220 to generate a corresponding checksum value 360 - 1 through 360 -N.
- Each data chunk 220 is then stored with the corresponding computed checksum 360 on the LSFS server 240 , in a similar manner to FIG. 2 .
- the burst buffer node 310 performs a data integrity check on the read operation, where the data chunk 220 and the corresponding checksum 360 are read from the LSFS server 240 and are provided to the burst buffer node 310 for the data integrity check before being sent to the application executing on the compute node 210 .
- the data integrity check comprises recomputing the checksum (360 recompute ) and comparing the recomputed checksum with the stored checksum (360 stored ). If the two checksum values match, then the data integrity is verified.
- FIGS. 2 and 3 can be combined such that checksumming is performed by the LSFS clients 205 executing on the compute nodes 210 and additional more computationally intensive checksumming and/or erasure coding is performed by the burst buffer nodes 310 .
- a distributed signature lookup service can be established across the network of burst buffer nodes 310 to reduce the latency to verify the checksums.
- FIG. 4 is a flow chart describing an exemplary LSFS checksum process 400 incorporating aspects of the present invention.
- the exemplary LSFS checksum process 400 is implemented by the LSFS clients 205 executing on the compute nodes 210 in the embodiment of FIG. 2 and by the flash-based burst buffer nodes 310 in the embodiment of FIG. 3 .
- the exemplary LSFS checksum process 400 initially performs a test during step 410 to determine if the current operation is a read operation or a write operation. If it is determined during step 410 that the current operation is a write operation, then the exemplary LSFS checksum process 400 obtains the data chunk from the application during step 420 . The exemplary LSFS checksum process 400 then computes the checksum for the data chunk during step 430 on the compute nodes 210 or the burst buffer nodes 310 . Finally, the data chunk is stored on the LSFS server 240 with the corresponding checksum as part of the shared object 250 during step 440 .
- if, instead, it is determined during step 410 that the current operation is a read operation, then the data chunk and its stored checksum are read from the LSFS server 240 and the checksum is recomputed. A test is then performed during step 470 to determine if the checksums match. If it is determined during step 470 that the checksums match, then the verified data chunk is provided to the application on the compute node 210 during step 480 . If, however, it is determined during step 470 that the checksums do not match, then the exemplary LSFS checksum process 400 indicates a failure to the application or corrects the error and provides corrected data during step 490 . For example, when the block being read does not exactly match a block that was checksummed but instead comprises pieces from several blocks, the burst buffer layer 310 can check the checksums of the multiple blocks, recompute a new checksum for the block being read, and then send just the block and the new checksum to the compute server 210 .
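The reassembly case in the example above can be sketched as follows, assuming the burst buffer already holds the (chunk, stored checksum) pairs covering the requested range, with CRC32 again as an illustrative checksum function:

```python
import zlib

def read_spanning_block(pieces):
    """pieces: list of (chunk, stored_checksum) pairs covering the read range.
    Each stored piece is verified, then one fresh checksum is computed for
    the assembled block, so only the block and a single checksum travel on
    to the compute server."""
    for chunk, stored in pieces:
        if zlib.crc32(chunk) != stored:
            raise IOError("corrupt piece detected during reassembly")
    block = b"".join(chunk for chunk, _ in pieces)
    return block, zlib.crc32(block)

pieces = [(b"part-1", zlib.crc32(b"part-1")), (b"part-2", zlib.crc32(b"part-2"))]
block, checksum = read_spanning_block(pieces)
assert zlib.crc32(block) == checksum
```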
- the number of compute servers 210 is at least an order of magnitude greater than the number of storage servers 240 in HPC systems; it is thus much faster to perform the checksum computations on the compute servers 210 .
- the checksumming is performed on the data chunks 220 as they are being written by the LSFS client 205 as opposed to when they have been placed into the file 250 by the server 240 .
- the chunks 220 in a log-structured file system retain their original data organization, whereas in existing approaches the data in the chunks will almost always be reorganized into file system defined blocks. This can introduce additional latency, as the file system will either wait for the blocks to be filled or perform the checksumming multiple times, once each time a block is partially filled.
- aspects of the present invention leverage the parallelism of concurrent writes to a shared object and the high interconnect speed of parallel supercomputer networks to improve the generation of checksums during a write operation and to use the checksummed data to improve the data integrity on read operations.
- aspects of the present invention thus recognize that the log-structured file system eliminates the need for artificial file system boundaries because all block sizes perform equally well in a log-structured file system.
- because PLFS files can be shared across many locations, the data processing required to implement these functions can be performed more efficiently when there are multiple nodes cooperating on the data processing operations. Therefore, when this is run on a parallel system with a parallel language, such as MPI, PLFS can provide MPI versions of these functions, allowing it to exploit parallelism for more efficient data processing.
- the storage server node 240 can also optionally check the checksum as a mechanism to detect data corruption during the network transmission on a write operation from the compute nodes 210 to the storage server node 240 . In the event the checksum computed by the storage server node 240 does not match the checksum received from the compute node 210 with the data chunk 220 , the data can be re-transmitted to obtain the uncorrupted data.
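A minimal sketch of this optional server-side check follows, where `transport` is a hypothetical interface exposing recv() and request_retransmit(), and the retry bound is an assumption not taken from the patent:

```python
import zlib

MAX_RETRIES = 3  # hypothetical retry bound; the patent does not specify one

def receive_chunk(transport):
    """Server-side check on the write path. `transport` is a hypothetical
    object exposing recv() -> (chunk, checksum) and request_retransmit()."""
    for _ in range(MAX_RETRIES):
        chunk, checksum = transport.recv()
        # Recompute on arrival; a mismatch means the chunk was corrupted in
        # transit, so the client is asked to re-transmit the data.
        if zlib.crc32(chunk) == checksum:
            return chunk, checksum
        transport.request_retransmit()
    raise IOError("data chunk could not be received uncorrupted")
```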
- Such components can communicate with other elements over any type of network, such as a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, or various portions or combinations of these and other types of networks.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/799,264 US9436722B1 (en) | 2013-03-13 | 2013-03-13 | Parallel checksumming of data chunks of a shared data object using a log-structured file system |
| US13/799,228 US9477682B1 (en) | 2013-03-13 | 2013-03-13 | Parallel compression of data chunks of a shared data object using a log-structured file system |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/799,264 US9436722B1 (en) | 2013-03-13 | 2013-03-13 | Parallel checksumming of data chunks of a shared data object using a log-structured file system |
| US13/799,228 US9477682B1 (en) | 2013-03-13 | 2013-03-13 | Parallel compression of data chunks of a shared data object using a log-structured file system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US9436722B1 true US9436722B1 (en) | 2016-09-06 |
Family
ID=56878232
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/799,228 Active 2034-04-26 US9477682B1 (en) | 2013-03-13 | 2013-03-13 | Parallel compression of data chunks of a shared data object using a log-structured file system |
| US13/799,264 Active 2034-04-16 US9436722B1 (en) | 2013-03-13 | 2013-03-13 | Parallel checksumming of data chunks of a shared data object using a log-structured file system |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/799,228 Active 2034-04-26 US9477682B1 (en) | 2013-03-13 | 2013-03-13 | Parallel compression of data chunks of a shared data object using a log-structured file system |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US9477682B1 (en) |
Cited By (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170024158A1 (en) * | 2015-07-21 | 2017-01-26 | Arm Limited | Method of and apparatus for generating a signature representative of the content of an array of data |
| US10832639B2 (en) * | 2015-07-21 | 2020-11-10 | Arm Limited | Method of and apparatus for generating a signature representative of the content of an array of data |
| US9749418B2 (en) * | 2015-08-06 | 2017-08-29 | Koc University | Efficient dynamic proofs of retrievability |
| US10194156B2 (en) | 2014-07-15 | 2019-01-29 | Arm Limited | Method of and apparatus for generating an output frame |
| CN109614037A (en) * | 2018-11-16 | 2019-04-12 | 新华三技术有限公司成都分公司 | Data routing inspection method, apparatus and distributed memory system |
| WO2022043409A1 (en) * | 2020-08-28 | 2022-03-03 | Siemens Aktiengesellschaft | Computer-implemented method for storing a dataset and computer network |
| US11399015B2 (en) | 2019-06-11 | 2022-07-26 | Bank Of America Corporation | Data security tool |
| US20240111657A1 (en) * | 2022-04-27 | 2024-04-04 | Microsoft Technology Licensing, Llc | Automatic correctness validation of database management systems |
| US12353279B2 (en) | 2023-08-02 | 2025-07-08 | International Business Machines Corporation | Offloading a validity check for a data block from a server to be performed by a client node |
Families Citing this family (49)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10411959B1 (en) | 2014-12-30 | 2019-09-10 | EMC IP Holding Company LLC | Data analytics for the internet of things |
| US10761743B1 (en) | 2017-07-17 | 2020-09-01 | EMC IP Holding Company LLC | Establishing data reliability groups within a geographically distributed data storage environment |
| US10817388B1 (en) | 2017-07-21 | 2020-10-27 | EMC IP Holding Company LLC | Recovery of tree data in a geographically distributed environment |
| US10880040B1 (en) | 2017-10-23 | 2020-12-29 | EMC IP Holding Company LLC | Scale-out distributed erasure coding |
| US10382554B1 (en) | 2018-01-04 | 2019-08-13 | Emc Corporation | Handling deletes with distributed erasure coding |
| US10817374B2 (en) | 2018-04-12 | 2020-10-27 | EMC IP Holding Company LLC | Meta chunks |
| US11301165B2 (en) | 2018-04-26 | 2022-04-12 | International Business Machines Corporation | Accelerating shared file checkpoint with local burst buffers |
| US10579297B2 (en) | 2018-04-27 | 2020-03-03 | EMC IP Holding Company LLC | Scaling-in for geographically diverse storage |
| US11023130B2 (en) | 2018-06-15 | 2021-06-01 | EMC IP Holding Company LLC | Deleting data in a geographically diverse storage construct |
| US10936196B2 (en) | 2018-06-15 | 2021-03-02 | EMC IP Holding Company LLC | Data convolution for geographically diverse storage |
| US11436203B2 (en) | 2018-11-02 | 2022-09-06 | EMC IP Holding Company LLC | Scaling out geographically diverse storage |
| US10901635B2 (en) | 2018-12-04 | 2021-01-26 | EMC IP Holding Company LLC | Mapped redundant array of independent nodes for data storage with high performance using logical columns of the nodes with different widths and different positioning patterns |
| US11119683B2 (en) * | 2018-12-20 | 2021-09-14 | EMC IP Holding Company LLC | Logical compaction of a degraded chunk in a geographically diverse data storage system |
| US10931777B2 (en) | 2018-12-20 | 2021-02-23 | EMC IP Holding Company LLC | Network efficient geographically diverse data storage system employing degraded chunks |
| US10892782B2 (en) | 2018-12-21 | 2021-01-12 | EMC IP Holding Company LLC | Flexible system and method for combining erasure-coded protection sets |
| US11520742B2 (en) | 2018-12-24 | 2022-12-06 | Cloudbrink, Inc. | Data mesh parallel file system caching |
| US11023331B2 (en) | 2019-01-04 | 2021-06-01 | EMC IP Holding Company LLC | Fast recovery of data in a geographically distributed storage environment |
| US10768840B2 (en) | 2019-01-04 | 2020-09-08 | EMC IP Holding Company LLC | Updating protection sets in a geographically distributed storage environment |
| US10942827B2 (en) | 2019-01-22 | 2021-03-09 | EMC IP Holding Company LLC | Replication of data in a geographically distributed storage environment |
| US10936239B2 (en) | 2019-01-29 | 2021-03-02 | EMC IP Holding Company LLC | Cluster contraction of a mapped redundant array of independent nodes |
| US10942825B2 (en) | 2019-01-29 | 2021-03-09 | EMC IP Holding Company LLC | Mitigating real node failure in a mapped redundant array of independent nodes |
| US10846003B2 (en) | 2019-01-29 | 2020-11-24 | EMC IP Holding Company LLC | Doubly mapped redundant array of independent nodes for data storage |
| US10866766B2 (en) | 2019-01-29 | 2020-12-15 | EMC IP Holding Company LLC | Affinity sensitive data convolution for data storage systems |
| US11029865B2 (en) | 2019-04-03 | 2021-06-08 | EMC IP Holding Company LLC | Affinity sensitive storage of data corresponding to a mapped redundant array of independent nodes |
| US10944826B2 (en) | 2019-04-03 | 2021-03-09 | EMC IP Holding Company LLC | Selective instantiation of a storage service for a mapped redundant array of independent nodes |
| US11113146B2 (en) | 2019-04-30 | 2021-09-07 | EMC IP Holding Company LLC | Chunk segment recovery via hierarchical erasure coding in a geographically diverse data storage system |
| US11121727B2 (en) | 2019-04-30 | 2021-09-14 | EMC IP Holding Company LLC | Adaptive data storing for data storage systems employing erasure coding |
| US11119686B2 (en) | 2019-04-30 | 2021-09-14 | EMC IP Holding Company LLC | Preservation of data during scaling of a geographically diverse data storage system |
| US11748004B2 (en) | 2019-05-03 | 2023-09-05 | EMC IP Holding Company LLC | Data replication using active and passive data storage modes |
| US11209996B2 (en) | 2019-07-15 | 2021-12-28 | EMC IP Holding Company LLC | Mapped cluster stretching for increasing workload in a data storage system |
| US11023145B2 (en) | 2019-07-30 | 2021-06-01 | EMC IP Holding Company LLC | Hybrid mapped clusters for data storage |
| US11449399B2 (en) | 2019-07-30 | 2022-09-20 | EMC IP Holding Company LLC | Mitigating real node failure of a doubly mapped redundant array of independent nodes |
| US11228322B2 (en) | 2019-09-13 | 2022-01-18 | EMC IP Holding Company LLC | Rebalancing in a geographically diverse storage system employing erasure coding |
| US11449248B2 (en) | 2019-09-26 | 2022-09-20 | EMC IP Holding Company LLC | Mapped redundant array of independent data storage regions |
| US11288139B2 (en) | 2019-10-31 | 2022-03-29 | EMC IP Holding Company LLC | Two-step recovery employing erasure coding in a geographically diverse data storage system |
| US11119690B2 (en) | 2019-10-31 | 2021-09-14 | EMC IP Holding Company LLC | Consolidation of protection sets in a geographically diverse data storage environment |
| US11435910B2 (en) | 2019-10-31 | 2022-09-06 | EMC IP Holding Company LLC | Heterogeneous mapped redundant array of independent nodes for data storage |
| US11144207B2 (en) * | 2019-11-07 | 2021-10-12 | International Business Machines Corporation | Accelerating memory compression of a physically scattered buffer |
| US11435957B2 (en) | 2019-11-27 | 2022-09-06 | EMC IP Holding Company LLC | Selective instantiation of a storage service for a doubly mapped redundant array of independent nodes |
| US11144220B2 (en) | 2019-12-24 | 2021-10-12 | EMC IP Holding Company LLC | Affinity sensitive storage of data corresponding to a doubly mapped redundant array of independent nodes |
| US11231860B2 (en) | 2020-01-17 | 2022-01-25 | EMC IP Holding Company LLC | Doubly mapped redundant array of independent nodes for data storage with high performance |
| US11507308B2 (en) | 2020-03-30 | 2022-11-22 | EMC IP Holding Company LLC | Disk access event control for mapped nodes supported by a real cluster storage system |
| US11288229B2 (en) | 2020-05-29 | 2022-03-29 | EMC IP Holding Company LLC | Verifiable intra-cluster migration for a chunk storage system |
| US11693983B2 (en) | 2020-10-28 | 2023-07-04 | EMC IP Holding Company LLC | Data protection via commutative erasure coding in a geographically diverse data storage system |
| US11875036B2 (en) | 2021-01-13 | 2024-01-16 | Samsung Electronics Co., Ltd. | Computing system including host and storage system and having increased write performance |
| US11847141B2 (en) | 2021-01-19 | 2023-12-19 | EMC IP Holding Company LLC | Mapped redundant array of independent nodes employing mapped reliability groups for data storage |
| US11625174B2 (en) | 2021-01-20 | 2023-04-11 | EMC IP Holding Company LLC | Parity allocation for a virtual redundant array of independent disks |
| US11449234B1 (en) | 2021-05-28 | 2022-09-20 | EMC IP Holding Company LLC | Efficient data access operations via a mapping layer instance for a doubly mapped redundant array of independent nodes |
| US11354191B1 (en) | 2021-05-28 | 2022-06-07 | EMC IP Holding Company LLC | Erasure coding in a large geographically diverse data storage system |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8819288B2 (en) * | 2007-09-14 | 2014-08-26 | Microsoft Corporation | Optimized data stream compression using data-dependent chunking |
| WO2009062029A1 (en) * | 2007-11-09 | 2009-05-14 | Carnegie Mellon University | Efficient high performance system for writing data from applications to a safe file system |
| US9104617B2 (en) * | 2008-11-13 | 2015-08-11 | International Business Machines Corporation | Using accelerators in a hybrid architecture for system checkpointing |
| US8849877B2 (en) * | 2010-08-31 | 2014-09-30 | Datadirect Networks, Inc. | Object file system |
| US20120089781A1 (en) * | 2010-10-11 | 2012-04-12 | Sandeep Ranade | Mechanism for retrieving compressed data from a storage cloud |
| US9619430B2 (en) * | 2012-02-24 | 2017-04-11 | Hewlett Packard Enterprise Development Lp | Active non-volatile memory post-processing |
| US9792182B2 (en) * | 2013-01-31 | 2017-10-17 | Hewlett Packard Enterprise Development Lp | Checkpoint generation |
- 2013-03-13: US13/799,228, published as US9477682B1 (active)
- 2013-03-13: US13/799,264, published as US9436722B1 (active)
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5953352A (en) * | 1997-06-23 | 1999-09-14 | Micron Electronics, Inc. | Method of checking data integrity for a raid 1 system |
| US20060123250A1 (en) * | 1999-07-16 | 2006-06-08 | Intertrust Technologies Corporation | Trusted storage systems and methods |
| US6952797B1 (en) * | 2000-10-25 | 2005-10-04 | Andy Kahn | Block-appended checksums |
| US20030226139A1 (en) * | 2002-05-28 | 2003-12-04 | Sheng Lee | System update protocol |
| US20080282105A1 (en) * | 2007-05-10 | 2008-11-13 | Deenadhayalan Veera W | Data integrity validation in storage systems |
| US20090183056A1 (en) * | 2008-01-16 | 2009-07-16 | Bluearc Uk Limited | Validating Objects in a Data Storage system |
| US8862561B1 (en) * | 2012-08-30 | 2014-10-14 | Google Inc. | Detecting read/write conflicts |
Non-Patent Citations (5)
| Title |
|---|
| Dai et al., "ELF: An Efficient Log Structured Flash File System for Micro Sensor Nodes", ACM SenSys, Baltimore, MD (2004). |
| Hartman et al., "The Zebra Striped Network File System", ACM Transactions on Computer Systems (1994). |
| John Bent, Garth Gibson, Gary Grider, Ben McClelland, Paul Nowoczynski, James Nunez, Milo Polte, Meghan Wingate, SC '09 Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, Article No. 21, ACM, New York, NY, Nov. 14, 2009. |
| John H. Hartman, "The Zebra Striped Network File System", Ph.D. dissertation, University of California at Berkeley (1994). |
| Los Alamos National Laboratory, "PLFS: Parallel Log Structured File System", Jan. 14, 2009. |
Also Published As
| Publication number | Publication date |
|---|---|
| US9477682B1 (en) | 2016-10-25 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| US9436722B1 (en) | Parallel checksumming of data chunks of a shared data object using a log-structured file system | |
| US9244623B1 (en) | Parallel de-duplication of data chunks of a shared data object using a log-structured file system | |
| US10951236B2 (en) | Hierarchical data integrity verification of erasure coded data in a distributed computing system | |
| US10394634B2 (en) | Drive-based storage scrubbing | |
| US9298386B2 (en) | System and method for improved placement of blocks in a deduplication-erasure code environment | |
| US8386835B2 (en) | System and method for end-to-end data integrity in a network file system | |
| CN112749039B (en) | Method, apparatus and program product for data writing and data recovery | |
| US8370297B2 (en) | Approach for optimizing restores of deduplicated data | |
| US9323765B1 (en) | Scalable cloud file system with efficient integrity checks | |
| US8468423B2 (en) | Data verification using checksum sidefile | |
| US10666435B2 (en) | Multi-tenant encryption on distributed storage having deduplication and compression capability | |
| US10581602B2 (en) | End-to-end checksum in a multi-tenant encryption storage system | |
| US20110113313A1 (en) | Buffer transfer check on variable length data | |
| WO2019001521A1 (en) | Data storage method, storage device, client and system | |
| CN109074295B (en) | Data recovery with authenticity | |
| JP2019533397A (en) | Device and associated method for encoding and decoding data for erasure codes | |
| CN104408154A (en) | Repeated data deletion method and device | |
| US9767139B1 (en) | End-to-end data integrity in parallel storage systems | |
| CN104375905A (en) | Incremental backing up method and system based on data block | |
| US9256503B2 (en) | Data verification | |
| US10402262B1 (en) | Fencing for zipheader corruption for inline compression feature system and method | |
| US10243583B2 (en) | CPU error remediation during erasure code encoding | |
| US9098446B1 (en) | Recovery of corrupted erasure-coded data files | |
| US11243890B2 (en) | Compressed data verification | |
| JP2013050836A (en) | Storage system, method for checking data integrity, and program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENT, JOHN M.;FAIBISH, SORIN;SIGNING DATES FROM 20130422 TO 20130521;REEL/FRAME:030641/0466 |
|
| AS | Assignment |
Owner name: LOS ALAMOS NATIONAL SECURITY, LLC, NEW MEXICO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GRIDER, GARY A.;REEL/FRAME:031671/0220 Effective date: 20131123 |
|
| AS | Assignment |
Owner name: U.S. DEPARTMENT OF ENERGY, DISTRICT OF COLUMBIA Free format text: CONFIRMATORY LICENSE;ASSIGNOR:LOS ALAMOS NATIONAL SECURITY;REEL/FRAME:032363/0115 Effective date: 20131212 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| AS | Assignment |
Owner name: TRIAD NATIONAL SECURITY, LLC, NEW MEXICO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LOS ALAMOS NATIONAL SECURITY, LLC;REEL/FRAME:047485/0323 Effective date: 20181101 |
|
| AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223 Effective date: 20190320
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
| AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001 Effective date: 20200409 |
|
| AS | Assignment |
Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001 Effective date: 20220329
Owner name: DELL INTERNATIONAL L.L.C., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001 Effective date: 20220329
Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001 Effective date: 20220329
Owner name: DELL USA L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001 Effective date: 20220329
Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001 Effective date: 20220329
Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001 Effective date: 20220329
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001 Effective date: 20220329
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |