EP1797510A2 - A storage system for randomly named blocks of data - Google Patents

A storage system for randomly named blocks of data

Info

Publication number
EP1797510A2
EP1797510A2
Authority
EP
European Patent Office
Prior art keywords
index
new
record
name
new record
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP05808531A
Other languages
German (de)
French (fr)
Inventor
Norman H. Margolus
Edwin Olson
Michael Sclafani
Corwin J. Coburn
Michael Fortson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Red Hat Inc
Original Assignee
Permabit Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Permabit Inc filed Critical Permabit Inc
Publication of EP1797510A2 publication Critical patent/EP1797510A2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/901 Indexing; Data structures therefor; Storage structures
    • G06F16/9014 Indexing; Data structures therefor; Storage structures hash tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/901 Indexing; Data structures therefor; Storage structures
    • G06F16/9017 Indexing; Data structures therefor; Storage structures using directory or table look-up
    • G06F16/902 Indexing; Data structures therefor; Storage structures using directory or table look-up using more than one table in sequence, i.e. systems with three or more layers
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 Data processing: database and file management or data structures
    • Y10S707/99931 Database or file accessing
    • Y10S707/99933 Query processing, i.e. searching
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 Data processing: database and file management or data structures
    • Y10S707/99931 Database or file accessing
    • Y10S707/99933 Query processing, i.e. searching
    • Y10S707/99934 Query formulation, input preparation, or translation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 Data processing: database and file management or data structures
    • Y10S707/99941 Database schema or data structure
    • Y10S707/99942 Manipulating data structure, e.g. compression, compaction, compilation
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 Data processing: database and file management or data structures
    • Y10S707/99941 Database schema or data structure
    • Y10S707/99943 Generating database or data structure, e.g. via user interface

Definitions

  • the invention relates to storage systems for computers, and particularly to systems designed for storage of large unstructured collections of data objects.
  • Some object storage systems use a cryptographic hash of a block of data to name the block.
  • a cryptographic hash is a function that deterministically computes a fixed width pseudo-random number (sometimes called a message digest or a fingerprint) from an input of any size.
  • the output of the SHA-256 cryptographic hashing algorithm is 256 bits wide (see National Institute of Standards and Technology, NIST FIPS PUB 180-2, "Secure Hash Standard," U.S. Department of Commerce, August 2002).
  • the Venti storage system is an example of an object storage system that uses a cryptographic hash of a block of data to name the block. In the Venti storage system, storage space is conserved by avoiding storing duplicate copies of identical blocks, which have identical object names.
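  • For illustration, a minimal Python sketch of content naming and deduplication along these lines is shown below; the `deposit` function and the in-memory `store` dictionary are hypothetical stand-ins for a real data store.

```python
import hashlib

def block_name(content: bytes) -> str:
    # The block name is the SHA-256 digest of the content (256 bits).
    return hashlib.sha256(content).hexdigest()

store = {}  # hypothetical stand-in for the on-disk data store

def deposit(content: bytes) -> str:
    name = block_name(content)
    # Identical content hashes to an identical name, so a duplicate
    # deposit stores nothing new.
    if name not in store:
        store[name] = content
    return name

first = deposit(b"some block of data")
second = deposit(b"some block of data")
assert first == second and len(store) == 1
```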
  • Another example of a storage system that uses cryptographic hashes for block naming is described in Margolus et al., "A Data
  • This second example supports a network protocol that allows bandwidth to be conserved in storing hash-named blocks of data by answering a query as to whether the name already exists in the storage system, and only sending the block if it does not. Supporting this kind of protocol well requires a storage system that can answer a query about the existence or non-existence of one object out of a very large set of objects efficiently and quickly.
  • Bloom's technique (now called a Bloom Filter) is widely used today. It does not, however, provide a mechanism for indexing the data and finding it, only for testing whether it exists.
  • Venti storage system uses an append-log structure and makes no provision for ever changing, deleting or rearranging the stored items on disk.
  • Although Venti was designed for archival storage, the lack of deletion capability is a significant drawback when archiving sensitive data that must, under law, be retained for some period of time but can then be deleted.
  • the invention features a method for constructing an index suitable for indexing a large set of records identified by long generally randomly distributed record names, and for answering membership queries about the set, the method comprising adding a new record to the set and assigning the new record a new record name using a process designed to produce names where at least a portion of each name is at least approximately random, determining that the new record name is not already represented in the index by checking a first level index, combining the new record name with record name information already represented in the index to form a combined record name which is shorter than the new record name, adding the combined record name to the first level index to form a new first level index entry that represents the new record, adding a second new record to the set and assigning the second new record a second new record name which is different than the new record name, determining that the first level index does not contain sufficient information to decide whether or not the second new record name is different than the first new record name, and adding an entry to the first level index that represents the second new record name and that is shorter than the second new record name,
  • Each different record in the set may have a different entry in the first level index.
  • the process used for combining the new record name may comprise determining a portion of information derived from the new record name that is sufficient to distinguish it from record names already represented in the index.
  • the invention may further comprise adding a new entry to a second level index that includes the complete new record name or enough information to reconstruct it, determining that a queried record name is already represented in the index by first determining that the queried record name is represented by the new first level index entry and then determining that the queried record name is represented by the new second level index entry.
  • the first level index may be stored in RAM and the second level index may be stored on disk.
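  • A simplified sketch of this two-level arrangement appears below. It uses a fixed 4-byte prefix as the first level abbreviation rather than the adaptive truncation described later, and the class and field names are hypothetical; the point is only that most negative queries are answered from the compact in-RAM structure, while positive queries fall through to the full names, conceptually held on disk.

```python
class TwoLevelIndex:
    """Toy two-level membership index; names and structures are hypothetical."""

    def __init__(self):
        self.first_level = set()    # short abbreviations, kept in RAM
        self.second_level = {}      # full record names -> location, conceptually on disk

    @staticmethod
    def abbreviate(name: bytes) -> bytes:
        return name[:4]             # fixed prefix, standing in for adaptive truncation

    def add(self, name: bytes, location: int) -> None:
        self.first_level.add(self.abbreviate(name))
        self.second_level[name] = location

    def contains(self, name: bytes) -> bool:
        # Most queries about absent names are answered from the RAM index alone.
        if self.abbreviate(name) not in self.first_level:
            return False
        # Possible match: one (simulated) disk access resolves the ambiguity.
        return name in self.second_level
```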
  • the portion of information derived from the new record name may be derived by omitting some subset of the bits of the binary value that represents the new record name.
  • the combining may involve computing an arithmetic difference of at least portions of two record names or computing some other arithmetic or finite-field arithmetic operations involving at least portions of two record names.
  • the process of assigning the new record name may involve generating a pseudo-random number, or computing a cryptographic hash of at least a portion of the record itself, or computing a cryptographic hash of some combination of record identifying information which is known to be unique.
  • a portion of the index may represent a set of records for which record names were added to the index during a span of time that includes the time that the new record was added, and the portion may be retrieved as a unit in order to get additional information about the new record, and information about other records added during the span of time may be cached in RAM.
  • Records or index information may be stored in a sequential log-structure on disk, and extra information recording the bitwise XOR of a set of pieces comprising a segment of the sequential log-structure may be written to disk to allow unreadable sectors on disk to be reconstructed.
  • the space of possible record names may be divided up into a set of disjoint subspaces, each of which may be associated with one or more of a plurality of instances of the index.
  • the new record may be a block of content and the new record name may be a cryptographic hash of the block of content, and the index may be queried in order to avoid repeatedly transmitting or repeatedly storing the block of content.
  • the new record name may be added to the index a second time, and a reference count associated with the new record name may indicate that the new record has been added twice.
  • An annotation may be attached to the new entry in the first level index which includes information related to the new record or an indication of where additional information can be found. Information stored in the annotation attached to the new entry may be later represented elsewhere and may be removed from the entry in the first level index.
  • At least a portion of the index may be organized based on when records were added to the index. Only the portion of information derived from the new record name that is sufficient to distinguish it from record names already represented in the index may be represented in the first level index. The sum of the lengths of the record names represented in the index may be larger than the sum of the lengths of the entries in the first level index.
  • the first level index may be divided into disjoint segments based on a fixed and predetermined ordering among all possible record names. Record or index information may be stored in a sequential log-structure on disk, and a reaper program may copy a segment of this log-structure elsewhere on disk omitting some of the information and freeing the segment for reuse.
  • Information related to the new record may be included in the segment and a reference count associated with the new record may be decremented to zero and the reaper program may not copy the information related to the new record before freeing the segment for reuse.
  • Records or index information may be stored in a sequential log-structure on disk, and a range of bytes in this log-structure may be marked as being unchangeable for a period of time, with this unchangeable status enforced by a storage resource underlying the data store. As long as the index is not filled beyond its design capacity, the chance that a randomly chosen record name can be determined to not be represented in the index by consulting the first level index alone may be over 98%. The capacity of the index may be limited only by the storage space available.
  • a set of records for which record names were added to the index during a span of time may be all stored in a localized region of a storage device, and a portion of the index representing the set of records may be stored with the set.
  • the new first level index entry may be written to disk and may be removed from RAM, and determining that a queried record name is already represented in the index may comprise accessing the new first level index entry on disk.
  • Information in the annotation attached to the new entry may be represented on disk and may be removed from the annotation.
  • the new first level index entry may not include information related to the location of data on disk.
  • the first level index entry may include an indication as to whether or not the entry comprises information other than record name information.
  • a copying process may be applied to the index which copies information from first level index entries to disk and removes the information from the first level index.
  • An annotation may be attached to the new entry in the first level index which includes an approximate disk location.
  • An annotation may be attached to a new entry in a second level index stored on disk which includes an approximate disk location related to the new record.
  • a plurality of reference counts may be associated with the new record name, with the sum of the plurality of reference counts reflecting the total number of times the record has been added to the index.
  • the reference count associated with the new record name may have a reference count component on disk and a reference count component in the first level index, and the sum of reference count components belonging to the new record may reflect the number of times that the new record name has been added to the index.
  • a reaper program may copy records or index data from old locations on disk to new locations on disk, omitting some information from the copy, and the reaper program may overwrite the old locations with patterns of data in order to obscure at least the omitted data and render it unreadable.
  • a reaper program may copy records or index data from source locations on a source storage device to destination locations on a destination storage device, omitting some information from the copy and marking the source locations as free space, wherein the choice of destination storage device may be made based on a prediction about when the copied data will next be accessed or changed.
  • a segment of the first level index associated with the new record name may have a fixed size and location.
  • a segment of the first level index associated with the new record name may have a variable size or location.
  • a plurality of segments of the first level index may be stored in an array structure, and a pointer to a location within the array structure may define the start of a segment associated with the new record name.
  • the invention features a method for constructing an index suitable for indexing a large set of records identified by long generally randomly distributed record names, and for answering membership queries about the set, the method comprising adding a new record to the set and assigning the new record a new record name using a process designed to produce names where at least a portion of each name is at least approximately random, determining that the new record name is not already represented in the index by checking a first level index that does not contain information sufficient to reconstruct the complete record names of records that have already been added to the index, abbreviating the new record name to form a new abbreviated name that is shorter than the new record name but that is sufficient to distinguish it from record names already represented in the index, adding a representation of the abbreviated record name to the first level index to form a new first level index entry that represents the new record, adding a second new record to the set and assigning the second new record a second new record name which is different than the new record name, determining that the first level index does not contain sufficient information to decide whether or not the second new record name is different than the new record name, and adding an entry to the first level index that represents the second new record name and that is shorter than the second new record name.
  • the invention may further comprise adding a new entry to a second level index that includes the complete new record name or enough information to reconstruct it, determining that a queried record name is already represented in the index by first determining that the queried record name is represented by the new first level index entry and then determining that the queried record name is represented by the new second level index entry, wherein each different record in the set has a different entry in the first level index.
  • Figure 1 shows the transformations involved in encoding a sparse set of randomly distributed record numbers into an index list.
  • Figure 2 shows an example of truncating a block name for use in an index list.
  • Figure 3 shows a first level index divided up into index segments.
  • Figure 4 shows a byte-oriented entry format for the first level index.
  • Figure 5 shows a format for index entries used when different block names match (collide) when truncated.
  • Figure 6 shows a format for index entries of a segment of a second level index
  • Figure 7 shows an encoding of lease and reference count information into an annotation attached to a first level index entry.
  • Figure 8 shows a disk storage format organized as an append log of journal frames.
  • Figure 9 shows a disk journal frame structure.
  • Figure 10 illustrates the process of freeing space and compacting storage on disk (reaping) in the context of a shared block of storage.
  • Figure 11 illustrates how multiple data stores (four in the example) can be assigned ranges of block names based on some name bits.
  • Figure 12 again illustrates how multiple data stores (eight in the example) can be assigned ranges of block names.
  • Figure 13 illustrates how data stores assigned to a given address range can be ordered based on another part of the block name.
  • Figure 14 illustrates the addition of parity information to Eras to allow recovery from disk read errors.
  • Figure 15 illustrates read-error recovery when an error encompasses a region overlapping two adjacent chunks of an Era.
  • Figure 16 shows two sectors that are radially adjacent on a disk.
  • Figure 17 illustrates two alternatives for organizing parity information for read-error recovery when errors on radially adjacent sectors are correlated.
  • Figure 18 shows three alternative byte-oriented formats for entries in the first level index.
  • Figure 19 illustrates the use of byte-range retention leases to protect a data store journal from modification.
  • block name refers indifferently either to a name for a block of content that may be arbitrarily assigned or to a name based on a cryptographic hash of the block content. If all block names are based on a cryptographic hash of the block content (e.g., SHA-256), block names are statistically guaranteed to be unique and randomly distributed. This same guarantee can also be made if all block names are based on a hash of some unique identifier associated with the block of content: for example, a file pathname along with a unique identifier for a file system. If both types of block names are used, then a block type can be prepended to the data to be hashed (content or identifier), to ensure that the data hashed is never the same in constructing the two kinds of names.
  • Block names are statistically guaranteed to be unique block identifiers.
  • the Data Repository envisioned in US 2002/0038296 A1 and related applications can be implemented as a distributed collection of storage servers, each of which is assigned responsibility for some portions of a block-name address space. Each storage server is assigned a set of ranges of block-name values. Within each storage server, one or more Data Stores, each associated with physical disk storage devices, is ultimately responsible for storing and indexing large numbers of pseudo-randomly named blocks of data.
  • the chance that the separation between two adjacent values in the sorted list is more than four times the average is about 1.8%. This means that the chance that the first (log₂N − 2) bits of the difference are all zero is over 98%. If differences between adjacent values (deltas) are stored in place of the original values in the sorted list, the same information is represented but in almost all cases, the first (log₂N − 2) bits of the differences don't need to be represented. This, however, does not by itself provide a significant space savings, since N is so much smaller than L. As is indicated in Figure 1, in this implementation the block names in the sorted list are truncated before computing the deltas.
  • a power of two value M (smaller than L) is chosen, and for each block name, all but the first log₂M bits are omitted (i.e., the range of values is reduced to M rather than L).
  • the probability that a given truncated block name collides with (i.e., matches) some other truncated block name is less than N/M (there are fewer than N choices out of M that result in a collision). This means that the fraction of the truncated block names that are not uniquely associated with a single full block name is less than N/2M (since both colliding names become one name).
  • the amount of space needed per block name is independent of both the size of the original block name and the number of names in the index.
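  • The truncate-sort-and-difference idea can be sketched in a few lines of Python; the parameter values below are chosen for illustration only and are not the patent's.

```python
import random

# Illustrative parameters: names are LOG2_L-bit values, truncated to their
# top LOG2_M bits; N names are indexed.
LOG2_L, LOG2_M, N = 160, 22, 2**16

names = sorted(random.getrandbits(LOG2_L) for _ in range(N))
truncated = [n >> (LOG2_L - LOG2_M) for n in names]

# Store only the differences between adjacent truncated values.
deltas = [truncated[0]] + [truncated[i] - truncated[i - 1] for i in range(1, N)]

# The average delta is M/N = 64, so almost all deltas fit in one byte;
# roughly e**-4 (about 1.8%) exceed four times the average.
overflow = sum(1 for d in deltas if d >= 256)
print(f"{overflow / N:.2%} of deltas need an overflow encoding")
```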
  • the position at which block names are truncated (i.e., the value of M) is determined by N, the maximum number of entries that the data store is designed to index. This number needs to be known in any case, however, since the maximum memory requirements for the index are proportional to the maximum number of blocks being indexed.
  • the index list technique uses less than r+3 bits per item, and unlike the Bloom Filter provides a full index, with a distinct entry for each item indexed.
  • more definitive information must be accessed in order to verify that the name agrees in all log₂L bits.
  • This more definitive information can be kept on disk, and constitutes a second level of indexing.
  • the second level of index could, for example, simply be a complete hash table on disk. One access to the second level index on disk is sufficient to resolve any ambiguity.
  • the first level index in RAM, is constructed so that there is a low probability of finding that a queried name matches a first level index entry but is not actually present in the index.
  • the first level (in RAM) index indicates which names do not exist with no access to disk. Queries concerning names that do exist require one access to disk. This approach makes it practical for a storage client to always query when depositing content-named blocks into the storage system, in order to save bandwidth by avoiding transmitting blocks that are already stored. It also makes it efficient to share storage space when a previously stored content-named block is deposited again.
  • the index is queried to find out if the name already exists in the data store. In the course of this query, the block name of any colliding entry is retrieved. In the case of a collision, additional bits of both the old entry and the new entry are added to the first level index, so that both entries will represent a unique initial segment of the full block name.
  • each named block has a distinct entry in the first level index, one could simply annotate each entry with the location of the block on disk. This would add several bytes to each entry, but would always allow a named block to be retrieved with a single disk access. The disk access would retrieve both the block and the full block name (or enough information to reconstruct it), which would be tested to determine if it is the block being queried.
  • the second level index used for disambiguating collision cases could be a simple hash table on disk, and all retrievals could involve accessing this table to find both the full block name and block location, and then retrieving the named block itself. This second approach adds no data to the first-level index entries, but always takes two disk accesses to retrieve a block.
  • An intermediate scheme which adds a small annotation to each first-level index entry, is currently preferred.
  • This intermediate scheme performs about as well as the full annotation scheme (in which block location is put in the first-level index) when patterns in the write order of named blocks are reflected closely in the retrieval order.
  • Figure 3 illustrates the structure of the first level index, which would normally be kept in RAM.
  • the first level index is split up into segments, with each segment corresponding to a portion of the block name address space. This is accomplished in the illustration using an initial portion of the block name as a segment number.
  • a separate fixed size array structure is associated with each segment. Initially a small number of segments are allocated, and whenever a segment becomes full its address range is cut in half and part of its contents are moved to a newly allocated segment responsible for the other half of the range. The number of initial bits of a block name needed to identify the corresponding segment-array is variable.
  • Each segment of the first level index comprises a list of entries maintained in sorted order, with the order determined by the truncated block names that are represented. Entries have two parts: a delta value that records the difference between an entry and the previous entry, and an annotation that records information about the named block corresponding to the index entry. Every index entry corresponds to one block, and every block has a single index entry.
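  • The sketch below illustrates the splitting behavior with a toy segmented index; the capacity, name width, and representation (full names rather than delta-coded entries) are simplified assumptions, not the patent's actual parameters.

```python
import bisect

SEGMENT_CAPACITY = 4            # tiny value, for illustration only
NAME_BITS = 32                  # pretend block names are 32-bit (illustrative)

class SegmentedIndex:
    """Toy segmented first level index: each segment covers a disjoint range
    of the name space and is split in half when it fills up."""

    def __init__(self):
        # Each segment is (lo, hi, sorted list of names falling in [lo, hi)).
        self.segments = [(0, 1 << NAME_BITS, [])]

    def _segment_for(self, name):
        for i, (lo, hi, _) in enumerate(self.segments):
            if lo <= name < hi:
                return i
        raise ValueError("name out of range")

    def add(self, name):
        i = self._segment_for(name)
        lo, hi, entries = self.segments[i]
        bisect.insort(entries, name)
        if len(entries) > SEGMENT_CAPACITY:
            # Cut the full segment's address range in half and move the
            # upper half of its contents to a newly allocated segment.
            mid = (lo + hi) // 2
            lower = [n for n in entries if n < mid]
            upper = [n for n in entries if n >= mid]
            self.segments[i] = (lo, mid, lower)
            self.segments.insert(i + 1, (mid, hi, upper))
```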
  • Figure 4 shows the byte-oriented index entry format used in the preferred implementation.
  • This format uses a one-byte delta value and two bytes of annotation. Two extra bytes of information are appended if there is delta overflow (difference too large, indicated by a delta of 2^8 − 1). This allows a truncated value with 8 extra (higher order) bits to be represented. If this isn't enough (indicated by a delta of 2^16 − 1), then more bytes are appended, etc.
  • This encoding uses about 0.3 extra bits per entry, on average, when the index is at maximum size.
  • the last collision entry is flagged, and the entry following it is a normal entry, with a delta relative to the preceding delta. Additional levels of collision record are defined (but not illustrated) in case two or more of the next-bits values are the same: different continuations past a common stem are again encoded.
  • the average number of extra bits used by this encoding is about 0.125 bits per entry when the index is at maximum size.
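  • Since the exact bit layouts of Figures 4 and 5 are not reproduced here, the following escape-code encoder is only an approximation of the idea: small deltas take one byte, and reserved maximum values signal that a wider delta follows.

```python
import struct

def encode_delta(delta: int) -> bytes:
    # Escape-code scheme in the spirit of Figure 4 (not the patented layout).
    if delta < 0xFF:
        return bytes([delta])
    if delta < 0xFFFF:
        return bytes([0xFF]) + struct.pack(">H", delta)
    return bytes([0xFF]) + struct.pack(">H", 0xFFFF) + struct.pack(">I", delta)

def decode_delta(buf: bytes, pos: int):
    """Return (delta, position of the next field)."""
    if buf[pos] != 0xFF:
        return buf[pos], pos + 1
    (d16,) = struct.unpack_from(">H", buf, pos + 1)
    if d16 != 0xFFFF:
        return d16, pos + 3
    (d32,) = struct.unpack_from(">I", buf, pos + 3)
    return d32, pos + 7

# Round-trip check for small, medium, and large deltas.
for d in (7, 300, 70000):
    encoded = encode_delta(d)
    assert decode_delta(encoded, 0) == (d, len(encoded))
```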
  • Figure 4 provides a byte-oriented format for encoding index annotations: information about the named block that corresponds to the index entry. In the preferred implementation, first level index entries are always an integer number of bytes long. This constraint is of course only a convenience.
  • index entries are three bytes long.
  • This format comprises 13 bits of an Era Number that associates one of up to 8K segments of a second level index on disk with the indexed block. Each segment is referred to as an Era Index, and is stored at a location on disk near to the named blocks that it indexes.
  • the Era Index consists of a list of entries with the format shown in Figure 6 (full block name, block type, and relative location of the block on disk).
  • the annotation also contains 3 bits that are used for keeping track of "reference counts" and "leases" (encoded as in Figure 7).
  • Content named blocks may be shared as components of larger objects.
  • the data store keeps track of a reference count, in order to know if all larger objects that reference a given block have been deleted, and so the shared block can itself be deleted. Clients of the data store explicitly tell the data store when to increment and decrement reference counts associated with content-named blocks. Most content-named blocks will have a reference count of either zero or one, since most blocks will not be shared. If the reference count is higher, extra bits are appended to the index entry annotation to allow this information to be represented.
  • Leases are useful for content-named blocks which have not yet been incorporated into any larger structure, and so have a reference count of zero. Leases are used to guarantee that a newly deposited block is retained for at least 24 hours before it becomes subject to deletion because it is not in use. When a content-named block is deposited, it is given a new lease. Every 24 hours, a background process turns all new leases into old leases and all old leases into no-lease. A content-named block with no lease and a reference count of zero may be deleted by the data store and its space reclaimed.
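  • A sketch of this lease lifecycle is shown below; the three lease states and the `age_leases` background task mirror the description above, while the encoding into the 3 annotation bits of Figure 7 is not reproduced.

```python
NO_LEASE, OLD_LEASE, NEW_LEASE = 0, 1, 2   # hypothetical numeric states

class BlockEntry:
    def __init__(self):
        self.lease = NEW_LEASE     # a freshly deposited block gets a new lease
        self.refcount = 0

    def deletable(self) -> bool:
        # A block with no lease and a zero reference count may be reclaimed.
        return self.lease == NO_LEASE and self.refcount == 0

def age_leases(entries):
    # Background task run every 24 hours: new leases become old leases,
    # old leases expire.
    for entry in entries:
        if entry.lease == NEW_LEASE:
            entry.lease = OLD_LEASE
        elif entry.lease == OLD_LEASE:
            entry.lease = NO_LEASE
```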
  • Figure 8 shows the logical disk format used by the Data Store. This format is designed to aid in the storage, indexing and retrieval of randomly named blocks of data.
  • Figure 8 shows the disk structure used by the preferred implementation of the data store.
  • the segments of indexing information start at predictable regularly-spaced positions on disk — every 64 MB in the illustration. This makes it possible to always find the indexes without resort to any stored information.
  • the space from the end of one index segment to the start of the next is used to store blocks of named data, as well as other persistent information.
  • the segment of storage space is called an Era and the segment of index is called an Era Index.
  • the Era Indexes are the segments of the second level index discussed earlier. They play a role similar to that played by directories in a file system: when one named block from an Era is accessed, its Era index is consulted and cached. If other named blocks from the same Era are read while that index remains in RAM, all of their locations on disk are known from the cached Era index and so they will all be read with one disk access per named block. Since the blocks in one Era are close together, any subset of them can be accessed quickly with little seeking.
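  • The arithmetic for locating Era Indexes, and a simple cache of them, might look like the following sketch; the `read_era_index` callback and the cache structure are hypothetical.

```python
ERA_SIZE = 64 * 2**20              # Era Indexes start every 64 MB

def era_number(disk_offset: int) -> int:
    # Eras begin at regularly spaced positions, so the era containing any
    # offset is found by arithmetic alone, without stored metadata.
    return disk_offset // ERA_SIZE

def era_index_offset(era: int) -> int:
    return era * ERA_SIZE

era_index_cache = {}               # hypothetical in-RAM cache of Era Indexes

def locate_block(block_name, era, read_era_index):
    """read_era_index stands in for one disk read returning the Era Index
    (a mapping from full block names to offsets within the era)."""
    if era not in era_index_cache:
        era_index_cache[era] = read_era_index(era_index_offset(era))
    return era_index_cache[era].get(block_name)
```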
  • An example of the journal frame structure used in the preferred implementation is shown in Figure 9.
  • the journal frame starts with a fixed value that is used to mark the start of every frame. A different pseudorandom value is chosen for this mark each time the disk is formatted — such a fixed value that helps delineate the start of a stored record is sometimes called a "magic number". This is followed by a virtual Era number that helps verify that all of the frames belong to the same Era (virtual Era numbers have many more bits than actual Era numbers).
  • journal frames are of variable length, up to 64KB
  • a 32-bit checksum ends the journal frame, allowing data corruption to be readily detected.
  • When the payload is a content-named block, it includes additional information such as the reference count for the block at the time it was last written (obtained from the entry annotation in the first level index). Since the log is written sequentially, there is no need to leave any space on disk between journal frames, even though they are of variable length. The only exception is at the end of an Era, where some space is left unused so that the first journal frame of the next Era (which is the Era index for the current Era) always starts at a 64MB boundary.
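  • A journal frame along these lines could be packed and verified roughly as follows; the field widths and ordering are assumptions for illustration, not the patent's exact on-disk layout.

```python
import struct
import zlib

MAGIC = 0x6A3F91C2          # pseudorandom per-format magic number (illustrative)

def write_frame(virtual_era: int, payload: bytes) -> bytes:
    # Illustrative layout: magic (4) | virtual era (8) | length (4) | payload | crc32 (4)
    header = struct.pack(">IQI", MAGIC, virtual_era, len(payload))
    body = header + payload
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)

def read_frame(buf: bytes):
    magic, virtual_era, length = struct.unpack_from(">IQI", buf, 0)
    if magic != MAGIC:
        raise ValueError("not at a frame boundary")
    payload = buf[16:16 + length]
    (crc,) = struct.unpack_from(">I", buf, 16 + length)
    if crc != zlib.crc32(buf[:16 + length]) & 0xFFFFFFFF:
        raise ValueError("corrupt frame")        # would trigger recovery
    return virtual_era, payload

era, data = read_frame(write_frame(42, b"named block payload"))
assert era == 42 and data == b"named block payload"
```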
  • the Era indexes are redundant, because they can be regenerated from the other journal frames.
  • the first level index, stored in RAM, is also redundant because it can be regenerated from the information in the journal.
  • the reaper is a program that runs as a background task, reclaiming freeable space on the disk and compacting retained data.
  • the reaper treats the disk as a circular buffer, with the highest address on the disk adjacent to the lowest. Whenever at least 1% of the space used by the journal is freeable (due to objects having been deleted) the reaper runs (also under some other circumstances).
  • the reaper starts at the oldest era that it has not yet processed and examines all journal frames in that era. It verifies the checksum of each journal frame and initiates a recovery procedure if a bad frame is found.
  • Any payload that is still relevant is copied to a new journal frame at the frontier, and the corresponding Era Number in the first level index is updated to point to the new location. Any payload that is not still relevant is omitted. If a frame is found which contains a named block which is not pointed to by the first level index, it is deemed no longer relevant and is omitted. This is how modifications to named blocks are handled: the replacement block is written to the Era at the frontier and its first level index entry is pointed to the new location. The reaper cleans up the old version as it comes across it. Once an Era has been reaped, its space is appended to the available free space.
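  • The copy-forward behavior of the reaper can be sketched as below; the data structures are hypothetical simplifications (the real first level index stores Era Numbers and annotations, not a plain dictionary).

```python
def reap_era(era_frames, first_level_index, frontier, frontier_era):
    """One reaping pass over the oldest unprocessed era (hypothetical types).

    era_frames        -- list of (block_name, payload) read from that era
    first_level_index -- dict mapping block names to their current era number
    frontier          -- list collecting journal frames rewritten at the frontier
    """
    for block_name, payload in era_frames:
        # A block no longer pointed to by the first level index (deleted or
        # superseded) is simply not copied, which frees its space.
        if block_name not in first_level_index:
            continue
        frontier.append((block_name, payload))        # copy to the frontier era
        first_level_index[block_name] = frontier_era  # point the index at the new copy
    # After the pass, the whole source era can be returned to free space.
```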
  • Block A is a content-named block and is near the oldest part of the journal. Since Block A was written, its reference count has been changed twice, and journal frames have been written to disk to record these changes. The reference count in the first level index (in RAM) was updated as these increment/decrement requests were received, and is current.
  • the reaper copies Block A to the Era at the frontier, including the current reference count in the new journal frame.
  • the old copy of Block A can be added to the free space on disk as soon as the Era containing it is finished being reaped.
  • the records of changes in Block A's reference counts that occurred before it was reaped are no longer relevant: the reference count recorded along with the new copy of Block A is up to date and can be used in the event of a crash to rebuild the first level index.
  • the two reference count journal frames shown will be omitted when the reaper processes the Eras containing them, and their space will be freed at that time.
  • a Data Repository may comprise a number of storage servers, each of which may in turn comprise a number of data stores. Some number of the least significant bits of the block name may be used to define address ranges assigned to different data stores. Using address ranges for this purpose has the advantage that it distributes the indexing problem among the data stores in a scalable fashion. Since block names are randomly distributed, the fraction of the total storage assigned to each data store is very closely proportional to the total size of all the address ranges assigned to it. The same address range can be assigned to multiple data stores as part of a fault tolerance (e.g., replication) scheme. Figure 11 shows an example of an assignment of address ranges to a set of four data stores.
  • each address range is assigned to two data stores, as might be done in a system implementing two-fold replication of all data.
  • Figure 12 illustrates an assignment of address ranges to eight data stores.
  • Figure 13 shows a detail from Figure 12, focusing on the first column.
  • four data stores are assigned the address range where the first relevant name bits are both zero.
  • Figure 13 illustrates a method of assigning the data stores role-numbers in an equitable fashion. We first assign the stores in each address range a fixed order, and then we use an unused low-order portion of the block name to choose (essentially randomly) which data store will play role number 0. The other roles are then assigned in cyclic order.
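  • A sketch of this range-and-role assignment follows; the store names, the use of two high-order bits to select the address range, and the use of low-order name bits for role selection are illustrative choices, not the patent's.

```python
# Hypothetical layout: the two highest name bits select an address range,
# and each range is served by a fixed, ordered list of data stores.
STORES_FOR_RANGE = {
    0b00: ["store-a", "store-b", "store-c", "store-d"],
    0b01: ["store-e", "store-f", "store-g", "store-h"],
    0b10: ["store-i", "store-j", "store-k", "store-l"],
    0b11: ["store-m", "store-n", "store-o", "store-p"],
}

def stores_by_role(name: int, name_bits: int = 160):
    address_range = name >> (name_bits - 2)        # high-order bits pick the range
    stores = STORES_FOR_RANGE[address_range]
    # An otherwise unused low-order portion of the name picks which store
    # plays role 0; the other roles follow in cyclic order.
    start = name % len(stores)
    return [stores[(start + role) % len(stores)] for role in range(len(stores))]
```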
  • Hard disks employ redundant encoding at the level of disk sectors to allow them to tolerate hardware problems and still read data correctly. Given that adding redundant information on disk subtracts from the space available for data storage, disk manufacturers add only as much error correction information as is necessary. A typical modern disk specifies that a sector on disk will be unreadable no more often than once in every 10^14 bits that are read.
  • the reaping mechanism described above continually copies and rewrites data. This prevents latent errors from accumulating, but it also causes the data on the disk to be read many times. If 25 500GB disks are each read completely once, this adds up to 10^14 bits. In storage systems with many large disks that are continually being reaped, one unreadable sector in 10^14 bits read would cause frequent failures.
  • parity information (i.e., the sum modulo 2 of all corresponding bits) of corresponding sectors on D−1 of the disks is recorded on the corresponding sector of the D-th disk. If a read error occurs on one disk, the unreadable sector can be reconstructed from the information on the other disks.
  • FIG. 14 illustrates the technique.
  • each Era is divided into E+1 equal-sized chunks: E chunks containing data and one chunk containing parity information.
  • Each bit of the parity chunk C_E is the sum modulo two (XOR) of the corresponding bits of all the data chunks C_i. If one chunk contains unreadable data, it can be reconstructed from the other chunks of the Era by XOR-ing them all together.
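  • The parity computation and single-chunk reconstruction are just bitwise XOR, as the following sketch shows (the chunk contents are illustrative).

```python
def parity_chunk(chunks):
    # Bitwise XOR of all chunks (they must be the same length).
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct(chunks, missing_index):
    # XOR-ing every readable chunk (data and parity) recreates the
    # single unreadable chunk.
    readable = [c for i, c in enumerate(chunks) if i != missing_index]
    return parity_chunk(readable)

data = [b"chunk-one!", b"chunk-two!", b"chunk-3333"]
era_chunks = data + [parity_chunk(data)]
assert reconstruct(era_chunks, 1) == b"chunk-two!"
```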
  • the chunk size is related to operating system buffer sizes and errors are only localized by the operating system to entire chunks.
  • the region containing the unreadable sectors (shown in the illustration of Figure 15) can still be identified by using the checksums in the journal frames (see Figure 9). Once two adjacent chunks containing unreadable sectors have been identified, each possible alignment of a chunk-sized region overlapping the two is assumed in turn and the data is tentatively corrected based on that assumption. The first alignment that produces correct checksums in all journal frames is used as the definitive correction.
  • Figure 16 shows a schematic diagram of a disk, showing tracks and sectors.
  • a track on a disk consists of all of the data that can be accessed without moving the read/write heads radially (i.e., without seeking). It might be the case that, for adjacent tracks of data on a disk, sectors that are on different tracks but adjacent to each other radially may have correlated failures. This could be dealt with by making the Era size smaller than the storage capacity of any single track, so that the parity information in each Era can be used to deal with the sector errors independently. If this results in an inconveniently small Era, this could alternatively be dealt with by dividing an Era up into sections, each of which is smaller than any single track.
  • each section includes blocks of data and a parity block.
  • the parity blocks are all put into the last section, so that this looks essentially like the original scheme of Figure 14, but with the parity information at the end of the Era having additional structure.
  • First-level index on disk: An on-disk first level index with a very low rate of false positives and direct pointers to block locations could act as a very compact alternative to a full hash table on disk, almost always providing a pointer to the named block with a single disk access. If some in-memory scheme for caching index entries were used in conjunction with an on-disk first-level index, the compactness of the on-disk index would be valuable in merging updates made to the in-memory cache into the on-disk index: the amount of data that would need to be read and written for an update pass over the entire on-disk structure would be reduced by a large factor.
  • First-level index using hash buckets: A structure is described for the first level index in the preferred implementation which involves allocating space only as needed, splitting a fixed size segment of the index into two new fixed size segments whenever it becomes full.
  • Using fixed size hash buckets, each of which contains a segment of the index, is a simple alternative. This approach involves pre-allocating the full space for the index. In order to account for statistical variation in the filling of the hash buckets, a small percentage of extra space needs to be allocated to each hash bucket to accommodate a desired average filling.
  • First-level index using array with landmarks: Another alternative structure that is logically possible for the first level index is a single long array — a first level index with just a single segment.
  • Accumulating space-usage statistics: It is of interest to be able to accumulate statistics for the data store regarding space used (i.e., not freeable) and the amount of shared storage. This can be accomplished by maintaining a running total of the space occupied by blocks with non-zero reference counts, and a separate total of the number of bytes referenced (i.e., the sum of block size times reference count). These totals can be updated as reference counts are incremented and decremented as long as the sizes of the corresponding blocks are known. To make this information more efficient to access, a copy of the block size can be added to the Era Index entry of Figure 6.
  • Reference count deltas: The reference count that was current when a block was last reaped is recorded along with the block. Only changes relative to this value need to be recorded in the first level index: each time a block is reaped and its reference count is recorded on disk, the value recorded in the first level index can be reset to zero. The full reference count for a block is then the sum of the base value stored with the block and the reference count delta stored in the first level index. All blocks with reference counts that haven't changed since they were last reaped will have reference count deltas of zero in the first level index. For efficiency in reaping and in accumulating space usage statistics, a copy of the base value of the reference count recorded with the block can be added to the Era Index entry of Figure 6.
  • Multiple reference counts per block: If data from multiple sources (e.g., physical locations, administrative domains or file systems) has been deposited in a data store, it may be desirable to be able to efficiently separate out the data from a particular source at a later time, to be copied to another data store with correct reference counts. This need might arise, for example, in a data recovery scenario where data from multiple Data Repositories has been replicated to a single Data Repository, and the loss of several data stores at one of the source Repositories requires recovery of all blocks belonging to that source in some set of address ranges. To enable efficient separation by source, a separate reference count can be stored with each block for each defined data source that references it.
  • a list of identifiers of sources associated with a given data block can be stored with that block, and reference count deltas in the block's first level index entry can refer to the ordinal number within the list to provide an efficient encoding.
  • the source identifier can be used directly to label the reference count delta in the first level index entry. For efficiency, a copy of the list of sources associated with a block and the corresponding reference counts (from the time the block was last reaped) can be added to the Era Index entry of Figure 6.
  • Figure 18 shows three examples of alternative byte-oriented entry formats for the first level index; Figure 4 showed the format used in the preferred implementation.
  • Alternative format A uses more bits for era numbers than the format of Figure 4 and reserves just one bit for other information. Every other piece of information that may be associated with a named block is assigned a default value, and if all pieces of information related to a particular entry have their default values, then no other information needs to be explicitly represented. For example, it is normally the case that most blocks haven't been recently deposited and so don't have leases, and so no bits need to be reserved in most entries for lease information, as is done in the format of Figure 4. If all extra information has its default value, a format A entry is three bytes long.
  • a second alternative entry format B is shown in Figure 18.
  • This format has one less bit of collision resistance than format A and uses the same extra-information flag and default conventions. In this format, no information about a second-level index is stored in the first level index, so that the first level index size is minimized.
  • a first level index using this format still identifies new block names efficiently, and caching of Era Index information may be sufficient to identify existing block names efficiently. Information recording the locations of new blocks might be cached in memory (perhaps as annotations) so that updates to an on-disk second level index (separate from the Era Indexes) can be aggregated.
  • a third alternative entry format C would be useful in an on-disk first level index of the kind discussed earlier in this section.
  • the annotation includes the full disk location of the named block.
  • we make the delta about twice as long, adding 7 more bits of collision resistance, so that the chance of a false positive match (which would result in an unnecessary disk read) is 2^−13.
  • Two bytes are saved from the location information by only pointing to the 64KB chunk that contains the start of the named block.
  • All reads are 130KB long, to ensure that the whole block (maximum 64KB long) is read.
  • Non-byte oriented entry formats can of course also be employed.
  • the encoding used in the preferred implementation uses about (r + 2.3) bits per delta, which is less than one bit more than the theoretical minimum.
  • First level index with more or less compaction The amount of compaction used in the first level index is a practical tradeoff: size versus speed and simplicity. For example, using non-byte aligned entries saves additional space, at the cost of additional complexity. Very simple implementations might use a separate hash table for all cases where the difference between adjacent sorted names is too big or too small for a fixed size delta representation, or embed full names directly into the list of deltas in such cases. Note that when a new name agrees with an existing name in the first level index up to its truncation point, only one of the names actually needs to be represented in the first level index with additional resolution in order to preserve the property that new names can collide with at most one existing name in the first level index.
  • Another simple alternative implementation would use truncated names in the first level index rather than deltas, truncating each name to a unique initial segment and relying on a separate compression process applied to segments of the first level index to reduce their size when they aren't being actively accessed.
  • Including other types of information in the index: Several types of information have been mentioned as useful to include in a first level index entry annotation: leases, reference counts, block locations on disk, and the location on disk of additional indexing information.
  • The presence of a complete compact indexing structure to which other information related to individual named blocks can be attached obviously has many other uses.
  • Other information which could be attached to an index entry includes: locking information, temporary markers for blocks that should be copied somewhere or migrated, cached full block name, cached disk location, cached object metadata, age or activity information, other location information (which disk, which tape, etc.), security or authorization information, and time related information.
  • information that is initially attached to the first level index entries can be moved to the second level index entries when a block is reaped.
  • Shredding or migrating data while reaping: The reaper could provide special processing when deleting some kinds of blocks. For example, blocks that were retained for some period of time because of government regulatory requirements may require special shredding (multiple overwrites with random data) when they are finally deleted. Shredding could also be the norm. The reaper could also be involved in data migration, moving data which hasn't been accessed recently (and so is not expected to be accessed soon) or which has long-term retention requirements (and so will not change soon) to disks that can be turned off, or to offline media. In this case, at least the first level index information would need to be kept on media that remain accessible.
  • data can be moved to appropriate targets (storage devices or portions of storage devices) based on a prediction of when the data will next be needed, or next need to change.
  • Data which must not change during some period of time might even be aggregated on a storage resource where a retention period constraint is enforced by the storage resource.
  • Byte-range retention leases: If access to a storage resource is shared by more than one data store (as it might be, for example, in a storage area network), it is desirable to have the shared storage resource prevent one data store from modifying journal frames written by another data store. It is also desirable to prevent software bugs in data store software from corrupting journal frames that have been fully written and closed to further modification. Both of these goals can be accomplished with byte-range retention leases.
  • a retention lease specifies that a range of storage locations can be read but cannot be modified by any process (including the data store process that originally wrote the data there) for some specified period of time, which cannot be decreased.
  • the range of bytes is not reserved for access by one process, it is reserved for access by no process.
  • Leases for regions that are part of the journal are periodically renewed, so that the journal remains unmodifiable. Journal frames that have been reaped and added to free space stop having their leases renewed, and these leases eventually expire and the space becomes available for reuse.
  • Retention leases are persistent across ordinary hardware reboots and resets. In a typical data store usage scenario, leases might last for days or weeks — long enough that system maintenance is unlikely to prevent renewals for a long enough period that leases on unfreed journal frames expire.
  • Figure 19 provides an example use of retention leases. Region A was formerly part of the journal but is now free space in which leases have not yet expired. Region B consists of Eras that have been fully written and closed to further modification. Region C consists of space that can be exclusively written to by one particular data store process. Region D consists of free space that can be read or written by any process. In this example, retention leases are initiated, extended and released for entire Eras, rather than for individual journal frames.
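  • A minimal sketch of how a shared storage resource might track and enforce such leases is given below; the class and method names are hypothetical, and real enforcement would live in the storage resource itself rather than in client code.

```python
import time

class RetentionLeases:
    """Sketch of byte-range retention leases: leased byte ranges may be read
    but not modified by any process until the lease expires."""

    def __init__(self):
        self.leases = []                           # (start, end, expiry_time)

    def take_or_renew(self, start, end, duration):
        expiry = time.time() + duration
        for i, (s, e, old_expiry) in enumerate(self.leases):
            if (s, e) == (start, end):
                # A lease may be extended but never shortened.
                self.leases[i] = (s, e, max(old_expiry, expiry))
                return
        self.leases.append((start, end, expiry))

    def write_allowed(self, offset, length):
        now = time.time()
        for s, e, expiry in self.leases:
            if expiry > now and offset < e and offset + length > s:
                return False                       # overlaps an unexpired leased range
        return True
```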
  • Unified addressing of blocks: We assumed, for simplicity, that in a multi data store system the bits derived from the block name that are used for distributing the blocks between different data stores are different than the bits that are used to distribute data between segments of the first level index. This made our randomness assumptions simpler, but it meant that the stored truncated names in the first level index didn't contain information about the address ranges used for inter-store distribution. If this assumption doesn't hold, and the same initial portion of each block name is used for both kinds of distribution, the main thing that changes is that the block names held by a particular data store are concentrated into a smaller total range, and so are the truncated names. Within each range, the names are still distributed randomly.
  • Block names might only be approximately random (i.e., characterized by a high entropy probability distribution), or only a portion of the block name may be approximately random. There should be enough randomness that, in a large list of sorted names, the differences between adjacent names are reasonably predictable. If that is the case, then we know where to truncate the names so that differences can usually be represented by a value that is small enough to be compact but is hardly ever zero (and so we rarely require additional information to represent names distinctly). Block names do not, of course, have to be created randomly or pseudorandomly to have a portion that is sufficiently random to work for the index.
  • If blocks are named by long timestamps of when they were created, then the least significant portion of the timestamp may be quite random.
  • Varying other features: The description of the preferred implementation was made very specific in order to promote clarity, but many features could be varied. For example, different cryptographic hash functions could be used, disks could be virtual disks (for example, in a storage area network) or even other kinds of media. All of the storage could be in RAM. On-disk structure could be very different, with different sizes and structure of Eras, different structure and placement of Era Indexes or even elimination of Era Indexes (and hence Eras) in favor of other kinds of second level indexes, or even putting more direct block location information into RAM.
  • the append log structure could be more sophisticated with more use of pointers to segments of disk data, so that information that hasn't changed is copied less.
  • the log structure could be abandoned in favor of some other structure, with no use made of temporal locality or temporal locality exploited in some other manner.
  • One data store might manage more than one set of storage resources, allocating named blocks to different resources and moving data among them based on storage and migration policies, access patterns and changes in the number, availability or nature of the resources.
  • Reference is made throughout to blocks and block names, but blocks are just some of the possible record types, with associated record names, that could be indexed.
  • the indexing techniques disclosed here could also be applied in other contexts.
  • the compressed first level index technique might be useful in places where Bloom Filters are currently employed, particularly where a compact representation is important (e.g., sharing information about a Web cache across the network).
  • the first level index could also be used by itself to provide a compact index for a fixed set of randomly named records. It is to be understood that the foregoing description is intended to illustrate a few possible implementations of the invention. These and a great many other implementations are within the scope of the appended claims.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A method for constructing an index suitable for indexing a large set of records identified by long generally randomly distributed record names, and for answering membership queries about the set, the method comprising adding a new record to the set and assigning the new record a new record name using a process designed to produce names where at least a portion of each name is at least approximately random, determining that the new record name is not already represented in the index by checking a first level index, combining the new record name with record name information already represented in the index to form a combined record name which is shorter than the new record name, adding the combined record name to the first level index to form a new first level index entry that represents the new record, adding a second new record to the set and assigning the second new record a second new record name which is different than the new record name, determining that the first level index does not contain sufficient information to decide whether or not the second new record name is different than the first new record name, and adding an entry to the first level index that represents the second new record name and that is shorter than the second new record name, wherein the first level index does not contain information sufficient to conclude that the new record name has been added to the index, wherein each different record in the set is assigned a different record name, and wherein at least a portion of the first level index is ordered based on record names.

Description

A STORAGE SYSTEM FOR RANDOMLY NAMED BLOCKS OF DATA
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to U.S. Provisional Application Serial No. 60/616,653, filed on October 6, 2004.
TECHNICAL FIELD
The invention relates to storage systems for computers, and particularly to systems designed for storage of large unstructured collections of data objects.
BACKGROUND
The performance of a modern file system depends upon assumptions about the structure of the file sets that it will store. File systems are not well suited to storing large sets of files with randomly chosen names or randomly chosen pathnames. An object storage system is similar to a file system but without the hierarchical directory structure. Objects may be named in an essentially random manner. Using an ordinary file system as an object storage system, to store hundreds of millions or billions of randomly named objects, results in very poor performance. If the set of object names is large and the names themselves are large, a complete list of names will not fit into random access memory. The straightforward alternative is to implement a hash table on disk, as is done for example in the Venti storage system described in Sean Quinlan and Sean Dorward, "Venti: a new approach to archival storage," in the Proceedings of the Conference on File and Storage Technologies (2002). This approach requires at least one access to an essentially randomly chosen disk location in order to get a pointer to the location of the object itself on disk.
Some object storage systems use a cryptographic hash of a block of data to name the block. A cryptographic hash is a function that deterministically computes a fixed width pseudo-random number (sometimes called a message digest or a fingerprint) from an input of any size. For example, the output of the SHA-256 cryptographic hashing algorithm is 256 bits wide (see National Institute of Standards and Technology, NIST FIPS PUB 180-2, "Secure Hash Standard," U.S. Department of Commerce, August 2002). The Venti storage system is an example of an object storage system that uses a cryptographic hash of a block of data to name the block. In the Venti storage system, storage space is conserved by avoiding storing duplicate copies of identical blocks, which have identical object names. Another example of a storage system that uses cryptographic hashes for block naming is described in Margolus et al., "A Data
Repository and Method for Promoting Network Storage of Data," US 2002/0038296 Al, March 28 2002. This second example supports a network protocol that allows bandwidth to be conserved in storing hash-named blocks of data by answering a query as to whether the name already exists in the storage system, and only sending the block if it does not. Supporting this kind of protocol well requires a storage system that can answer a query about the existence or non-existence of one object out of a very large set of objects efficiently and quickly.
This is the problem of detecting set membership. One of the earliest and most important contributions to this subject came from Burton H. Bloom in "Space/Time Tradeoffs in Hash Coding with Allowable Errors," Communications of the ACM,
July 1970. He observed that the problem can be simplified by allowing a small rate of false positive answers, which then need to be resolved using some other mechanism. His hashing technique requires about r log2e bits of storage per element of the set represented, in order to have a false-positive rate of 2^-r. Note that this storage requirement depends only on the number of elements in the set, and not on how big the elements are. Bloom's technique (now called a Bloom Filter) is widely used today. It does not, however, provide a mechanism for indexing the data and finding it, only for testing whether it exists.
In the domain of text indexing and searching, the problem of efficiently storing indexes for large collections of text records has been studied. One technique used there is Inverted File Indexing, which is described for example in the book by Witten, Moffat and Bell, "Managing Gigabytes," Morgan Kaufmann (1999). This technique involves sorting record numbers in the index and only representing differences in lists of record numbers. This technique wouldn't, however, save a significant fraction of the space in an index involving a sparse space of record numbers, as is the case with long hash-based names.
In addition to the problem of indexing randomly named objects, there is also the problem of organizing their storage on disk for efficient access and modification. The Venti storage system uses an append-log structure and makes no provision for ever changing, deleting or rearranging the stored items on disk. Although Venti was designed for archival storage, the lack of deletion capability is a significant drawback when archiving sensitive data that must, under law, be retained for some period of time but can then be deleted.
SUMMARY
In general, the invention features a method for constructing an index suitable for indexing a large set of records identified by long generally randomly distributed record names, and for answering membership queries about the set, the method comprising adding a new record to the set and assigning the new record a new record name using a process designed to produce names where at least a portion of each name is at least approximately random, determining that the new record name is not already represented in the index by checking a first level index, combining the new record name with record name information already represented in the index to form a combined record name which is shorter than the new record name, adding the combined record name to the first level index to form a new first level index entry that represents the new record, adding a second new record to the set and assigning the second new record a second new record name which is different than the new record name, determining that the first level index does not contain sufficient information to decide whether or not the second new record name is different than the first new record name, and adding an entry to the first level index that represents the second new record name and that is shorter than the second new record name, wherein the first level index does not contain information sufficient to conclude that the new record name has been added to the index, wherein each different record in the set is assigned a different record name, and wherein at least a portion of the first level index is ordered based on record names.
In preferred implementations, one or more of the following features may be incorporated. Each different record in the set may have a different entry in the first level index. The process used for combining the new record name may comprise determining a portion of information derived from the new record name that is sufficient to distinguish it from record names already represented in the index. The invention may further comprise adding a new entry to a second level index that includes the complete new record name or enough information to reconstruct it, determining that a queried record name is already represented in the index by first determining that the queried record name is represented by the new first level index entry and then determining that the queried record name is represented by the new second level index entry. The first level index may be stored in RAM and the second level index may be stored on disk. The portion of information derived from the new record name may be derived by omitting some subset of the bits of the binary value that represents the new record name. The combining may involve computing an arithmetic difference of at least portions of two record names or computing some other arithmetic or finite-field arithmetic operations involving at least portions of two record names. The process of assigning the new record name may involve generating a pseudo-random number, or computing a cryptographic hash of at least a portion of the record itself, or computing a cryptographic hash of some combination of record identifying information which is known to be unique. A portion of the index may represent a set of records for which record names were added to the index during a span of time that includes the time that the new record was added, and the portion may be retrieved as a unit in order to get additional information about the new record, and information about other records added during the span of time may be cached in RAM. Records or index information may be stored in a sequential log-structure on disk, and extra information recording the bitwise XOR of a set of pieces comprising a segment of the sequential log-structure may be written to disk to allow unreadable sectors on disk to be reconstructed. The space of possible record names may be divided up into a set of disjoint subspaces, each of which may be associated with one or more of a plurality of instances of the index. Different indexes associated with the same subspace may be assigned different roles based on a portion of the record name. The new record may be a block of content and the new record name may be a cryptographic hash of the block of content, and the index may be queried in order to avoid repeatedly transmitting or repeatedly storing the block of content. The new record name may be added to the index a second time, and a reference count associated with the new record name may indicate that the new record has been added twice. An annotation may be attached to the new entry in the first level index which includes information related to the new record or an indication of where additional information can be found. Information stored in the annotation attached to the new entry may be later represented elsewhere and may be removed from the entry in the first level index. At least a portion of the index may be organized based on when records were added to the index. 
Only the portion of information derived from the new record name that is sufficient to distinguish it from record names already represented in the index may be represented in the first level index. The sum of the lengths of the record names represented in the index may be larger than the sum of the lengths of the entries in the first level index. The first level index may be divided into disjoint segments based on a fixed and predetermined ordering among all possible record names. Record or index information may be stored in a sequential log-structure on disk, and a reaper program may copy a segment of this log-structure elsewhere on disk omitting some of the information and freeing the segment for reuse. Information related to the new record may be included in the segment and a reference count associated with the new record may be decremented to zero and the reaper program may not copy the information related to the new record before freeing the segment for reuse. Records or index information may be stored in a sequential log-structure on disk, and a range of bytes in this log-structure may be marked as being unchangeable for a period of time, with this unchangeable status enforced by a storage resource underlying the data store. As long as the index is not filled beyond its design capacity, the chance that a randomly chosen record name can be determined to not be represented in the index by consulting the first level index alone may be over 98%. The capacity of the index may be limited only by the storage space available. A set of records for which record names were added to the index during a span of time may be all stored in a localized region of a storage device, and a portion of the index representing the set of records may be stored with the set. The new first level index entry may be written to disk and may be removed from RAM, and determining that a queried record name is already represented in the index may comprise accessing the new first level index entry on disk. Information in the annotation attached to the new entry may be represented on disk and may be removed from the annotation. The new first level index entry may not include information related to the location of data on disk. The first level index entry may include an indication as to whether or not the entry comprises information other than record name information. A copying process may be applied to the index which copies information from first level index entries to disk and removes the information from the first level index. An annotation may be attached to the new entry in the first level index which includes an approximate disk location. An annotation may be attached to a new entry in a second level index stored on disk which includes an approximate disk location related to the new record. A plurality of reference counts may be associated with the new record name, with the sum of the plurality of reference counts reflecting the total number of times the record has been added to the index. The reference count associated with the new record name may have a reference count component on disk and a reference count component in the first level index, and the sum of reference count components belonging to the new record may reflect the number of times that the new record name has been added to the index.
A reaper program may copy records or index data from old locations on disk to new locations on disk, omitting some information from the copy, and the reaper program may overwrite the old locations with patterns of data in order to obscure at least the omitted data and render it unreadable. A reaper program may copy records or index data from source locations on a source storage device to destination locations on a destination storage device, omitting some information from the copy and marking the source locations as free space, wherein the choice of destination storage device may be made based on a prediction about when the copied data will next be accessed or changed. A segment of the first level index associated with the new record name may have a fixed size and location. A segment of the first level index associated with the new record name may have a variable size or location. A plurality of segments of the first level index may be stored in an array structure, and a pointer to a location within the array structure may define the start of a segment associated with the new record name.
In a further aspect, the invention features a method for constructing an index suitable for indexing a large set of records identified by long generally randomly distributed record names, and for answering membership queries about the set, the method comprising adding a new record to the set and assigning the new record a new record name using a process designed to produce names where at least a portion of each name is at least approximately random, determining that the new record name is not already represented in the index by checking a first level index that does not contain information sufficient to reconstruct the complete record names of records that have already been added to the index, abbreviating the new record name to form a new abbreviated name that is shorter than the new record name but that is sufficient to distinguish it from record names already represented in the index, adding a representation of the abbreviated record name to the first level index to form a new first level index entry that represents the new record, adding a second new record to the set and assigning the second new record a second new record name which is different than the new record name, determining that the first level index does not contain sufficient information to decide whether or not the second new record name is different than the first new record name, and adding an entry to the first level index that represents the second new record name and that is shorter than the second new record name, wherein each different record in the set is assigned a different record name, wherein the first level index is ordered based on abbreviated record names, and wherein a segment of the first level index is stored in a compacted form which is shorter than the sum of the lengths of the abbreviated record names represented in it.
In preferred implementations, one or more of the following features may be incorporated. The invention may further comprise adding a new entry to a second level index that includes the complete new record name or enough information to reconstruct it, determining that a queried record name is already represented in the index by first determining that the queried record name is represented by the new first level index entry and then determining that the queried record name is represented by the new second level index entry, wherein each different record in the set has a different entry in the first level index. Other features and advantages of the invention will be apparent from the drawings, detailed description, and claims.
DESCRIPTION OF DRAWINGS
Figure 1 shows the transformations involved in encoding a sparse set of randomly distributed record numbers into an index list. Figure 2 shows an example of truncating a block name for use in an index list.
Figure 3 shows a first level index divided up into index segments.
Figure 4 shows a byte-oriented entry format for the first level index.
Figure 5 shows a format for index entries used when different block names match (collide) when truncated. Figure 6 shows a format for index entries of a segment of a second level index (era index).
Figure 7 shows an encoding of lease and reference count information into an annotation attached to a first level index entry. Figure 8 shows a disk storage format organized as an append log of journal frames.
Figure 9 shows a disk journal frame structure.
Figure 10 illustrates the process of freeing space and compacting storage on disk (reaping) in the context of a shared block of storage.
Figure 11 illustrates how multiple data stores (four in the example) can be assigned ranges of block names based on some name bits.
Figure 12 again illustrates how multiple data stores (eight in the example) can be assigned ranges of block names. Figure 13 illustrates how data stores assigned to a given address range can be ordered based on another part of the block name.
Figure 14 illustrates the addition of parity information to Eras to allow recovery from disk read errors.
Figure 15 illustrates read-error recovery when an error encompasses a region overlapping two adjacent chunks of an Era.
Figure 16 shows two sectors that are radially adjacent on a disk.
Figure 17 illustrates two alternatives for organizing parity information for read-error recovery when errors on radially adjacent sectors are correlated.
Figure 18 shows three alternative byte-oriented formats for entries in the first level index.
Figure 19 illustrates the use of byte-range retention leases to protect a data store journal from modification.
DETAILED DESCRIPTION
There are a great many different implementations of the invention possible, too many to possibly describe herein. Some possible implementations that are presently preferred are described below. It cannot be emphasized too strongly, however, that these are descriptions of implementations of the invention, and not descriptions of the invention, which is not limited to the detailed implementations described in this section but is described in broader terms in the claims.
INTRODUCTION
In this description we will use the term block name to refer indifferently either to a name for a block of content that may be arbitrarily assigned or to a name based on a cryptographic hash of the block content. If all block names are based on a cryptographic hash of the block content (e.g., SHA-256), then block names are statistically guaranteed to be unique and randomly distributed. This same guarantee can also be made if all block names are based on a hash of some unique identifier associated with the block of content: for example, a file pathname along with a unique identifier for a file system. If both types of block names are used, then a block type can be prepended to the data to be hashed (content or identifier), to ensure that the data hashed is never the same in constructing the two kinds of names. As long as the block type for a content-based name is different from the block type for a unique-identifier-based name, the chance of an accidental agreement (collision) between a pair of names of the two types is no greater than for any pair of names of one type or the other. Block names, as defined here, are statistically guaranteed to be unique block identifiers.
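By way of illustration only, the following Python sketch shows how a block type can be prepended to the data being hashed so that content-based and identifier-based names can never be computed from identical input. The one-byte type codes and the use of Python's hashlib are assumptions made for the sketch, not values taken from this specification.

    import hashlib

    CONTENT_NAMED = b"\x01"      # assumed block-type code for content-based names
    IDENTIFIER_NAMED = b"\x02"   # assumed block-type code for identifier-based names

    def block_name_from_content(content: bytes) -> bytes:
        # The block type is prepended to the hashed data, as described above.
        return hashlib.sha256(CONTENT_NAMED + content).digest()

    def block_name_from_identifier(identifier: bytes) -> bytes:
        return hashlib.sha256(IDENTIFIER_NAMED + identifier).digest()

    print(block_name_from_content(b"some block data").hex())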
The Data Repository envisioned in US 2002/0038296 Al and related applications can be implemented as a distributed collection of storage servers, each of which is assigned responsibility for some portions of a block-name address space. Each storage server is assigned a set of ranges of block-name values. Within each storage server, one or more Data Stores, each associated with physical disk storage devices, is ultimately responsible for storing and indexing large numbers of pseudo-randomly named blocks of data.
Indexing the Data Store
The initial prototype of the Data Repository used a Data Store that embedded block names into an ordinary Linux ext2 filesystem. Even after tuning the mapping between block names and pathnames, as the number of named blocks in the store reached a few million, it took dozens of disk seeks, on average, to access each stored block. The problem of simply querying whether a given block name was already in use was similarly inefficient. Achieving bandwidth and storage savings for content-named blocks depends on this query. An obvious alternative for implementing a simple and fast indexing scheme would be to keep all of the index information in RAM. Given 256-bit hash-based block names and an expectation of storing and indexing several hundred million named blocks per storage server, this at first seemed impractical. A mechanism that makes it practical is illustrated in Figure 1. This mechanism exploits the predictable properties of a large set of high-quality pseudo-random numbers. To simplify the discussion here, it will be assumed that address ranges based on some number of the least significant bits of the block names are used to assign ranges of block names to Data Stores, so that the rest of the bits can be assumed to be random. As is indicated in Figure 1, the index is maintained in sorted order. Given a maximum of N numbers to index (e.g., a few hundred million) and a range of name values of size L (e.g., 2^256), the average separation between adjacent values in the sorted list is L/N. The distribution of differences between adjacent values in this sorted list is exponential: the chance that the separation will be more than x times the average is exp(-x) in the limit of large N. This can be seen by regarding the values in the list as binary fractions with an average separation of 1/N, and observing that the probability of a difference greater than x/N is (1-x/N)^N.
Thus, for example, the chance that the separation between two adjacent values in the sorted list is more than four times the average is about 1.8%. This means that the chance that the first (log2N - 2) bits of the difference are all zero is over 98%. If differences between adjacent values (deltas) are stored in place of the original values in the sorted list, the same information is represented but in almost all cases, the first (log2N - 2) bits of the differences don't need to be represented. This, however, does not by itself provide a significant space savings, since N is so much smaller than L. As is indicated in Figure 1, in this implementation the block names in the sorted list are truncated before computing the deltas. A power of two value M (smaller than L) is chosen, and for each block name, all but the first log2M bits are omitted (i.e., the range of values is reduced to M rather than L). The probability that a given truncated block name collides with (i.e., matches) some other truncated block name is less than N/M (there are fewer than N choices out of M that result in a collision). This means that the fraction of the truncated block names that are not uniquely associated with a single full block name is less than N/2M (since both colliding names become one name). Thus, for example, if M=32N, the fraction of the truncated values that represent collisions is about 1.6%, and the truncated value is only 5 bits longer than log2N. Putting these two observations about the improbability of big deltas and small deltas together (see Figure 2), one finds that the probability that it is necessary to store more than a 7-bit difference in order to represent a unique initial segment of each block name is about 3.4%: a 1.8% chance that any of the first (log2N - 2) bits need to be represented, and a 1.6% chance that any of the bits past (log2N + 5) need to be represented. By including a small amount of extra information in these 3.4% of the cases, it is possible to represent a unique initial segment of each of the block names using an average of less than one byte per block name. This is a reduction in space of a factor of 32 for SHA-256 based block names.
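The figures quoted above are easy to check empirically. The following Python sketch builds a sorted list of random names, truncates them to log2M bits, and measures how often the delta between adjacent truncated names is zero (a collision) or too large for seven bits; the 256-bit name width, N = 2^20 names and M = 32N are assumptions chosen for the experiment, not required values.

    import random

    NAME_BITS = 256                    # e.g. SHA-256 sized block names (assumed here)
    N = 1 << 20                        # number of names indexed in this experiment
    M = 32 * N                         # truncated-name range, M = 32N as in the text
    TRUNC_BITS = M.bit_length() - 1    # log2(M)

    # Simulate randomly distributed block names, kept in sorted order.
    names = sorted(random.getrandbits(NAME_BITS) for _ in range(N))

    # Keep only the first log2(M) bits of each name, then store differences.
    truncated = [n >> (NAME_BITS - TRUNC_BITS) for n in names]
    deltas = [b - a for a, b in zip(truncated, truncated[1:])]

    # A 7-bit delta covers values 1..127; 0 means a collision, >=128 means overflow.
    collisions = sum(d == 0 for d in deltas)
    overflows = sum(d >= 128 for d in deltas)
    print("collisions: %.2f%%" % (100.0 * collisions / len(deltas)))
    print("overflows:  %.2f%%" % (100.0 * overflows / len(deltas)))
    # Expect roughly 1.6% and 1.8% respectively, about 3.4% combined.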
The amount of space needed per block name is independent of both the size of the original block name and the number of names in the index. The position at which block names are truncated (i.e., the value of M) depends on the value of N, the maximum number of entries that the data store is designed to index. This number needs to be known in any case, however, since the maximum memory requirements for the index are proportional to the maximum number of blocks being indexed.
Querying the Index
Under the proposed scheme, for almost all block names only the first log2M bits of the name are represented in the index list. This means that, when the index is at its maximum size of N entries, the chance that a randomly chosen name collides with an existing entry in the index list is about N/M. This is the chance that a queried name that matches in the index list is not actually in the list of full block names. This is the false-positive rate of the index list as a membership tester. If M=32N, this is about 3%. If M=64N (one more bit) this is about 1.6%. There is no chance that the index list will incorrectly indicate that a queried item is not in the full list.
This compares favorably with the Bloom Filter technique mentioned in the Background section, which requires r log2e bits per indexed item to achieve a false positive rate of 2^-r. The index list technique uses less than r+3 bits per item, and unlike the Bloom Filter provides a full index, with a distinct entry for each item indexed. In the case where the queried name agrees to log2M bits with an entry in the list, more definitive information must be accessed in order to verify that the name agrees to all log2L bits. This more definitive information can be kept on disk, and constitutes a second level of indexing. The second level of index could, for example, simply be a complete hash table on disk. One access to the second level index on disk is sufficient to resolve any ambiguity. The first level index, in RAM, is constructed so that there is a low probability of finding that a queried name matches a first level index entry but is not actually present in the index. To a good approximation, the first level (in RAM) index indicates which names do not exist with no access to disk. Queries concerning names that do exist require one access to disk. This approach makes it practical for a storage client to always query when depositing content-named blocks into the storage system, in order to save bandwidth by avoiding transmitting blocks that are already stored. It also makes it efficient to share storage space when a previously stored content-named block is deposited again.
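A minimal sketch of the resulting two-level query path follows. The Python set and dict stand in for the delta-coded in-RAM list and the on-disk second level index respectively; they are illustrative assumptions rather than the byte formats described elsewhere in this document.

    NAME_BITS = 256
    TRUNC_BITS = 25        # log2(M); an assumed sizing, see the analysis above

    first_level = set()    # truncated names only (stands in for the in-RAM delta list)
    second_level = {}      # full name -> disk location (stands in for the on-disk index)

    def truncate(name):
        return name >> (NAME_BITS - TRUNC_BITS)

    def add(name, location):
        first_level.add(truncate(name))
        second_level[name] = location       # in the real store this write goes to disk

    def query(name):
        if truncate(name) not in first_level:
            return None                     # definitely absent: no disk access needed
        # Possible match (could be a false positive): one "disk" lookup resolves it.
        return second_level.get(name)

A miss in the first level answers the deposit-time "does this name already exist?" question with no disk access; only the small false-positive fraction of absent names costs a disk read.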
Adding an Entry to the Index
When a new named block is written to the data store, the index is queried to find out if the name already exists in the data store. In the course of this query, the block name of any colliding entry is retrieved. In the case of a collision, additional bits of both the old entry and the new entry are added to the first level index, so that both entries will represent a unique initial segment of the full block name.
Retrieving a Named Block
Since each named block has a distinct entry in the first level index, one could simply annotate each entry with the location of the block on disk. This would add several bytes to each entry, but would always allow a named block to be retrieved with a single disk access. The disk access would retrieve both the block and the full block name (or enough information to reconstruct it), which would be tested to determine if it is the block being queried. Alternatively, the second level index used for disambiguating collision cases could be a simple hash table on disk, and all retrievals could involve accessing this table to find both the full block name and block location, and then retrieving the named block itself. This second approach adds no data to the first-level index entries, but always takes two disk accesses to retrieve a block. An intermediate scheme, which adds a small annotation to each first-level index entry, is currently preferred. This intermediate scheme performs about as well as the full annotation scheme (in which block location is put in the first-level index) when patterns in the write order of named blocks are reflected closely in the retrieval order. By storing segments of second level index information close to the data blocks that they index, and that are written at about the same time, both storage and retrieval of the data blocks can also be made more efficient.
THE DATA STORE
The Data Store disclosed here is only one possible realization of the approach outlined in the Introduction. Some possible alternatives and enhancements will be discussed in the section on Other Implementations. The indexing technique used here is also widely applicable.
Figure 3 illustrates the structure of the first level index, which would normally be kept in RAM. The first level index is split up into segments, with each segment corresponding to a portion of the block name address space. This is accomplished in the illustration using an initial portion of the block name as a segment number. In the preferred implementation a separate fixed size array structure is associated with each segment. Initially a small number of segments are allocated, and whenever a segment becomes full its address range is cut in half and part of its contents are moved to a newly allocated segment responsible for the other half of the range. The number of initial bits of a block name needed to identify the corresponding segment-array is variable.
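The splitting behaviour can be pictured with the following sketch, in which each segment is keyed by a leading-bit prefix of the block name and is halved when it exceeds an assumed capacity. The tiny capacity, short names and plain Python lists are stand-ins for the fixed size array structures of the preferred implementation.

    import random

    SEGMENT_CAPACITY = 4       # tiny assumed capacity so splits are easy to observe
    NAME_BITS = 32             # short names keep the demo readable

    segments = {"": []}        # prefix (bit string) -> sorted list of names in that range

    def bits(name):
        return format(name, "0%db" % NAME_BITS)

    def find_segment(name):
        p, b = "", bits(name)
        while p not in segments:           # allocated prefixes form a complete cover
            p = b[:len(p) + 1]
        return p

    def insert(name):
        p = find_segment(name)
        seg = segments[p]
        seg.append(name)
        seg.sort()                         # stands in for sorted, delta-coded storage
        if len(seg) > SEGMENT_CAPACITY:    # full: halve the range, allocate a new segment
            del segments[p]
            segments[p + "0"] = [n for n in seg if bits(n)[len(p)] == "0"]
            segments[p + "1"] = [n for n in seg if bits(n)[len(p)] == "1"]

    for _ in range(64):
        insert(random.getrandbits(NAME_BITS))
    print(sorted(segments, key=len))       # longer prefixes appear where names clustered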
Each segment of the first level index comprises a list of entries maintained in sorted order, with the order determined by the truncated block names that are represented. Entries have two parts: a delta value that records the difference between an entry and the previous entry, and an annotation that records information about the named block corresponding to the index entry. Every index entry corresponds to one block, and every block has a single index entry.
Encoding Deltas
Figure 4 shows the byte-oriented index entry format used in the preferred implementation. This format uses a one-byte delta value and two bytes of annotation. Two extra bytes of information are appended if there is delta overflow (difference too large, indicated by a delta of 2^8-1). This allows a truncated value with 8 extra (higher order) bits to be represented. If this isn't enough (indicated by a delta of 2^16-1), then more bytes are appended, etc. This encoding uses about 0.3 extra bits per entry, on average, when the index is at maximum size.
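The exact byte layout is given only loosely above, so the following encoder/decoder is one simplified reading of the escape-byte idea, shown to make the growth pattern concrete: common small deltas cost one byte, and rare overflows add bytes in steps. It is a sketch under those assumptions, not the on-disk format of Figure 4.

    def encode_delta(d):
        """Simplified escape-byte delta encoding (one possible reading of the scheme)."""
        assert d > 0                       # a zero delta marks a collision record instead
        if d < 0xFF:
            return bytes([d])              # common case: one byte
        if d < 0xFFFF:
            return bytes([0xFF]) + d.to_bytes(2, "big")           # escape + 16-bit delta
        return bytes([0xFF, 0xFF, 0xFF]) + d.to_bytes(4, "big")   # rare: wider still

    def decode_delta(buf, pos):
        """Return (delta, new_position)."""
        if buf[pos] != 0xFF:
            return buf[pos], pos + 1
        if buf[pos + 1:pos + 3] != b"\xff\xff":
            return int.from_bytes(buf[pos + 1:pos + 3], "big"), pos + 3
        return int.from_bytes(buf[pos + 3:pos + 7], "big"), pos + 7

    # Round-trip check on a mix of typical and overflow deltas.
    stream = b"".join(encode_delta(d) for d in (3, 17, 254, 300, 70000))
    pos, out = 0, []
    while pos < len(stream):
        d, pos = decode_delta(stream, pos)
        out.append(d)
    assert out == [3, 17, 254, 300, 70000]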
Collisions (delta of zero) are handled most simply by using an auxiliary table with a full representation of one of the pair of colliding block names. This approach requires about 1.9 extra bits per entry, on average, when the index is at maximum size. The auxiliary table is always checked first in any index lookup. A more compact representation is used to handle collisions in the preferred implementation. A few extra bits are added to entries in the first level index to make colliding entries distinct. This approach is illustrated in Figure 5. A delta of 0 is used to signal the beginning of a collision record. This is followed by a delta that encodes the log2M bit truncated value that collided. Individual entries for the colliding block names then follow, each containing the next few bits past the original point of truncation and a normal entry annotation. The last collision entry is flagged, and the entry following it is a normal entry, with a delta relative to the preceding delta. Additional levels of collision record are defined (but not illustrated) in case two or more of the next-bits values are the same: different continuations past a common stem are again encoded. The average number of extra bits used by this encoding is about 0.125 bits per entry when the index is at maximum size.
Encoding Annotations
Figure 4 provides a byte-oriented format for encoding index annotations: information about the named block that corresponds to the index entry. In the preferred implementation, first level index entries are always an integer number of bytes long. This constraint is of course only a convenience.
In the index format of Figure 4, most index entries are three bytes long. This format comprises 13 bits of an Era Number that associates one of up to 8K segments of a second level index on disk with the indexed block. Each segment is referred to as an Era Index, and is stored at a location on disk near to the named blocks that it indexes. The Era Index consists of a list of entries with the format shown in Figure 6 (full block name, block type, and relative location of the block on disk). The annotation also contains 3 bits that are used for keeping track of "reference counts" and "leases" (encoded as in Figure 7). Content named blocks may be shared as components of larger objects. The data store keeps track of a reference count, in order to know if all larger objects that reference a given block have been deleted, and so the shared block can itself be deleted. Clients of the data store explicitly tell the data store when to increment and decrement reference counts associated with content-named blocks. Most content-named blocks will have a reference count of either zero or one, since most blocks will not be shared. If the reference count is higher, extra bits are appended to the index entry annotation to allow this information to be represented.
Leases are useful for content-named blocks which have not yet been incorporated into any larger structure, and so have a reference count of zero. Leases are used to guarantee that a newly deposited block is retained for at least 24 hours before it becomes subject to deletion because it is not in use. When a content-named block is deposited, it is given a new lease. Every 24 hours, a background process turns all new leases into old leases and all old leases into no-lease. A content-named block with no lease and a reference count of zero may be deleted by the data store and its space reclaimed.
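A sketch of the lease life cycle described above follows, with an assumed in-memory representation of the per-entry state (the real annotation is the 3-bit encoding of Figure 7, not a Python dict).

    import enum

    class Lease(enum.Enum):
        NONE = 0
        OLD = 1
        NEW = 2

    # Toy view of per-entry state: name -> [lease, reference_count].
    entries = {
        "block-a": [Lease.NEW, 0],   # just deposited, not yet referenced
        "block-b": [Lease.OLD, 0],   # deposited more than a day ago, still unreferenced
        "block-c": [Lease.NONE, 2],  # shared block held alive by its reference count
    }

    def age_leases():
        """Background pass run every 24 hours: NEW -> OLD -> NONE."""
        for state in entries.values():
            if state[0] is Lease.NEW:
                state[0] = Lease.OLD
            elif state[0] is Lease.OLD:
                state[0] = Lease.NONE

    def deletable(name):
        lease, refcount = entries[name]
        return lease is Lease.NONE and refcount == 0

    age_leases()
    print([n for n in entries if deletable(n)])   # 'block-b' becomes reclaimable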
On-Disk Format
Figure 8 shows the logical disk format used by the Data Store. This format is designed to aid in the storage, indexing and retrieval of randomly named blocks of data.
In a modern file system, advantage is taken of the fact that items that are stored in the same directory are more likely to be accessed together than files in different directories. This allows a file system to optimize access to disk by caching directory information for files that have recently been accessed, and thus reduce the amount of disk activity needed to find the location of stored data.
In a data store with randomly named blocks of data, there are no directory structures available to provide hints as to which blocks are likely to be accessed together. An alternative clue is available: temporal locality. Blocks of data that are written at about the same time are more likely to be read at about the same time. This suggests that the on-disk format for the data store should have the structure of an append-log: new information is written immediately after the latest information previously written. Segments of indexing information are inserted at intervals into this log. This structure allows fast writing, since all data is written to the same place (thus avoiding disk seeks). This structure keeps data that was written at about the same time close together on disk. This structure also provides a natural way to index information that was written at about the same time. By writing all data as journal frames with extra information attached to aid recovery, and by making the structure of the on-disk log regular, recovery from system failure is made easier and more reliable.
Figure 8 shows the disk structure used by the preferred implementation of the data store. The segments of indexing information start at predictable regularly-spaced positions on disk — every 64 MB in the illustration. This makes it possible to always find the indexes without resort to any stored information. The space from the end of one index segment to the start of the next is used to store blocks of named data, as well as other persistent information. To reflect the fact that the data stored there has all been written in a span of time, the segment of storage space is called an Era and the segment of index is called an Era Index.
The Era Indexes are the segments of the second level index discussed earlier. They play a role similar to that played by directories in a file system: when one named block from an Era is accessed, its Era index is consulted and cached. If other named blocks from the same Era are read while that index remains in RAM, all of their locations on disk are known from the cached Era index and so they will all be read with one disk access per named block. Since the blocks in one Era are close together, any subset of them can be accessed quickly with little seeking.
There is an advantage in having a Data Store correspond to a hard disk or RAID array, since there is at most one frontier per disk or array at which write activity can occur. Since the Era Index number in the annotation is of fixed size, as the capacity of storage devices grows either the number of bits used to encode the Era number or the size of an Era must get larger.
Journal Frame
To aid in crash recovery, each item written to disk is enclosed in a journal frame. An example of the journal frame structure used in the preferred implementation is shown in Figure 9. The journal frame starts with a fixed value that is used to mark the start of every frame. A different pseudorandom value is chosen for this mark each time the disk is formatted — such a fixed value that helps delineate the start of a stored record is sometimes called a "magic number". This is followed by a virtual Era number that helps verify that all of the frames belong to the same Era (virtual Era numbers have many more bits than actual Era numbers). Then follows a sequence number to help guarantee that no journal frames have been missed, a frame type which reflects what kind of information has been journaled, and then the length of the payload of information being protected (named blocks are of variable length, up to 64KB) followed by the payload itself. A 32-bit checksum ends the journal frame, allowing data corruption to be readily detected. If the payload is a content named block, it includes additional information such as the reference count for the block at the time it was last written (obtained from the entry annotation in the first level index). Since the log is written sequentially, there is no need to leave any space on disk between journal frames, even though they are of variable length. The only exception is at the end of an Era, where some space is left unused so that the first journal frame of the next Era (which is the Era index for the current Era) always starts at a 64MB boundary.
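The following sketch packs and verifies a frame along these lines. The field widths, the particular magic number and the use of CRC32 as the 32-bit checksum are assumptions for illustration; only the overall frame layout follows the description above.

    import struct
    import zlib

    MAGIC = 0x6A6F7572          # assumed per-format pseudorandom marker ("magic number")

    # Assumed widths: 4-byte magic, 8-byte virtual era, 8-byte sequence number,
    # 1-byte frame type, 4-byte payload length, payload, then a 4-byte checksum.
    HEADER = struct.Struct(">IQQBI")

    def write_frame(virtual_era, sequence, frame_type, payload):
        header = HEADER.pack(MAGIC, virtual_era, sequence, frame_type, len(payload))
        checksum = zlib.crc32(header + payload) & 0xFFFFFFFF
        return header + payload + struct.pack(">I", checksum)

    def read_frame(buf, pos=0):
        magic, era, seq, ftype, length = HEADER.unpack_from(buf, pos)
        if magic != MAGIC:
            raise ValueError("not at a frame boundary")
        body_end = pos + HEADER.size + length
        payload = buf[pos + HEADER.size:body_end]
        (stored,) = struct.unpack_from(">I", buf, body_end)
        if zlib.crc32(buf[pos:body_end]) & 0xFFFFFFFF != stored:
            raise ValueError("corrupt frame")          # would trigger error recovery
        return era, seq, ftype, payload, body_end + 4  # next frame starts here

    frame = write_frame(virtual_era=7, sequence=42, frame_type=1, payload=b"named block")
    print(read_frame(frame)[:4])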
In a crash recovery scenario, the Era indexes are redundant, because they can be regenerated from the other journal frames. The first level index, stored in RAM, is also redundant because it can be regenerated from the information in the journal.
The Reaper
If information is appended indefinitely to the frontier of the disk append-log, eventually the disk will be filled. The reaper is a program that runs as a background task, reclaiming freeable space on the disk and compacting retained data. The reaper treats the disk as a circular buffer, with the highest address on the disk adjacent to the lowest. Whenever at least 1% of the space used by the journal is freeable (due to objects having been deleted) the reaper runs (also under some other circumstances). The reaper starts at the oldest era that it has not yet processed and examines all journal frames in that era. It verifies the checksum of each journal frame and initiates a recovery procedure if a bad frame is found. Any payload that is still relevant is copied to a new journal frame at the frontier, and the corresponding Era Number in the first level index is updated to point to the new location. Any payload that is not still relevant is omitted. If a frame is found which contains a named block which is not pointed to by the first level index, it is deemed no longer relevant and is omitted. This is how modifications to named blocks are handled: the replacement block is written to the Era at the frontier and its first level index entry is pointed to the new location. The reaper cleans up the old version as it comes across it. Once an Era has been reaped, its space is appended to the available free space.
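A much simplified model of a reaping pass is sketched below. The lists and dict stand in for the on-disk journal and the first level index, and checksum verification, deletion and reference counts are omitted; the point is only the copy-forward-and-free behaviour.

    # Toy model: the "journal" is a list of eras, each a list of (name, payload) frames,
    # and the first level index records which era currently holds each live block.
    journal = [
        [("a", b"A0"), ("b", b"B0")],      # era 0 (oldest)
        [("a", b"A1"), ("c", b"C0")],      # era 1: "a" was rewritten here
    ]
    index = {"a": 1, "b": 0, "c": 1}       # name -> era number of the live copy

    def reap_oldest():
        era_number = 0
        frontier = []
        for name, payload in journal[era_number]:
            if index.get(name) == era_number:   # still the live copy: keep it
                frontier.append((name, payload))
                index[name] = len(journal)      # now lives in the frontier era
            # otherwise the frame is stale (superseded or deleted) and is dropped
        journal.append(frontier)
        journal[era_number] = None              # the era's space is now reusable

    reap_oldest()
    print(index)        # {'a': 1, 'b': 2, 'c': 1}: only "b" was copied forward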
The way that the reaper deals with reference counts is illustrated in Figure 10. In the "before" picture, Block A is a content-named block and is near the oldest part of the journal. Since Block A was written, its reference count has been changed twice, and journal frames have been written to disk to record these changes. The reference count in the first level index (in RAM) was updated as these increment/decrement requests were received, and is current.
The reaper copies Block A to the Era at the frontier, including the current reference count in the new journal frame. The old copy of Block A can be added to the free space on disk as soon as the Era containing it is finished being reaped. The records of changes in Block A's reference counts that occurred before it was reaped are no longer relevant: the reference count recorded along with the new copy of Block A is up to date and can be used in the event of a crash to rebuild the first level index. The two reference count journal frames shown will be omitted when the reaper processes the Eras containing them, and their space will be freed at that time.
Multiple Data Stores
As discussed earlier, a Data Repository may comprise a number of storage servers, each of which may in turn comprise a number of data stores. Some number of the least significant bits of the block name may be used to define address ranges assigned to different data stores. Using address ranges for this purpose has the advantage that it distributes the indexing problem among the data stores in a scalable fashion. Since block names are randomly distributed, the fraction of the total storage assigned to each data store is very closely proportional to the total size of all the address ranges assigned to it. The same address range can be assigned to multiple data stores as part of a fault tolerance (e.g., replication) scheme. Figure 11 shows an example of an assignment of address ranges to a set of four data stores. Here we have only shown the name bits that are involved in the address range assignments to data stores. Note that in this example, each address range is assigned to two data stores, as might be done in a system implementing two-fold replication of all data. Similarly, Figure 12 illustrates an assignment of address ranges to eight data stores.
Figure 13 shows a detail from Figure 12, focusing on the first column. Here four data stores are assigned the address range where the first relevant name bits are both zero. In such a case, it may sometimes be necessary to distinguish the data stores that are assigned a range, having each play a different role. This could be done using a fixed order, but this has the drawback that if some roles involve more computational, network or storage load (e.g., one store is the primary replica source, or some block types are only replicated once), the extra burden would always fall on the same store.
Figure 13 illustrates a method of assigning the data stores role-numbers in an equitable fashion. We first assign the stores in each address range a fixed order, and then we use an unused low-order portion of the block name to choose (essentially randomly) which data store will play role number 0. The other roles are then assigned in cyclic order.
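A sketch of this range-and-role selection follows. The bit positions, the 8-bit rotation field and the store table are assumptions made for the example; the description above only requires that the range come from some low-order name bits and that role 0 be chosen from an otherwise unused portion of the name.

    def stores_for_name(name, range_bits, stores_by_range):
        """Pick the data stores responsible for a block name and order their roles."""
        range_id = name & ((1 << range_bits) - 1)        # low-order bits pick the range
        assigned = stores_by_range[range_id]             # fixed-order list for that range
        # An otherwise unused low-order portion of the name picks which store gets
        # role 0; the remaining roles follow in cyclic order.
        rotation = (name >> range_bits) & 0xFF
        start = rotation % len(assigned)
        return assigned[start:] + assigned[:start]

    stores_by_range = {0: ["s0", "s4"], 1: ["s1", "s5"], 2: ["s2", "s6"], 3: ["s3", "s7"]}
    print(stores_for_name(0x1F6, range_bits=2, stores_by_range=stores_by_range))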
Tolerating Read Errors
Hard disks employ redundant encoding at the level of disk sectors to allow them to tolerate hardware problems and still read data correctly. Given that adding redundant information on disk subtracts from the space available for data storage, disk manufacturers add only as much error correction information as is necessary. A typical modern disk specifies that a sector on disk will be unreadable no more often than once in every 10^14 bits that are read.
The reaping mechanism described above continually copies and rewrites data. This prevents latent errors from accumulating, but it also causes the data on the disk to be read many times. If 25 500GB disks are each read completely once, this adds up to 10^14 bits. In storage systems with many large disks that are continually being reaped, one unreadable sector in 10^14 bits read would cause frequent failures.
In RAID systems, a group of D disks is coupled and parity information (i.e., sum modulo 2 of all corresponding bits) for corresponding sectors on D-1 of the disks is recorded on the corresponding sector of the D-th disk. If a read error occurs on one disk, the unreadable sector can be reconstructed from the information on the other disks.
A similar technique can be employed to deal with unreadable sectors in the on-disk journal of the present invention. Figure 14 illustrates the technique. Here each Era is divided into E+1 equal-sized chunks: E chunks containing data and one chunk containing parity information. Each bit of the parity chunk CE is the sum modulo two (XOR) of the corresponding bits of all the data chunks Ci. If one chunk contains unreadable data, it can be reconstructed from the other chunks of the Era by XOR-ing them all together.
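A minimal sketch of the parity computation and recovery follows; the chunk count and sizes are illustrative, not the 64MB Era geometry of the preferred implementation.

    def parity_chunk(data_chunks):
        """XOR of all supplied chunks; stored as the last chunk of the Era."""
        parity = bytearray(len(data_chunks[0]))
        for chunk in data_chunks:
            for i, b in enumerate(chunk):
                parity[i] ^= b
        return bytes(parity)

    def reconstruct(chunks, bad_index):
        """Rebuild one unreadable chunk from the E readable chunks (data + parity)."""
        readable = [c for i, c in enumerate(chunks) if i != bad_index]
        return parity_chunk(readable)   # XOR-ing the rest recovers the missing chunk

    # Tiny Era with E = 3 data chunks of 8 bytes each.
    data = [bytes([i] * 8) for i in (0x11, 0x22, 0x33)]
    era = data + [parity_chunk(data)]
    assert reconstruct(era, bad_index=1) == data[1]     # the lost chunk is recovered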
If we assume that unreadable sectors occur randomly, the chance of two bad sectors occurring in the same Era is very small. If an Era is 64MB and an unreadable sector occurs once in 10^14 bits read, the chance of encountering a second unreadable sector in an Era that already contains one is about one in 40,000. Thus we would need to read a million 500GB disks completely before we would expect to see two unreadable sectors in the same Era.
If there is some spatial correlation, so that bad sectors immediately adjacent to other bad sectors are more likely, this can be dealt with by increasing the size of the chunks. As long as no more than one chunk in an Era contains an error, the error will be recoverable. In fact, as long as the sequence of bytes containing the error is shorter than a chunk (even if it overlaps two chunks), the error is still recoverable. This is illustrated in Figure 15. Here we show an example of an Era with just four chunks, the last of which is the bitwise XOR of the first three. If the shaded region consisting of B0 and A1 is unreadable, then it can be recovered. B0 is recovered by XOR-ing together the corresponding regions of the other three chunks (namely B1, B2 and B3), while A1 is similarly recovered by XOR-ing together A0, A2 and A3.
If the regions containing errors can be localized to a fraction of a chunk, then this technique can be applied directly. In the preferred implementation, the chunk size is related to operating system buffer sizes and errors are only localized by the operating system to entire chunks. In this case, the region containing the unreadable sectors (B0 and A1 in the illustration of Figure 15) can still be identified by using the checksums in the journal frames (see Figure 9). Once two adjacent chunks containing unreadable sectors have been identified, each possible alignment of a chunk-sized region overlapping the two is assumed in turn and the data is tentatively corrected based on that assumption. The first alignment that produces correct checksums in all journal frames is used as the definitive correction.
This technique can be extended to deal with localized correlations across disk tracks. Figure 16 shows a schematic diagram of a disk, showing tracks and sectors. A track on a disk consists of all of the data that can be accessed without moving the read/write heads radially (i.e., without seeking). It might be the case that, for adjacent tracks of data on a disk, sectors that are on different tracks but adjacent to each other radially may have correlated failures. This could be dealt with by making the Era size smaller than the storage capacity of any single track, so that the parity information in each Era can be used to deal with the sector errors independently. If this results in an inconveniently small Era, this could alternatively be dealt with by dividing an Era up into sections, each of which is smaller than any single track. This approach is illustrated in Figure 17. In alternative A, each section includes blocks of data and a parity block. In the preferred alternative B, the parity blocks are all put into the last section, so that this looks essentially like the original scheme of Figure 14, but with the parity information at the end of the Era having additional structure.
Other Implementations
First-level index on disk: An on-disk first level index with a very low rate of false positives and direct pointers to block locations could act as a very compact alternative to a full hash table on disk, almost always providing a pointer to the block name with a single disk access. If some in-memory scheme for caching index entries were used in conjunction with an on-disk first-level index, the compactness of the on-disk index would be valuable in merging updates made to the in-memory cache into the on-disk index: the amount of data that would need to be read and written for an update pass over the entire on-disk structure would be reduced by a large factor.
First-level index using hash buckets: A structure is described for the first level index in the preferred implementation which involves allocating space only as needed, splitting a fixed size segment of the index into two new fixed size segments whenever it becomes full. There are many alternative structures which could be used. For example, fixed size hash buckets, each of which contains a segment of the index, are a simple alternative. This approach involves pre-allocating the full space for the index. In order to account for statistical variation in the filling of the hash buckets, a small percentage of extra space needs to be allocated to each hash bucket to accommodate a desired average filling.
First-level index using array with landmarks: Another alternative structure that is logically possible for the first level index is a single long array — a first level index with just a single segment. This would be very slow, since the deltas would always have to be traversed from the start. This could be sped up, however, by inserting a set of landmark-entries regularly spaced in the range of possible names, and maintaining external pointers that track the positions of these landmark entries. If the landmark entries are initially evenly spaced in an array sized for the maximum number of entries that the index supports, this is very similar to the hash-bucket approach, but has the advantage that no extra space needs to be allocated to allow for statistical variation in the filling of the different hash buckets. If a bucket overflows, entries after it (including a landmark) can be moved down a bit to make room. This makes it practical to use much smaller hash buckets (with concomitantly greater statistical fluctuation in filling), so that the amount of linear search (traversing a list of deltas) for each lookup is reduced.
Accumulating space-usage statistics: It is of interest to be able to accumulate statistics for the data store regarding space used (i.e., not freeable) and amount of shared storage. This can be accomplished by maintaining a running total of the space occupied by blocks with non-zero reference counts, and a separate total of the number of bytes referenced (i.e., sum of block size times reference count). These totals can be updated as reference counts are incremented and decremented as long as the size of the corresponding blocks are known. To make this information more efficient to access, a copy of the block size can be added to the Era Index entry of Figure 6.
Reference count deltas: The reference count that was current when a block was last reaped is recorded along with the block. Only changes relative to this value need to be recorded in the first level index: each time a block is reaped and its reference count is recorded on disk, the value recorded in the first level index can be reset to zero. The full reference count for a block is then the sum of the base value stored with the block and the reference count delta stored in the first level index. All blocks with reference counts that haven't changed since they were last reaped will have reference count deltas of zero in the first level index. For efficiency in reaping and in accumulating space usage statistics, a copy of the base value of the reference count recorded with the block can be added to the Era Index entry of Figure 6.
Multiple reference counts per block: If data from multiple sources (e.g., physical locations, administrative domains or file systems) has been deposited in a data store, it may be desirable to be able to efficiently separate out the data from a particular source at a later time, to be copied to another data store with correct reference counts. This need might arise, for example, in a data recovery scenario where data from multiple Data Repositories has been replicated to a single Data Repository, and the loss of several data stores at one of the source Repositories requires recovery of all blocks belonging to that source in some set of address ranges. To enable efficient separation by source, a separate reference count can be stored with each block for each defined data source that references it. If only reference count deltas are stored in the first level index, then blocks that haven't been referenced since the last time they were reaped will have all deltas of zero, and this state can be efficiently encoded in the first level index as the default state. A list of identifiers of sources associated with a given data block can be stored with that block, and reference count deltas in the block's first level index entry can refer to the ordinal number within the list to provide an efficient encoding. When a source references a data block for the first time, the source identifier can be used directly to label the reference count delta in the first level index entry. For efficiency, a copy of the list of sources associated with a block and the corresponding reference counts (from the time the block was last reaped) can be added to the Era Index entry of Figure 6.
First-level index with default values: Figure 18 shows three examples of alternative byte-oriented entry formats for the first level index (Figure 4 showed the format used in the preferred implementation). Alternative format A uses more bits for era numbers than the format of Figure 4 and reserves just one bit for other information. Every other piece of information that may be associated with a named block is assigned a default value, and if all pieces of information related to a particular entry have their default values, then no other information needs to be explicitly represented. For example, it is normally the case that most blocks haven't been recently deposited and so don't have leases, and so no bits need to be reserved in most entries for lease information, as is done in the format of Figure 4. If all extra information has its default value, a format A entry is three bytes long.
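The default-value convention can be illustrated with a toy packing routine. The 14/9/1 bit split below is an assumption made purely for the example (the actual widths are those of Figure 18, which is not reproduced here); what matters is that the single flag bit says whether any non-default extra information follows the fixed three bytes.

def pack_entry(delta, era, extra=b""):
    # 14 bits of delta, 9 bits of era number, 1 "extra information present" flag
    # (illustrative widths only).
    assert delta < (1 << 14) and era < (1 << 9)
    word = (delta << 10) | (era << 1) | (1 if extra else 0)
    return word.to_bytes(3, "big") + extra    # three bytes in the common default case

def unpack_entry(buf):
    word = int.from_bytes(buf[:3], "big")
    delta = word >> 10
    era = (word >> 1) & 0x1FF
    extra = buf[3:] if (word & 1) else b""
    return delta, era, extra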
First-level index without pointers to second-level index: A second alternative entry format B is shown in Figure 18. This format has one less bit of collision resistance than format A and uses the same extra-information flag and default conventions. In this format, no information about a second-level index is stored in the first level index, so that the first level index size is minimized. A first level index using this format still identifies new block names efficiently, and caching of Era Index information may be sufficient to identify existing block names efficiently. Information recording the locations of new blocks might be cached in memory (perhaps as annotations) so that updates to an on-disk second level index (separate from the Era Indexes) can be aggregated.
First-level index with approximate disk locations: A third alternative entry format C would be useful in an on-disk first level index of the kind discussed earlier in this section. In this alternative, the annotation includes the full disk location of the named block. In this case, we make the delta about twice as long, adding 7 more bits of collision resistance, so that the chance of a false positive match (which would result in an unnecessary disk read) is 2^-13. Two bytes are saved from the location information by only pointing to the 64KB chunk that contains the start of the named block. All reads are 130KB long, to ensure that the whole block (maximum 64KB long) is read. Some extra information is included in the annotation in the rare case where the first journal frame in the region read can't be found by scanning for the fixed value (magic number) that marks its start.
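For concreteness, the chunk arithmetic implied by this format might look like the following sketch; the device interface and names are hypothetical, and only the 64KB chunk granularity and 130KB read size come from the description above.

CHUNK = 64 * 1024      # the annotation stores only the chunk containing the block's start
WINDOW = 130 * 1024    # large enough to cover a 64KB block starting anywhere in the chunk

def chunk_number(byte_offset):
    return byte_offset // CHUNK               # value recorded in the index annotation

def read_block_window(device, chunk):
    # One read covers the whole named block; the journal frame's magic number is
    # then located by scanning within the window (extra information is kept for
    # the rare case where that scan would be ambiguous).
    device.seek(chunk * CHUNK)
    return device.read(WINDOW)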
First-level index with non-byte-aligned entries: Non-byte-oriented entry formats can of course also be employed. Variable length Golomb codes are designed precisely for storing the kinds of geometrically distributed deltas that we are dealing with here, and representing annotations with exactly the number of bits required may reduce memory usage slightly. Ignoring the overhead of handling collisions (which is a separate issue), the theoretical limit for codes in this context is an average of (r + log2 e) bits to represent each delta (where r = log2(M/N)), and Golomb codes will come very close to this limit. The encoding used in the preferred implementation uses about (r + 2.3) bits per delta, which is less than one bit more than the theoretical minimum; a brief Golomb-Rice coding sketch appears after this passage.

First level index with more or less compaction: The amount of compaction used in the first level index is a practical tradeoff: size versus speed and simplicity. For example, using non-byte-aligned entries saves additional space, at the cost of additional complexity. Very simple implementations might use a separate hash table for all cases where the difference between adjacent sorted names is too big or too small for a fixed size delta representation, or embed full names directly into the list of deltas in such cases. Note that when a new name agrees with an existing name in the first level index up to its truncation point, only one of the names actually needs to be represented in the first level index with additional resolution in order to preserve the property that new names can collide with at most one existing name in the first level index. Another simple alternative implementation would use truncated names in the first level index rather than deltas, truncating each name to a unique initial segment and relying on a separate compression process applied to segments of the first level index to reduce their size when they aren't being actively accessed.
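To make the bit counts above concrete, the following sketch uses a Golomb-Rice code (the power-of-two special case of a Golomb code); it is an illustration only and is not the encoding of the preferred implementation. With the Rice parameter k chosen near log2 of the mean gap, geometrically distributed deltas cost roughly (k + 1.6) bits each on average, close to the (r + log2 e) limit quoted above; handling of collisions is, as noted, a separate issue.

def rice_encode(delta, k):
    # Unary-coded quotient, a terminating zero, then a k-bit binary remainder.
    q, r = delta >> k, delta & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r, bits[q + 1 + k:]   # decoded delta and the remaining bits

# Example: a gap of 70000 with k = 16 costs 18 bits (1 unary + 1 stop + 16 remainder).
code = rice_encode(70000, 16)
delta, rest = rice_decode(code, 16)
assert delta == 70000 and rest == ""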
Including other types of information in the index: Several types of information have been mentioned as useful to include in a first level index entry annotation: leases, reference counts, block locations on disk, and the location on disk of additional indexing information. The presence of a complete compact indexing structure to which other information related to individual named blocks can be attached obviously has many other uses. Other information which could be attached to an index entry includes: locking information, temporary markers for blocks that should be copied somewhere or migrated, cached full block name, cached disk location, cached object metadata, age or activity information, other location information (which disk, which tape, etc.), security or authorization information, and time related information. As long as almost all blocks have their default values for the optional information, allowing for it doesn't appreciably increase the size of the index. Furthermore, as was discussed in the reference count delta and space usage examples above, information that is initially attached to the first level index entries can be moved to the second level index entries when a block is reaped.
Shredding or migrating data while reaping: The reaper could provide special processing when deleting some kinds of blocks. For example, blocks that were retained for some period of time because of government regulatory requirements may require special shredding (multiple overwrites with random data) when they are finally deleted. Shredding could also be the norm. The reaper could also be involved in data migration, moving data which hasn't been accessed recently (and so is not expected to be accessed soon) or which has long-term retention requirements (and so will not change soon) to disks that can be turned off, or to offline media. In this case, at least the first level index information would need to be kept on media that remain accessible. More generally, data can be moved to appropriate targets (storage devices or portions of storage devices) based on a prediction of when the data will next be needed, or next need to change. Data which must not change during some period of time might even be aggregated on a storage resource where a retention period constraint is enforced by the storage resource.

Byte-range retention leases: If access to a storage resource is shared by more than one data store (as it might be, for example, in a storage area network), it is desirable to have the shared storage resource prevent one data store from modifying journal frames written by another data store. It is also desirable to prevent software bugs in data store software from corrupting journal frames that have been fully written and closed to further modification. Both of these goals can be accomplished with byte-range retention leases. A retention lease specifies that a range of storage locations can be read but cannot be modified by any process (including the data store process that originally wrote the data there) for some specified period of time, which cannot be decreased. The range of bytes is not reserved for access by one process; it is reserved for access by no process. Leases for regions that are part of the journal are periodically renewed, so that the journal remains unmodifiable. Journal frames that have been reaped and added to free space stop having their leases renewed, and these leases eventually expire and the space becomes available for reuse. Retention leases are persistent across ordinary hardware reboots and resets. In a typical data store usage scenario, leases might last for days or weeks, long enough that system maintenance is unlikely to prevent renewals for a long enough period that leases on unfreed journal frames expire. Figure 19 provides an example use of retention leases. Region A was formerly part of the journal but is now free space in which leases have not yet expired. Region B consists of Eras that have been fully written and closed to further modification. Region C consists of space that can be exclusively written to by one particular data store process. Region D consists of free space that can be read or written by any process. In this example, retention leases are initiated, extended and released for entire Eras, rather than for individual journal frames.
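A minimal sketch of byte-range retention leases as just described is given below; the interface is invented, and real enforcement would live in the shared storage resource rather than in the data store process. Ranges here are half-open byte intervals (one per Era, say), renewals may only extend an expiry, and any write overlapping an unexpired lease is refused for every process.

import time

class RetentionLeases:
    def __init__(self):
        self.leases = []   # (start, end, expires_at) triples, e.g. one per Era

    def renew(self, start, end, expires_at):
        for i, (s, e, exp) in enumerate(self.leases):
            if (s, e) == (start, end):
                # A retention period can be extended but never decreased.
                self.leases[i] = (s, e, max(exp, expires_at))
                return
        self.leases.append((start, end, expires_at))

    def write_allowed(self, start, end, now=None):
        now = time.time() if now is None else now
        # Refuse the write if it overlaps any range whose lease has not yet expired.
        return not any(s < end and start < e and exp > now
                       for s, e, exp in self.leases)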
Unified addressing of blocks: We assumed, for simplicity, that in a multi data store system the bits derived from the block name that are used for distributing the blocks between different data stores are different than the bits that are used to distribute data between segments of the first level index. This made our randomness assumptions simpler, but it meant that the stored truncated names in the first level index didn't contain information about the address ranges used for inter-store distribution. If this assumption doesn't hold, and the same initial portion of each block name is used for both kinds of distribution, the main thing that changes is that the block names held by a particular data store are concentrated into a smaller total range, and so are the truncated names. Within each range, the names are still distributed randomly. This changes the appropriate point at which to truncate the block names, since the average separation depends upon the total available range and the maximum number of blocks being indexed and stored. If the assignment of ranges to a data store changes with time but the total number of named blocks that the store can index doesn't change, then it may be that the mean separation between entries (and hence the point at which block names need to be truncated to form entries in the first level index) changes. This adds some complexity. Regenerating the first level index with entries truncated at a different position might best be done incrementally by the reaper, since in general all of the full block names would have to be re-read from disk.
Randomness and block names: Block names might only be approximately random (i.e., characterized by a high entropy probability distribution), or only a portion of the block name may be approximately random. There should be enough randomness that, in a large list of sorted names, the differences between adjacent names are reasonably predictable. If that is the case, then we know where to truncate the names so that differences can usually be represented by a value that is small enough to be compact but is hardly ever zero (and so we rarely require additional information to represent names distinctly). Block names do not, of course, have to be created randomly or pseudorandomly to have a portion that is sufficiently random to work for the index. For example, if blocks are named by long timestamps of when they were created, then the least significant portion of the timestamp may be quite random.

Varying other features: The description of the preferred implementation was made very specific in order to promote clarity, but many features could be varied. For example, different cryptographic hash functions could be used, disks could be virtual disks (for example, in a storage area network) or even other kinds of media. All of the storage could be in RAM. On-disk structure could be very different, with different sizes and structure of Eras, different structure and placement of Era Indexes or even elimination of Era Indexes (and hence Eras) in favor of other kinds of second level indexes, or even putting more direct block location information into RAM. The append log structure could be more sophisticated with more use of pointers to segments of disk data, so that information that hasn't changed is copied less. The log structure could be abandoned in favor of some other structure, with no use made of temporal locality or temporal locality exploited in some other manner. If there are multiple data store instances running on the same (or tightly coupled) physical hardware, they may share some resources. For example, some of them might share a single common first level index. One data store might manage more than one set of storage resources, allocating named blocks to different resources and moving data among them based on storage and migration policies, access patterns and changes in the number, availability or nature of the resources.
Other kinds of indexing: Reference is made throughout to blocks and block names, but blocks are just some of the possible record types, with associated record names, that could be indexed. The indexing techniques disclosed here could also be applied in other contexts. For example, the compressed first level index technique (with or without the handling of collisions) might be useful in places where Bloom Filters are currently employed, particularly where a compact representation is important (e.g., sharing information about a Web cache across the network). The first level index could also be used by itself to provide a compact index for a fixed set of randomly named records. It is to be understood that the foregoing description is intended to illustrate a few possible implementations of the invention. These and a great many other implementations are within the scope of the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A method for constructing an index suitable for indexing a large set of records identified by long generally randomly distributed record names, and for answering membership queries about the set, the method comprising:
adding a new record to the set and assigning the new record a new record name using a process designed to produce names where at least a portion of each name is at least approximately random;
determining that the new record name is not already represented in the index by checking a first level index;
combining the new record name with record name information already represented in the index to form a combined record name which is shorter than the new record name;
adding the combined record name to the first level index to form a new first level index entry that represents the new record;
adding a second new record to the set and assigning the second new record a second new record name which is different than the new record name;
determining that the first level index does not contain sufficient information to decide whether or not the second new record name is different than the first new record name; and
adding an entry to the first level index that represents the second new record name and that is shorter than the second new record name,
wherein the first level index does not contain information sufficient to conclude that the new record name has been added to the index,
wherein each different record in the set is assigned a different record name, and
wherein at least a portion of the first level index is ordered based on record names.
2. The method of claim 1 wherein each different record in the set has a different entry in the first level index.
3. The method of claim 1 wherein the process used for combining the new record name comprises determining a portion of information derived from the new record name that is sufficient to distinguish it from record names already represented in the index.
4. The method of claim 1 further comprising: adding a new entry to a second level index that includes the complete new record name or enough information to reconstruct it; determining that a queried record name is already represented in the index by first determining that the queried record name is represented by the new first level index entry and then determining that the queried record name is represented by the new second level index entry.
5. The method of claim 4 wherein the first level index is stored in RAM and the second level index is stored on disk.
6. The method of claim 3 wherein the portion of information derived from the new record name is derived by omitting some subset of the bits of the binary value that represents the new record name.
7. The method of claim 1 wherein the combining involves computing an arithmetic difference of at least portions of two record names or computing some other arithmetic or finite-field arithmetic operations involving at least portions of two record names.
8. The method of claim 1 wherein the process of assigning the new record name involves generating a pseudo-random number, or computing a cryptographic hash of at least a portion of the record itself, or computing a cryptographic hash of some combination of record identifying information which is known to be unique.
9. The method of claim 1 wherein a portion of the index represents a set of records for which record names were added to the index during a span of time that includes the time that the new record was added, and the portion is retrieved as a unit in order to get additional information about the new record, and information about other records added during the span of time is cached in RAM.
10. The method of claim 1 wherein records or index information are stored in a sequential log-structure on disk, and extra information recording the bitwise XOR of a set of pieces comprising a segment of the sequential log-structure is written to disk to allow unreadable sectors on disk to be reconstructed.
11. The method of claim 1 wherein the space of possible record names is divided up into a set of disjoint subspaces, each of which is associated with one or more of a plurality of instances of the index.
12. The method of claim 11 wherein different indexes associated with the same subspace are assigned different roles based on a portion of the record name.
13. The method of claim 1 wherein the new record is a block of content and the new record name is a cryptographic hash of the block of content, and the index is queried in order to avoid repeatedly transmitting or repeatedly storing the block of content.
14. The method of claim 1 wherein the new record name is added to the index a second time, and a reference count associated with the new record name indicates that the new record has been added twice.
15. The method of claim 1 wherein an annotation is attached to the new entry in the first level index which includes information related to the new record or an indication of where additional information can be found.
16. The method of claim 15 wherein information stored in the annotation attached to the new entry is later represented elsewhere and removed from the entry in the first level index.
17. The method of claim 1 wherein at least a portion of the index is organized based on when records were added to the index.
18. The method of claim 3 wherein only the portion of information derived from the new record name that is sufficient to distinguish it from record names already represented in the index is represented in the first level index.
19. The method of claim 1 wherein the sum of the lengths of the record names represented in the index is larger than the sum of the lengths of the entries in the first level index.
20. The method of claim 1 wherein the first level index is divided into disjoint segments based on a fixed and predetermined ordering among all possible record names.
21. The method of claim 1 wherein record or index information is stored in a sequential log-structure on disk, and a reaper program copies a segment of this log- structure elsewhere on disk omitting some of the information and freeing the segment for reuse.
22. The method of claim 21 wherein information related to the new record is included in the segment and a reference count associated with the new record is decremented to zero and the reaper program does not copy the information related to the new record before freeing the segment for reuse.
23. The method of claim 1 wherein records or index information are stored in a sequential log-structure on disk, and a range of bytes in this log-structure is marked as being unchangeable for a period of time, with this unchangeable status enforced by a storage resource underlying the data store.
24. The method of claim 1 wherein, as long as the index is not filled beyond its design capacity, the chance that a randomly chosen record name can be determined to not be represented in the index by consulting the first level index alone is over 98%.
25. The method of claim 24 wherein the capacity of the index is limited only by the storage space available.
26. The method of claim 1 wherein a set of records for which record names were added to the index during a span of time are all stored in a localized region of a storage device, and a portion of the index representing the set of records is stored with the set.
27. The method of claim 1 wherein the new first level index entry is written to disk and removed from RAM, and determining that a queried record name is already represented in the index comprises accessing the new first level index entry on disk.
28. The method of claim 15 wherein information in the annotation attached to the new entry is represented on disk and removed from the annotation.
29. The method of claim 1 wherein the new first level index entry does not include information related to the location of data on disk.
30. The method of claim 1 wherein the first level index entry includes an indication as to whether or not the entry comprises information other than record name information.
31. The method of claim 1 wherein a copying process is applied to the index which copies information from first level index entries to disk and removes the information from the first level index.
32. The method of claim 1 wherein an annotation is attached to the new entry in the first level index which includes an approximate disk location.
33. The method of claim 1 wherein an annotation is attached to a new entry in a second level index stored on disk which includes an approximate disk location related to the new record.
34. The method of claim 14 wherein a plurality of reference counts are associated with the new record name, with the sum of the plurality of reference counts reflecting the total number of times the record has been added to the index.
35. The method of claim 14 wherein the reference count associated with the new record name has a reference count component on disk and a reference count component in the first level index, and the sum of reference count components belonging to the new record reflects the number of times that the new record name has been added to the index.
36. The method of claim 1 wherein a reaper program copies records or index data from old locations on disk to new locations on disk, omitting some information from the copy, and the reaper program overwrites the old locations with patterns of data in order to obscure at least the omitted data and render it unreadable.
37. The method of claim 1 wherein a reaper program copies records or index data from source locations on a source storage device to destination locations on a destination storage device, omitting some information from the copy and marking the source locations as free space, wherein the choice of destination storage device is made based on a prediction about when the copied data will next be accessed or changed.
38. The method of claim 20 wherein a segment of the first level index associated with the new record name has a fixed size and location.
39. The method of claim 20 wherein a segment of the first level index associated with the new record name has a variable size or location.
40. The method of claim 20 wherein a plurality of segments of the first level index are stored in an array structure, and a pointer to a location within the array structure defines the start of a segment associated with the new record name.
41. A method for constructing an index suitable for indexing a large set of records identified by long generally randomly distributed record names, and for answering membership queries about the set, the method comprising:
adding a new record to the set and assigning the new record a new record name using a process designed to produce names where at least a portion of each name is at least approximately random;
determining that the new record name is not already represented in the index by checking a first level index that does not contain information sufficient to reconstruct the complete record names of records that have already been added to the index;
abbreviating the new record name to form a new abbreviated name that is shorter than the new record name but that is sufficient to distinguish it from record names already represented in the index;
adding a representation of the abbreviated record name to the first level index to form a new first level index entry that represents the new record;
adding a second new record to the set and assigning the second new record a second new record name which is different than the new record name;
determining that the first level index does not contain sufficient information to decide whether or not the second new record name is different than the first new record name; and
adding an entry to the first level index that represents the second new record name and that is shorter than the second new record name,
wherein each different record in the set is assigned a different record name,
wherein the first level index is ordered based on abbreviated record names, and
wherein a segment of the first level index is stored in a compacted form which is shorter than the sum of the lengths of the abbreviated record names represented in it.
42. The method of claim 41 further comprising: adding a new entry to a second level index that includes the complete new record name or enough information to reconstruct it; determining that a queried record name is already represented in the index by first determining that the queried record name is represented by the new first level index entry and then determining that the queried record name is represented by the new second level index entry, wherein each different record in the set has a different entry in the first level index.
EP05808531A 2004-10-06 2005-10-06 A storage system for randomly named blocks of data Ceased EP1797510A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US61665304P 2004-10-06 2004-10-06
PCT/US2005/035994 WO2006042019A2 (en) 2004-10-06 2005-10-06 A storage system for randomly named blocks of data

Publications (1)

Publication Number Publication Date
EP1797510A2 true EP1797510A2 (en) 2007-06-20

Family

ID=36148914

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05808531A Ceased EP1797510A2 (en) 2004-10-06 2005-10-06 A storage system for randomly named blocks of data

Country Status (4)

Country Link
US (3) US7457813B2 (en)
EP (1) EP1797510A2 (en)
JP (1) JP4932726B2 (en)
WO (1) WO2006042019A2 (en)

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2158542B1 (en) * 2006-04-04 2019-06-05 Red Hat, Inc. Storage assignment and erasure coding technique for scalable and fault tolerant storage system
CA2651323C (en) 2006-05-05 2016-02-16 Hybir Inc. Group based complete and incremental computer file backup system, process and apparatus
KR100817562B1 (en) * 2007-03-22 2008-03-27 주식회사 이너버스 Method for indexing a large scaled logfile, computer readable medium for storing program therein, and system for the preforming the same
US8768895B2 (en) * 2007-04-11 2014-07-01 Emc Corporation Subsegmenting for efficient storage, resemblance determination, and transmission
GB2466579B (en) 2007-10-25 2012-12-26 Hewlett Packard Development Co Data processing apparatus and method of deduplicating data
US8140637B2 (en) 2007-10-25 2012-03-20 Hewlett-Packard Development Company, L.P. Communicating chunks between devices
US8782368B2 (en) * 2007-10-25 2014-07-15 Hewlett-Packard Development Company, L.P. Storing chunks in containers
US8838541B2 (en) * 2007-10-25 2014-09-16 Hewlett-Packard Development Company, L.P. Data processing apparatus and method of processing data
US8209334B1 (en) * 2007-12-28 2012-06-26 Don Doerner Method to direct data to a specific one of several repositories
JP5132339B2 (en) * 2008-01-31 2013-01-30 キヤノン株式会社 Information processing apparatus, control method therefor, and computer program
US9021068B2 (en) * 2008-02-13 2015-04-28 International Business Machines Corporation Managing a networked storage configuration
US8028000B2 (en) * 2008-02-28 2011-09-27 Microsoft Corporation Data storage structure
DE112008003826B4 (en) 2008-04-25 2015-08-20 Hewlett-Packard Development Company, L.P. Data processing device and method for data processing
US8108446B1 (en) * 2008-06-27 2012-01-31 Symantec Corporation Methods and systems for managing deduplicated data using unilateral referencing
EP2347345A2 (en) * 2008-10-13 2011-07-27 Faroo Assets Limited System and method for distributed index searching of electronic content
US20100174968A1 (en) * 2009-01-02 2010-07-08 Microsoft Corporation Heirarchical erasure coding
US8145598B2 (en) * 2009-02-23 2012-03-27 Iron Mountain Incorporated Methods and systems for single instance storage of asset parts
US8397051B2 (en) 2009-02-23 2013-03-12 Autonomy, Inc. Hybrid hash tables
EP2348465A1 (en) * 2009-12-22 2011-07-27 Philip Morris Products S.A. Method and apparatus for storage of data for manufactured items
US8396873B2 (en) * 2010-03-10 2013-03-12 Emc Corporation Index searching using a bloom filter
US20110276744A1 (en) * 2010-05-05 2011-11-10 Microsoft Corporation Flash memory cache including for use with persistent key-value store
US9053032B2 (en) 2010-05-05 2015-06-09 Microsoft Technology Licensing, Llc Fast and low-RAM-footprint indexing for data deduplication
US8935487B2 (en) 2010-05-05 2015-01-13 Microsoft Corporation Fast and low-RAM-footprint indexing for data deduplication
US8463742B1 (en) 2010-09-17 2013-06-11 Permabit Technology Corp. Managing deduplication of stored data
US9110936B2 (en) 2010-12-28 2015-08-18 Microsoft Technology Licensing, Llc Using index partitioning and reconciliation for data deduplication
US8904128B2 (en) 2011-06-08 2014-12-02 Hewlett-Packard Development Company, L.P. Processing a request to restore deduplicated data
CN103890763B (en) * 2011-10-26 2017-09-12 国际商业机器公司 Information processor, data access method and computer-readable recording medium
US9069707B1 (en) 2011-11-03 2015-06-30 Permabit Technology Corp. Indexing deduplicated data
US9628108B2 (en) 2013-02-01 2017-04-18 Symbolic Io Corporation Method and apparatus for dense hyper IO digital retention
US9304703B1 (en) 2015-04-15 2016-04-05 Symbolic Io Corporation Method and apparatus for dense hyper IO digital retention
US10133636B2 (en) 2013-03-12 2018-11-20 Formulus Black Corporation Data storage and retrieval mediation system and methods for using same
US9817728B2 (en) 2013-02-01 2017-11-14 Symbolic Io Corporation Fast system state cloning
US9467294B2 (en) 2013-02-01 2016-10-11 Symbolic Io Corporation Methods and systems for storing and retrieving data
US9953042B1 (en) 2013-03-01 2018-04-24 Red Hat, Inc. Managing a deduplicated data index
US20140279356A1 (en) * 2013-03-13 2014-09-18 Nyse Group, Inc. Pairs trading system and method
US9639577B1 (en) * 2013-03-27 2017-05-02 Symantec Corporation Systems and methods for determining membership of an element within a set using a minimum of resources
US9451578B2 (en) * 2014-06-03 2016-09-20 Intel Corporation Temporal and spatial bounding of personal information
US9854436B2 (en) 2014-09-25 2017-12-26 Intel Corporation Location and proximity beacon technology to enhance privacy and security
US10061514B2 (en) 2015-04-15 2018-08-28 Formulus Black Corporation Method and apparatus for dense hyper IO digital retention
US10216748B1 (en) 2015-09-30 2019-02-26 EMC IP Holding Company LLC Segment index access management in a de-duplication system
WO2019126072A1 (en) 2017-12-18 2019-06-27 Formulus Black Corporation Random access memory (ram)-based computer systems, devices, and methods
US10942909B2 (en) * 2018-09-25 2021-03-09 Salesforce.Com, Inc. Efficient production and consumption for data changes in a database under high concurrency
US10725853B2 (en) 2019-01-02 2020-07-28 Formulus Black Corporation Systems and methods for memory failure prevention, management, and mitigation
US12032686B2 (en) * 2019-01-04 2024-07-09 Proofpoint, Inc. System and method for scalable file filtering using wildcards
US20210056085A1 (en) * 2019-08-19 2021-02-25 Gsi Technology Inc. Deduplication of data via associative similarity search
US20230334022A1 (en) * 2022-04-14 2023-10-19 The Hospital For Sick Children System and method for processing and storage of a time-series data stream

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3668647A (en) * 1970-06-12 1972-06-06 Ibm File access system
DE2941452C2 (en) * 1979-10-12 1982-06-24 Polygram Gmbh, 2000 Hamburg Method for coding analog signals
US5450553A (en) * 1990-06-15 1995-09-12 Kabushiki Kaisha Toshiba Digital signal processor including address generation by execute/stop instruction designated
US5717908A (en) * 1993-02-25 1998-02-10 Intel Corporation Pattern recognition system using a four address arithmetic logic unit
US5990810A (en) 1995-02-17 1999-11-23 Williams; Ross Neil Method for partitioning a block of data into subblocks and for storing and communcating such subblocks
US5870747A (en) * 1996-07-09 1999-02-09 Informix Software, Inc. Generalized key indexes
US5813008A (en) * 1996-07-12 1998-09-22 Microsoft Corporation Single instance storage of information
US5963956A (en) * 1997-02-27 1999-10-05 Telcontar System and method of optimizing database queries in two or more dimensions
US6119133A (en) * 1998-04-16 2000-09-12 International Business Machines Corporation Extensible method and apparatus for retrieving files having unique record identifiers as file names during program execution
US6070164A (en) * 1998-05-09 2000-05-30 Information Systems Corporation Database method and apparatus using hierarchical bit vector index structure
US6374266B1 (en) * 1998-07-28 2002-04-16 Ralph Shnelvar Method and apparatus for storing information in a data processing system
US6496830B1 (en) * 1999-06-11 2002-12-17 Oracle Corp. Implementing descending indexes with a descend function
US6366900B1 (en) * 1999-07-23 2002-04-02 Unisys Corporation Method for analyzing the conditional status of specialized files
US7412462B2 (en) * 2000-02-18 2008-08-12 Burnside Acquisition, Llc Data repository and method for promoting network storage of data
US6625591B1 (en) * 2000-09-29 2003-09-23 Emc Corporation Very efficient in-memory representation of large file system directories
US6654855B1 (en) * 2000-10-26 2003-11-25 Emc Corporation Method and apparatus for improving the efficiency of cache memories using chained metrics
US7007141B2 (en) * 2001-01-30 2006-02-28 Data Domain, Inc. Archival data storage system and method
SG103289A1 (en) * 2001-05-25 2004-04-29 Meng Soon Cheo System for indexing textual and non-textual files
EP1407386A2 (en) * 2001-06-21 2004-04-14 ISC, Inc. Database indexing method and apparatus
US6912645B2 (en) 2001-07-19 2005-06-28 Lucent Technologies Inc. Method and apparatus for archival data storage
US6871263B2 (en) * 2001-08-28 2005-03-22 Sedna Patent Services, Llc Method and apparatus for striping data onto a plurality of disk drives
GB2379526A (en) * 2001-09-10 2003-03-12 Simon Alan Spacey A method and apparatus for indexing and searching data
US6782452B2 (en) * 2001-12-11 2004-08-24 Arm Limited Apparatus and method for processing data using a merging cache line fill to allow access to cache entries before a line fill is completed
US6928526B1 (en) * 2002-12-20 2005-08-09 Datadomain, Inc. Efficient data storage system
WO2004074968A2 (en) 2003-02-21 2004-09-02 Caringo, Inc. Additional hash functions in content-based addressing
US7676390B2 (en) * 2003-09-04 2010-03-09 General Electric Company Techniques for performing business analysis based on incomplete and/or stage-based data
US7107416B2 (en) * 2003-09-08 2006-09-12 International Business Machines Corporation Method, system, and program for implementing retention policies to archive records
US20050149375A1 (en) * 2003-12-05 2005-07-07 Wefers Wolfgang M. Systems and methods for handling and managing workflows
US20050210028A1 (en) * 2004-03-18 2005-09-22 Shoji Kodama Data write protection in a storage area network and network attached storage mixed environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
WO2006042019A3 (en) 2006-08-31
US7457813B2 (en) 2008-11-25
USRE45350E1 (en) 2015-01-20
US20060112112A1 (en) 2006-05-25
US20060116990A1 (en) 2006-06-01
JP2008516342A (en) 2008-05-15
US7457800B2 (en) 2008-11-25
JP4932726B2 (en) 2012-05-16
WO2006042019A2 (en) 2006-04-20

Similar Documents

Publication Publication Date Title
US7457813B2 (en) Storage system for randomly named blocks of data
US8843454B2 (en) Elimination of duplicate objects in storage clusters
US6704730B2 (en) Hash file system and method for use in a commonality factoring system
US9454318B2 (en) Efficient data storage system
US7725437B2 (en) Providing an index for a data store
US7434015B2 (en) Efficient data storage system
US8463787B2 (en) Storing nodes representing respective chunks of files in a data store
US9430156B1 (en) Method to increase random I/O performance with low memory overheads
US9235475B1 (en) Metadata optimization for network replication using representative of metadata batch
US6912645B2 (en) Method and apparatus for archival data storage
US9436558B1 (en) System and method for fast backup and restoring using sorted hashes
EP1269350A1 (en) Hash file system and method for use in a commonality factoring system
AU2001238269A1 (en) Hash file system and method for use in a commonality factoring system
WO2007127360A2 (en) System and method for sampling based elimination of duplicate data
WO2010033962A1 (en) Log structured content addressable deduplicating storage
Denehy et al. Duplicate management for reference data
CN109522283A (en) A kind of data de-duplication method and system
US8156126B2 (en) Method for the allocation of data on physical media by a file system that eliminates duplicate data
CN112131194A (en) File storage control method and device of read-only file system and storage medium
CN117519576A (en) Data storage buffer of deduplication storage system
Bobbarjung Improving the performance of highly reliable storage systems

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070410

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: BURNSIDE ACQUISITION, LLC

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: PERMABIT TECHNOLOGY CORPORATION

17Q First examination report despatched

Effective date: 20150710

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: PERMABIT TECHNOLOGY CORPORATION

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: RED HAT, INC.

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20171211