WO2013165389A1 - Determining segment boundaries for deduplication - Google Patents

Determining segment boundaries for deduplication

Info

Publication number
WO2013165389A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
locations
sequence
data chunks
chunks
Prior art date
Application number
PCT/US2012/035917
Other languages
French (fr)
Inventor
Kave Eshghi
David M. Falkinder
Mark D. Lillibridge
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to EP12876001.4A priority Critical patent/EP2845106A4/en
Priority to US14/395,491 priority patent/US20150088840A1/en
Priority to CN201280072861.XA priority patent/CN104246720B/en
Priority to PCT/US2012/035917 priority patent/WO2013165389A1/en
Publication of WO2013165389A1 publication Critical patent/WO2013165389A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/17 Details of further file system functions
    • G06F16/174 Redundancy elimination performed by the file system
    • G06F16/1748 De-duplication implemented within the file system, e.g. based on file segments
    • G06F16/1752 De-duplication implemented within the file system, e.g. based on file segments based on file chunks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/064 Management of blocks
    • G06F3/0641 De-duplication techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608 Saving storage space on storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Abstract

A sequence of hashes is received. Each hash corresponds to a data chunk of data to be deduplicated. Locations of previously stored copies of the data chunks are determined, the locations determined based on the hashes. A breakpoint in the sequence of data chunks is determined based on the locations, the breakpoint forming a boundary of a segment of data chunks.

Description

DETERMINING SEGMENT BOUNDARIES FOR DEDUPLICATION
BACKGROUND
[0001] Administrators strive to efficiently manage file servers and file server resources while keeping networks protected from unauthorized users yet accessible to authorized users. The practice of storing files on servers rather than locally on users' computers has led to identical data being stored at multiple locations in the same system and even at multiple locations in the same server.
[0002] Deduplication is a technique for eliminating redundant data, improving storage utilization, and reducing network traffic. Storage-based data deduplication inspects large volumes of data and identifies entire files, or sections of files, that are identical, then reduces the number of instances of identical data. For example, an email system may contain 100 instances of the same one-megabyte file attachment. Each time the email system is backed up, each of the 100 instances of the attachment is stored, requiring 100 megabytes of storage space. With data deduplication, only one instance of the attachment is stored, thus saving 99 megabytes of storage space.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] For a detailed description of exemplary embodiments of the invention, reference will now be made to the accompanying drawings in which:
[0004] Figure 1A illustrates a system for determining segment boundaries;
[0005] Figure 1B illustrates a system for determining segment boundaries;
[0006] Figure 2 illustrates a method for determining segment boundaries;
[0007] Figure 3 illustrates a storage device for determining segment boundaries;
[0008] Figures 4A and 4B show a diagram of determining segment boundaries.
NOTATION AND NOMENCLATURE
[0009] As used herein, the term "chunk" refers to a continuous subset of a data stream.
[0010] As used herein, the term "segment" refers to a group of continuous chunks. Each segment has two boundaries, one at its beginning and one at its end.
[0011] As used herein, the term "hash" refers to an identification of a chunk that is created using a hash function.
[0012] As used herein, the term "block" refers to a division of a file or data stream that is interleaved with other files or data streams. For example, interleaved data may comprise 1a, 2a, 3a, 1b, 2b, 1c, 3b, 2c, where 1a is the first block of underlying stream one, 1b is the second block of underlying stream one, 2a is the first block of underlying stream two, etc. In some cases, the blocks may differ in length.
[0013] As used herein, the term "deduplicate" refers to the act of logically storing a chunk, segment, or other division of data in a storage system or at a storage node such that there is only one physical copy (or, in some cases, a few copies) of each unique chunk at the system or node. For example, deduplicating ABC, DBC and EBF (where each letter represents a unique chunk) against an initially-empty storage node results in only one physical copy of B but three logical copies. Specifically, if a chunk is deduplicated against a storage location and the chunk is not previously stored at the storage location, then the chunk is physically stored at the storage location. However, if the chunk is deduplicated against the storage location and the chunk is already stored at the storage location, then the chunk is not physically stored at the storage location again. In yet another example, if multiple chunks are deduplicated against the storage location and only some of the chunks are already stored at the storage location, then only the chunks not previously stored at the storage location are stored at the storage location during the deduplication.
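The deduplication behavior described above can be sketched in a few lines of Python. This is an illustrative model only, not the patent's implementation; ChunkStore and deduplicate are hypothetical names, and SHA-1 stands in for whatever hash function the system uses.
    # Illustrative model of logical vs. physical copies during deduplication.
    import hashlib

    class ChunkStore:
        """Stores at most one physical copy of each unique chunk (sketch, not the patent's API)."""
        def __init__(self):
            self.physical = {}   # hash -> chunk bytes; one physical copy per unique chunk
            self.logical = []    # hashes in logical order; references to the physical copies

        def deduplicate(self, chunk):
            h = hashlib.sha1(chunk).hexdigest()
            if h not in self.physical:      # chunk not previously stored: store it physically
                self.physical[h] = chunk
            self.logical.append(h)          # a logical copy (reference) is always recorded
            return h

    store = ChunkStore()
    for stream in (b"ABC", b"DBC", b"EBF"):  # each letter stands for one unique chunk
        for byte in stream:
            store.deduplicate(bytes([byte]))
    assert len(store.physical) == 6          # A, B, C, D, E, F each stored once
    assert store.logical.count(hashlib.sha1(b"B").hexdigest()) == 3  # three logical copies of B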
DETAILED DESCRIPTION
[0014] The following discussion is directed to various embodiments of the invention. Although one or more of these embodiments may be preferred, the embodiments disclosed should not be interpreted, or otherwise used, as limiting the scope of the disclosure, including the claims. In addition, one skilled in the art will understand that the following description has broad application, and the discussion of any embodiment is meant only to be exemplary of that embodiment, and not intended to intimate that the scope of the disclosure, including the claims, is limited to that embodiment.
[0015] During chunk-based deduplication, unique chunks of data are each physically stored once no matter how many logical copies of them there may be. Subsequent chunks received may be compared to stored chunks, and if the comparison results in a match, the matching chunk is not physically stored again. Instead, the matching chunk may be replaced with a reference that points to the single physical copy of the chunk. Processes accessing the reference may be redirected to the single physical instance of the stored chunk. Using references in this way results in storage savings. Because identical chunks may occur many times throughout a system, the amount of data that must be stored in the system or transferred over the network is reduced. However, interleaved data is difficult to deduplicate efficiently.
[0016] Figure 1A illustrates a system 100 for smart segmentation. Interleaved data refers to a stream of data produced from different underlying sources by interleaving data from the different underlying sources. For example, four underlying sources of data A, B, C, and D 180 may be interleaved to produce a stream adcccbadaaaadcb, where a represents a block of data from source A, b represents a block of data from source B, c represents a block of data from source C, and d represents a block of data from source D.
[0017] Recovering the underlying source streams is difficult without understanding the format used to interleave the streams. Because different backup agents are made by different companies that interleave data in different ways, and because methods of interleaving change over time, it may not be cost-effective to produce a system that can un-interleave all interleaved data. It may therefore be useful for a system to be able to directly handle interleaved data.
[0018] During deduplication, hashes of the chunks may be created in real time on a front end, which communicates with one or more deduplication back ends, or on a client 199. For example, the front end 118 communicates with one or more back ends, which may be deduplication backend nodes 116, 120, 122. In various embodiments, front ends and back ends also include other computing devices or systems. A chunk of data is a continuous subset of a data stream that is produced using a chunking algorithm that may be based on size or logical file boundaries. Each chunk of data may be input to a hash function that may be cryptographic; e.g., MD5 or SHA1. In the example of Figure 1A, chunks I1, I2, I3, and I4 result in hashes A613F . . . , 32B11 . . . , 4C23D . . . , and 35DFA . . . respectively. In at least some embodiments, each chunk may be approximately 4 kilobytes, and each hash may be approximately 16 to 20 bytes.
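A minimal sketch of size-based chunking and hashing along these lines follows; the fixed 4-kilobyte split, the use of SHA-1, and the function name are illustrative assumptions (the patent also allows chunking on logical file boundaries).
    # Sketch: split a stream into ~4 KB chunks and hash each one.
    import hashlib

    CHUNK_SIZE = 4 * 1024  # roughly 4 kilobytes per chunk, as in the example above

    def chunk_and_hash(data):
        """Yield (chunk, hash) pairs for a byte stream; SHA-1 digests are 20 bytes."""
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            yield chunk, hashlib.sha1(chunk).digest()

    interleaved = bytes(range(256)) * 64      # stand-in for an interleaved backup stream
    hashes = [h for _, h in chunk_and_hash(interleaved)]
    assert all(len(h) == 20 for h in hashes)  # 20-byte hashes, within the 16-20 byte range cited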
[0019] Instead of chunks being compared for deduplication purposes, hashes of the chunks may be compared. Specifically, identical chunks will produce the same hash if the same hashing algorithm is used. Thus, if the hashes of two chunks are equal, and one chunk is already stored, the other chunk need not be physically stored again; this conserves storage space. Also, if the hashes are equal, the underlying chunks themselves may be compared to verify duplication, or duplication may be assumed. Additionally, the system 100 may comprise one or more backend nodes 116, 120, 122. In at least one implementation, the different backend nodes 116, 120, 122 do not usually store the same chunks. As such, storage space is conserved because identical chunks are not duplicated across backend nodes 116, 120, 122, but segments (groups of chunks) must be routed to the correct backend node 116, 120, 122 to be effectively deduplicated.
[0020] Comparing hashes of chunks can be performed more efficiently than comparing the chunks themselves, especially when indexes and filters are used. To aid in the comparison process, indexes 105 and/or filters 107 may be used to determine which chunks are stored in which storage locations 106 on the backend nodes 116, 120, 122. The indexes 105 and/or filters 107 may reside on the backend nodes 116, 120, 122 in at least one implementation. In other implementations, the indexes 105 and/or filters 107 may be distributed among the front-end nodes 118 and/or backend nodes 116, 120, 122 in any combination. Additionally, each backend node 116, 120, 122 may have separate indexes 105 and/or filters 107 because different data is stored on each backend node 116, 120, 122.
[0021] In some implementations, an index 105 comprises a data structure that maps hashes of chunks stored on that backend node, possibly indirectly, to the storage locations containing those chunks. This data structure may be a hash table. For a non-sparse index, an entry is created for every stored chunk. For a sparse index, an entry is created for only a limited fraction of the hashes of the chunks stored on that backend node. In at least one embodiment, the sparse index indexes only one out of every 64 chunks on average.
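To make the full-versus-sparse distinction concrete, the structure below keeps every hash in full mode and samples roughly one hash in 64 in sparse mode; the class name and the sampling rule are illustrative assumptions, not the patent's data structure.
    # Sketch of a per-backend chunk index with an optional sparse (sampled) mode.
    class ChunkIndex:
        def __init__(self, sparse=False, sample_bits=6):
            self.sparse = sparse
            self.sample_bits = sample_bits  # 2**6 = 64, so about 1 in 64 hashes is kept
            self.table = {}                 # hash (bytes) -> location, e.g. a chunk-container id

        def _sampled(self, h):
            # Keep only hashes whose low-order bits are zero (1 in 64 on average).
            return (h[-1] & ((1 << self.sample_bits) - 1)) == 0

        def add(self, h, location):
            if not self.sparse or self._sampled(h):
                self.table[h] = location

        def lookup(self, h):
            return self.table.get(h)        # None means unknown (or simply not sampled)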
[0022] Filter 107 may be present and implemented as a Bloom filter in at least one embodiment. A Bloom filter is a space-efficient data structure for approximate set membership. That is, it represents a set, but the represented set may appear to contain elements that were not explicitly inserted. The filter 107 may represent the set of hashes of the set of chunks stored at that backend node. A backend node in this implementation can thus determine quickly whether a given chunk could already be stored at that backend node by determining whether its hash is a member of its filter 107.
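The kind of approximate-membership structure described here can be sketched compactly; the bit-array size, number of probes, and salted-SHA-1 scheme below are illustrative choices, not parameters from the patent.
    # Sketch of a Bloom filter over chunk hashes: false positives possible, false negatives not.
    import hashlib

    class BloomFilter:
        def __init__(self, num_bits=1 << 20, num_hashes=4):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8)

        def _positions(self, item):
            # Derive several bit positions from the item using salted SHA-1 digests.
            for salt in range(self.num_hashes):
                digest = hashlib.sha1(bytes([salt]) + item).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item):
            return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))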
[0023] Which backend node to deduplicate a chunk against (i.e., which backend node to route a chunk to) is not determined on a per-chunk basis in at least one embodiment. Rather, routing is determined a segment (a continuous group of chunks) at a time. The input stream of data chunks may be partitioned into segments such that each data chunk belongs to exactly one segment. Figure 1A illustrates that chunks I1 and I2 comprise segment 130, and that chunks I3 and I4 comprise segment 132. In other examples, segments may contain thousands of chunks. A segment may comprise a group of chunks that are adjacent in the interleaved stream. The boundaries of segments are breakpoints. As illustrated, the breakpoint between segments 130 and 132 lies between I2 and I3. As detailed in the method of Figure 2, a suitable breakpoint in the stream may be determined based on locations of previously stored chunks. The breakpoint is determined by the front-end node 118, a backend node 116, 120, 122, or both the front-end node 118 and a backend node 116, 120, 122 in various embodiments.
[0024] Although Figure 1A shows only one front end 118, systems may contain multiple front ends, each implementing similar functionality. Clients 199, of which only one is shown, may communicate with the same front end 118 for long periods of time. In one implementation, the functionality of the front end 118 and the backend nodes 116, 120, 122 is combined in a single node.
[0025] Figure 1B illustrates a hardware view of the system 100. Components of the system 100 may be distributed over a network or networks 114 in at least one embodiment. Specifically, a user may interact with GUI 110 and transmit commands and other information from an administrative console over the network 114 for processing by front-end node 118 and backend node 116. The display 104 may be a computer monitor, and a user may manipulate the GUI via the keyboard 112 and a pointing device or computer mouse (not shown). The network 114 may comprise network elements such as switches, and may be the Internet in at least one embodiment. Front-end node 118 comprises a processor 102 that performs the hashing algorithm in at least one embodiment. In another embodiment, the system 100 comprises multiple front-end nodes. Backend node 116 comprises a processor 108 that may access the indexes 105 and/or filters 107, and the processor 108 may be coupled to storage locations 106. Many configurations and combinations of hardware components of the system 100 are possible. In another embodiment, the system 100 comprises multiple backend nodes.
[0026] In at least one embodiment, one or more clients 199 are backed up periodically by scheduled command. The virtual tape library ("VTL") or network file system ("NFS") protocol may be used as the protocol to back up a client 199.
[0027] Figure 2 illustrates a method 200 of smart segmentation beginning at 202 and ending at 210. At 204, a sequence of hashes is received. For example, the sequence may be generated by front-end node 118 from sequential chunks of interleaved data scheduled for deduplication. The sequential chunks of interleaved data may have been produced on front-end node 118 by chunking interleaved data received from client 199 for deduplication. The chunking process partitions the interleaved data into a sequence of data chunks. A sequence of hashes may in turn be generated by hashing each data chunk.
[0028] Alternatively, the chunking and hashing may be performed by the client 199, and only the hashes may be sent to the front-end node 118. Other variations are possible.
[0029] As described above, interleaved data may originate from different sources or streams. For example, different threads may multiplex data into a single file, resulting in interleaved data. Each hash corresponds to a chunk. In at least one embodiment, the number of hashes received corresponds to chunks with lengths totaling three times the length of an average segment. Although the system is discussed using interleaved data as an example, in at least one example non-interleaved data is handled similarly.
[0030] At 206, locations of previously stored copies of the data chunks are determined. In at least one example, a query to the backends 116, 120, 122 is made for location information, and the locations may be received as results of the query. In one implementation, the front-end node 118 may broadcast the sequence of hashes to the backend nodes 116, 120, 122, each of which may then determine which of its locations 106 contain copies of the data chunks corresponding to the sent hashes and send the resulting location information back to front-end node 118. In a one-node implementation, the determining may be done directly without any need for communication between nodes.
[0031] For each data chunk, it may be determined which locations already contain copies of that data chunk. This determining may make use of heuristics. In some implementations, this determining may only be done for a subset of the data chunks.
[0032] The locations may be as general as a group or cluster of backend nodes or a particular backend node, or the locations may be as specific as a chunk container (e.g., a file or disk portion that stores chunks) or other particular location on a specific backend node. Determining locations may comprise searching for one or more of the hashes in an index 105, such as a full chunk index or a sparse index, or a set or filter 107, such as a Bloom filter. The determined locations may be a group of backend nodes 116, 120, 122, a particular backend node 116, 120, 122, chunk containers, stores, or storage nodes. For example, each backend node may return a list of sets of chunk container identification numbers to the front-end node 118, each set pertaining to the corresponding hash/data chunk and the chunk container identification numbers identifying the chunk containers stored at that backend node in which copies of that data chunk are stored. These lists can be combined on the front-end node 118 into a single list that gives, for each data chunk, the chunk container ID/backend number pairs identifying chunk containers containing copies of that data chunk.
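The combining step might look like the sketch below, which assumes each backend returned one set of chunk-container IDs per hash sent, in order; the function name and response shape are assumptions for illustration.
    # Sketch: merge per-backend location responses into per-chunk (backend, container) sets.
    def combine_locations(hashes, backend_responses):
        """backend_responses: {backend_id: [set of container ids per hash, in send order]}.
        Returns, per data chunk, the set of (backend_id, container_id) pairs holding a copy."""
        combined = [set() for _ in hashes]
        for backend_id, per_hash_sets in backend_responses.items():
            for position, container_ids in enumerate(per_hash_sets):
                for container_id in container_ids:
                    combined[position].add((backend_id, container_id))
        return combined

    # Two backends, three hashes: the first chunk has copies on both backends, the
    # second is new data, and the third exists only in container 3 of backend 1.
    locations = combine_locations(
        ["h0", "h1", "h2"],
        {1: [{7}, set(), {3}], 2: [{9}, set(), set()]},
    )
    assert locations == [{(1, 7), (2, 9)}, set(), {(1, 3)}]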
[0033] In another embodiment, the returned information identifies only which data chunks that backend node has copies for. Again, the information can be combined to produce a list giving for each data chunk, the set of backend nodes containing copies of that data chunk.
[0034] In yet another embodiment that has only a single node, the determined information may consist of just a list of sets of chunk container IDs because there is no need to distinguish between different backend nodes. As the skilled practitioner is aware, there are many different ways location information can be conveyed.
[0035] At 208, a breakpoint in the sequence of chunks is determined based at least in part on the determined locations. This breakpoint may be used to form a boundary of a segment of data chunks. For example, if no segments have yet been produced, then the first segment may be generated as the data chunks from the beginning of the sequence to the data chunk just before the determined breakpoint. Alternatively, if some segments have already been generated then the next segment generated may consist of the data chunks between the end of the last segment generated and the newly determined breakpoint.
[0036] Each iteration of figure 2 (202 to 210) may determine a new breakpoint and hence determine a new segment. Each additional iteration may reuse some of the work or information of the previous iterations. For example, the hashes of the data chunks not formed into a segment by the previous iteration and their determined locations may be considered again by the next iteration for possible inclusion in the next segment determined. The process of partitioning a sequence of data chunks into segments is called segmentation.
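One way to picture this iteration is the loop sketched below: each pass examines a window of not-yet-segmented hashes, determines their locations, chooses a breakpoint, emits a segment, and carries the remainder into the next pass. The helpers determine_locations and choose_breakpoint are placeholders for the steps described in this document, not functions the patent defines.
    # Sketch of the iterative segmentation flow of Figure 2.
    def segment_stream(hash_iter, window_size, determine_locations, choose_breakpoint):
        """Yield segments (lists of hashes), one breakpoint per iteration."""
        pending = []                             # hashes not yet placed into a segment
        for h in hash_iter:
            pending.append(h)
            if len(pending) >= window_size:      # e.g. about three average segments' worth
                locations = determine_locations(pending)
                cut = choose_breakpoint(pending, locations)
                yield pending[:cut]              # segment ends just before the breakpoint
                pending = pending[cut:]          # leftover hashes are reconsidered next pass
        if pending:
            yield pending                        # flush whatever remains at end of stream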
[0037] Determining a breakpoint may comprise determining regions in the sequence of data chunks based in part on which data chunks have copies in the same determined locations and then determining the breakpoint in the sequence of data chunks based on the regions. For example, the regions in the sequence of data chunks may be determined such that at least 90% of the data chunks with determined locations of each region have previously stored copies in a single location. That is, for each region there is a location in which at least 90% of the data chunks with determined locations have previously stored copies. Next, a breakpoint in the sequence of data chunks may be determined based on the regions.
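A small predicate for the 90% criterion just described is sketched below; the threshold, the per-chunk sets of locations, and the function name are illustrative assumptions.
    # Sketch: does some single location cover at least 90% of the region's located chunks?
    def dominant_location(region_locations, threshold=0.9):
        """region_locations: one set of determined locations per chunk (empty set = new data).
        Returns a location covering at least `threshold` of the located chunks, else None."""
        located = [locs for locs in region_locations if locs]
        if not located:
            return None                     # the region is all new data; no dominant location
        counts = {}
        for locs in located:
            for loc in locs:
                counts[loc] = counts.get(loc, 0) + 1
        best = max(counts, key=counts.get)
        return best if counts[best] >= threshold * len(located) else None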
[0038] Hashes and chunks corresponding to the same or similar locations may be grouped. For example, the front-end node 118 may group hashes and corresponding data chunks corresponding to one location into a segment, and may group adjacent hashes and corresponding data chunks corresponding to a different location into another segment. As such, the breakpoint is determined to lie between the two segments.
[0039] The front-end node 118 may deduplicate the newly formed segment against one of the backend nodes as a whole. That is, the segment may be deduplicated only against data contained in one of the backend nodes and not against data contained in the other backend nodes. This is in contrast to, for example, the first half of a segment being deduplicated against one backend node and the second half of the segment being deduplicated against another backend node. In at least one embodiment, the data contained in a backend node may be in storage attached to the backend node, under control of the backend node, or the primary responsibility of the backend node, rather than physically part of it.
[0040] The segment may be deduplicated only against data contained in one of a plurality of nodes. In one embodiment, the chosen backend node 116, 120, or 122 identifies the storage locations 106 against which the segment will be deduplicated.
[0041] The system described above may be implemented on any particular machine or computer with sufficient processing power, memory resources, and throughput capability to handle the necessary workload placed upon the computer. Figure 3 illustrates a particular computer system 380 suitable for implementing one or more examples disclosed herein. The computer system 380 includes one or more hardware processors 382 (which may be referred to as central processor units or CPUs) that are in communication with memory devices including computer-readable storage device 388 and input/output (I/O) 390 devices. The one or more processors may be implemented as one or more CPU chips.
[0042] In various embodiments, the computer-readable storage device 388 comprises a non-transitory storage device such as volatile memory (e.g., RAM), non-volatile storage (e.g., Flash memory, hard disk drive, CD ROM, etc.), or combinations thereof. The computer-readable storage device 388 may comprise a computer or machine-readable medium storing software or instructions 384 executed by the processor(s) 382. One or more of the actions described herein are performed by the processor(s) 382 during execution of the instructions 384.
[0043] Figure 4A illustrates an example of one way of determining a set of regions. Here, a sequence of 25 chunks is shown. In various examples, thousands of chunks may be processed at a time. For each chunk, its determined locations are shown above that chunk. For example, chunk number 1 has not been determined to have a copy in any location 106. It may represent new data that has not yet been stored in at least one example. Alternatively, the heuristics used to determine chunk locations may have made an error in this case. Chunk number 2, by contrast, has been determined to be in location 5. Chunk number 3 also has no determined location, but chunk numbers 4 through 6 have been determined to have copies in location 1. Note that some chunks have been determined to be in multiple locations; for example, chunk numbers 9 and 10 have been determined to have copies in both locations 1 and 2.
[0044] Shown below the chunks are a number of regions, R1 through R6. For example, region R1 comprises chunks 1 through 3 and region R2 comprises chunks 3 through 18. These regions (R1-R6) have been determined by finding the maximal continuous subsequences such that each subsequence has an associated location and every data chunk in that subsequence either has that location as one of its determined locations or has no determined location. For example, region R1's associated location is 5; one of its chunks (#2) has 5 as one of its determined locations and the other two chunks (#s 1 and 3) have no determined location. Similarly, R2's associated location is 1, R3's and R6's associated location is 2, R4's associated location is 4, and R5's associated location is 3.
[0045] Each of these regions is maximal because it cannot be extended in either direction by even one chunk without violating the example region generation rule. For example, chunk 4 cannot be added to region R1 because it has a determined location and none of its determined locations is 5. Each region represents a swath of data that resides in one location; thus a breakpoint in the middle of a region will likely cause loss of deduplication. Because new data (e.g., data chunks without locations) can be stored anywhere without risk of creating intermediate duplication, the new data effectively acts like a wildcard, allowing it to be part of any region, thus extending the region.
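The maximal-region rule illustrated by Figure 4A can be sketched as follows: chunks with no determined location act as wildcards, and regions are returned as (start, end, location) triples with an exclusive end index. This is an illustrative reading of the rule, not code from the patent.
    # Sketch: find maximal regions compatible with a single associated location.
    def maximal_regions(chunk_locations):
        """chunk_locations: one set of determined locations per chunk (empty set = new data)."""
        regions = []
        n = len(chunk_locations)
        for i in range(n):
            for loc in chunk_locations[i]:
                # Grow left and right while each chunk either has loc or has no location at all.
                left = i
                while left > 0 and (not chunk_locations[left - 1] or loc in chunk_locations[left - 1]):
                    left -= 1
                right = i + 1
                while right < n and (not chunk_locations[right] or loc in chunk_locations[right]):
                    right += 1
                if (left, right, loc) not in regions:
                    regions.append((left, right, loc))
        return sorted(regions)

    # First four chunks of Figure 4A (no location, {5}, no location, {1}): region R1 spans
    # chunks 1-3 (indices 0-2, end exclusive) with associated location 5, as described above.
    assert (0, 3, 5) in maximal_regions([set(), {5}, set(), {1}])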
[0046] There are many ways of determining regions. For example, regions need not be maximal but may be required to end with data chunks having determined locations. In another example, in order to deal with noise, regions may be allowed to incorporate a small number of data chunks with determined locations that do not include the region's primary location. For example, in Figure 4A, region R2 might be allowed to exist as shown even if chunk 13 were determined to be located in location 5. In another example, there may be a limit to how many such chunks a region may incorporate; the limit may be absolute (e.g., no more than five chunks) or relative (e.g., no more than 10% of the data chunks with determined locations may have determined locations other than the associated location).
[0047] In another example, new data chunks may be handled differently. Instead of treating their locations as wildcards, able to belong to any region, they may be regarded as being located in both the determined location of the nearest chunk to the left with a determined location and the determined location of the nearest chunk to the right with a determined location. If the nearest chunk with a determined location is too far away (e.g., exceeds a threshold distance), then its determined locations may be ignored. Thus new data chunks too far away from old chunks may be regarded as having no location, and thus incorporable either in no region or only in special regions. Such a special region may be one that contains only similar new data chunks far away from old data chunks in at least one example. In another example, new data chunks may be regarded as being in the determined locations of the nearest data chunk with a determined location. In the case of Figure 4A, chunk 11 may be treated as if it were in locations 1 and 2, chunk 13 may be treated as if it were in location 1, and chunk 12 may be treated as being in locations 1 and 2, in location 1, or in both, depending on tiebreaking rules.
[0048] Because "breaking" (i.e., determining a boundary) in the middle of a region is likely to cause duplication, it should be avoided if possible. Moreover, breaking in the middle of a larger region rather than a smaller region, and breaking closer to the middle of a region, will likely cause more duplication. As such, these scenarios should be minimized as well. By taking the regions into account, an efficient breakpoint may be determined based on the regions. Efficient breakpoints cause less duplication of stored data.
[0049] There are many ways of determining boundaries. One example involves focusing on preserving the largest regions, e.g., selecting the largest region and shortening the parts of the other regions that overlap it. Shorten here means make the smaller region just small enough so that it does not overlap the largest region; this may require removing the smaller region entirely if it is completely contained in the largest region. In the case of Figure 4A, the largest region is R2. R1 may be shortened to chunks 1-2, and R3 may be discarded as it entirely overlaps R2. R4 may be shortened to chunks 19-25. The next largest region remaining may be selected and the process repeated until none of the remaining regions overlap. The result of this process for Figure 4A is shown in Figure 4B.
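The largest-region-first resolution just described might be implemented as sketched below; regions are (start, end, location) triples with exclusive ends, and the R1/R2/R4 values in the example are 0-indexed stand-ins for Figure 4A's regions rather than exact coordinates from the patent.
    # Sketch: keep the largest regions and shorten (or drop) smaller overlapping ones.
    def resolve_overlaps(regions):
        kept = []
        for start, end, loc in sorted(regions, key=lambda r: r[1] - r[0], reverse=True):
            for k_start, k_end, _ in kept:
                if start < k_end and end > k_start:   # overlaps an already-kept region
                    if start < k_start:
                        end = min(end, k_start)       # trim the right side
                    else:
                        start = max(start, k_end)     # trim the left side
            if start < end:                           # dropped entirely if fully contained
                kept.append((start, end, loc))
        return sorted(kept)

    # Stand-ins for R1 (chunks 1-3, location 5), R2 (chunks 3-18, location 1), and
    # R4 (chunks 18-25, location 4): R1 is shortened to chunks 1-2 and R4 to chunks 19-25.
    assert resolve_overlaps([(0, 3, 5), (2, 18, 1), (17, 25, 4)]) == [(0, 2, 5), (2, 18, 1), (18, 25, 4)]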
[0050] Potential breakpoints may lie just before the first chunk and just after the last chunk of each of the three resulting regions in Figure 4B (R1', R2', and R4'). In one example, the earliest such breakpoint falling between a required minimum segment size and a required maximum segment size is chosen. If no such breakpoint exists, either the maximum segment size may be chosen or a backup segmentation scheme that does not take determined chunk locations into account may be applied. If, for the purposes of the example of Figure 4A, a minimum segment size of 8 and a maximum segment size of 23 are assumed, then a breakpoint between chunks 18 and 19 will be chosen. The first generated segment may then consist of chunks 1 through 18. Chunk 19 may form the beginning of the second segment. Note that this puts the data in location 1 together in a single segment, as well as the data in location 4 in a different single segment.
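The breakpoint-selection rule above might be expressed as the sketch below, where candidate cut points are the boundaries of the resolved regions; forcing a cut at the maximum segment size is one of the two fallbacks the text mentions, and the function name is an assumption.
    # Sketch: earliest region boundary between the minimum and maximum segment sizes.
    def choose_breakpoint(resolved_regions, min_size, max_size, total_chunks):
        candidates = sorted({start for start, _, _ in resolved_regions} |
                            {end for _, end, _ in resolved_regions})
        for cut in candidates:
            if min_size <= cut <= max_size:
                return cut                        # earliest boundary inside the allowed window
        return min(max_size, total_chunks)        # fallback when no boundary fits the window

    # With the Figure 4B stand-ins, a minimum of 8 and a maximum of 23, the cut lands at
    # position 18, i.e. between chunks 18 and 19, so the first segment is chunks 1 through 18.
    assert choose_breakpoint([(0, 2, 5), (2, 18, 1), (18, 25, 4)], 8, 23, 25) == 18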
[0051] Many variations of this implementation are possible. For example, instead of shortening regions, rules may comprise discarding maximal regions below a threshold size and prioritizing the resulting potential breakpoints by how large their associated regions are. Lower priority breakpoints might be used only if higher priority breakpoints fall outside the minimum and maximum segment size requirements.
[0052] In at least one example, two potential breakpoints are separated by new data not belonging to any region. In such a case, the breakpoint could be determined to be anywhere between the two potential breakpoints without affecting which regions get broken. In various examples, different rules would allow for selection of breakpoints in the middle between the regions or at one of the region ends.
[0053] The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Claims

What is claimed is:
1. A non-transitory computer-readable storage device comprising executable instructions that, when executed, cause one or more processors to:
receive a sequence of hashes, data to be deduplicated partitioned into a sequence of data chunks, each hash in the sequence of hashes comprising a hash of a corresponding data chunk;
determine locations of previously stored copies of the data chunks, the locations determined based on the hashes; and
determine a breakpoint in the sequence of data chunks based in part on the determined locations, the breakpoint forming a boundary of a segment of data chunks.
2. The device of claim 1, wherein the instructions further cause the one or more processors to deduplicate the segment as a whole.
3. The device of claim 1, wherein the instructions further cause the one or more processors to deduplicate the segment only against data contained in one of a plurality of nodes.
4. The device of claim 1, wherein the locations are determined by searching for one or more of the hashes in an index, a sparse index, a set, or a Bloom filter.
5. The device of claim 1, wherein determining a break point causes the one or more processors to:
determine regions in the sequence of data chunks based in part on which data chunks have copies in the same locations;
determine a break point in the sequence of data chunks based on the regions.
6. The device of claim 5, wherein for each region there is a location in which at least 90% of the data chunks with determined locations have previously stored copies.
7. A method, comprising:
receiving, by a processor, a sequence of hashes, data to be deduplicated partitioned into a sequence of data chunks, each hash in the sequence of hashes comprising a hash of a corresponding data chunk;
determining locations of previously stored copies of the data chunks; and
determining a breakpoint in the sequence of data chunks based in part on the determined locations, the breakpoint forming a boundary of a segment of data chunks.
8. The method of claim 7, further comprising deduplicating the segment as a whole.
9. The method of claim 7, wherein the determined locations are chunk containers, stores, or storage nodes.
10. The method of claim 7, wherein the locations are determined by looking up one or more of the hashes in an index, a sparse index, a set, or a Bloom filter.
11. The method of claim 7, wherein determining a break point comprises:
determining regions in the sequence of data chunks based in part on which data chunks have copies in the same determined locations;
determining a break point in the sequence of data chunks based on the regions.
12. The method of claim 11, wherein for each region there is a location in which at least 90% of the data chunks with determined locations have previously stored copies.
13. The method of claim 11, wherein determining the locations comprises querying for location information.
14. A device comprising:
one or more processors;
a memory coupled to the processors;
the one or more processors to
receive a sequence of hashes, data to be deduplicated partitioned into a sequence of data chunks, each hash in the sequence of hashes comprising a hash of a corresponding data chunk;
determine locations of previously stored copies of the data chunks, the locations determined based on the hashes; and
determine a breakpoint in the sequence of data chunks based on the locations, the breakpoint forming a boundary of a segment of data chunks.
15. The device of claim 14, wherein determining the breakpoint comprises:
determine regions in the sequence of data chunks based in part on which data chunks have copies in the same locations;
determine a break point in the sequence of data chunks based on the regions.

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP12876001.4A EP2845106A4 (en) 2012-05-01 2012-05-01 Determining segment boundaries for deduplication
US14/395,491 US20150088840A1 (en) 2012-05-01 2012-05-01 Determining segment boundaries for deduplication
CN201280072861.XA CN104246720B (en) 2012-05-01 2012-05-01 Determine the segment boundary for deduplication
PCT/US2012/035917 WO2013165389A1 (en) 2012-05-01 2012-05-01 Determining segment boundaries for deduplication

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2012/035917 WO2013165389A1 (en) 2012-05-01 2012-05-01 Determining segment boundaries for deduplication

Publications (1)

Publication Number Publication Date
WO2013165389A1 true WO2013165389A1 (en) 2013-11-07

Family

ID=49514655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/035917 WO2013165389A1 (en) 2012-05-01 2012-05-01 Determining segment boundaries for deduplication

Country Status (4)

Country Link
US (1) US20150088840A1 (en)
EP (1) EP2845106A4 (en)
CN (1) CN104246720B (en)
WO (1) WO2013165389A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105324765B (en) 2013-05-16 2019-11-08 慧与发展有限责任合伙企业 Selection is used for the memory block of duplicate removal complex data
WO2014185918A1 (en) 2013-05-16 2014-11-20 Hewlett-Packard Development Company, L.P. Selecting a store for deduplicated data
CN105359107B (en) 2013-05-16 2019-01-08 慧与发展有限责任合伙企业 The degrading state for the data that report is fetched for distributed objects
US10860233B2 (en) * 2019-04-12 2020-12-08 Samsung Electronics Co., Ltd. Half-match deduplication
US11106580B2 (en) 2020-01-27 2021-08-31 Hewlett Packard Enterprise Development Lp Deduplication system threshold based on an amount of wear of a storage device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8315984B2 (en) * 2007-05-22 2012-11-20 Netapp, Inc. System and method for on-the-fly elimination of redundant data
US8515909B2 (en) * 2008-04-29 2013-08-20 International Business Machines Corporation Enhanced method and system for assuring integrity of deduplicated data
US7979491B2 (en) * 2009-03-27 2011-07-12 Hewlett-Packard Development Company, L.P. Producing chunks from input data using a plurality of processing elements
WO2010113167A1 (en) * 2009-03-30 2010-10-07 Hewlett-Packard Development Company L.P. Deduplication of data stored in a copy volume
US9058298B2 (en) * 2009-07-16 2015-06-16 International Business Machines Corporation Integrated approach for deduplicating data in a distributed environment that involves a source and a target
CN102934097B (en) * 2010-06-18 2016-04-20 惠普发展公司,有限责任合伙企业 Data deduplication
US10394757B2 (en) * 2010-11-18 2019-08-27 Microsoft Technology Licensing, Llc Scalable chunk store for data deduplication

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050283500A1 (en) * 2004-06-17 2005-12-22 Kave Eshghi System and method for sharing storage resources between multiple files
US7844581B2 (en) * 2006-12-01 2010-11-30 Nec Laboratories America, Inc. Methods and systems for data management using multiple selection criteria
US20110184967A1 (en) * 2010-01-25 2011-07-28 Sepaton, Inc. System and method for navigating data
US20110307447A1 (en) * 2010-06-09 2011-12-15 Brocade Communications Systems, Inc. Inline Wire Speed Deduplication System

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2845106A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10496313B2 (en) 2014-09-22 2019-12-03 Hewlett Packard Enterprise Development Lp Identification of content-defined chunk boundaries
WO2016072988A1 (en) * 2014-11-06 2016-05-12 Hewlett Packard Enterprise Development Lp Data chunk boundary

Also Published As

Publication number Publication date
EP2845106A4 (en) 2015-12-23
EP2845106A1 (en) 2015-03-11
CN104246720B (en) 2016-12-28
CN104246720A (en) 2014-12-24
US20150088840A1 (en) 2015-03-26

Similar Documents

Publication Publication Date Title
US11153094B2 (en) Secure data deduplication with smaller hash values
EP2738665B1 (en) Similarity analysis method, apparatus, and system
US20150088840A1 (en) Determining segment boundaries for deduplication
US20150066877A1 (en) Segment combining for deduplication
US9274716B2 (en) Systems and methods for hierarchical reference counting via sibling trees
US8799238B2 (en) Data deduplication
US10803019B2 (en) Hash-based multi-tenancy in a deduplication system
US9817865B2 (en) Direct lookup for identifying duplicate data in a data deduplication system
US10261946B2 (en) Rebalancing distributed metadata
US10242021B2 (en) Storing data deduplication metadata in a grid of processors
JP6807395B2 (en) Distributed data deduplication in the processor grid
US20150058294A1 (en) Adding cooperative file coloring in a similarity based deduplication system
US11048594B2 (en) Adding cooperative file coloring protocols in a data deduplication system
US9286314B2 (en) Applying a maximum size bound on content defined segmentation of data
US9244830B2 (en) Hierarchical content defined segmentation of data
US9940069B1 (en) Paging cache for storage system
US9483483B2 (en) Applying a minimum size bound on content defined segmentation of data
US11347424B1 (en) Offset segmentation for improved inline data deduplication
Karve et al. Redundancy aware virtual disk mobility for cloud computing
Jehlol et al. Enhancing Deduplication Efficiency Using Triple Bytes Cutters and Multi Hash Function.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12876001

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2012876001

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 14395491

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE