WO2013051129A1 - Stored data deduplication method, stored data deduplication device, and deduplication program - Google Patents
- Publication number
- WO2013051129A1 (PCT/JP2011/073085)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- chunk
- fragment
- stored
- deduplication
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/174—Redundancy elimination performed by the file system
- G06F16/1748—De-duplication implemented within the file system, e.g. based on file segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
- G06F3/0641—De-duplication techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1448—Management of the data involved in backup or backup restore
- G06F11/1453—Management of the data involved in backup or backup restore using de-duplication of the data
Definitions
- the present invention relates to a method for eliminating duplication of stored data, an information processing apparatus, and a deduplication program, which can reduce data capacity by deduplicating data stored in a recording medium such as a hard disk.
- General deduplication processing is performed in three steps: dividing the data into chunks (chunking), determining whether each chunk duplicates an already stored chunk (collation), and storing non-duplicate chunks together with their management metadata (registration).
- the chunking method is an important factor in determining deduplication performance.
- the smaller the chunk size, the greater the proportion of real data that can be eliminated (the deduplication rate).
- if the chunk size is set too small, however, the amount of metadata needed to manage each chunk and the time taken to restore the original data from the chunks both increase.
- if the chunk size is set larger, the amount of metadata per chunk and the time required for data restoration can be reduced, but the deduplication rate decreases.
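This size tradeoff can be illustrated with a small, self-contained sketch (not taken from this publication; the data, block sizes, and function names are invented for illustration): fixed-length chunking at two chunk sizes over the same data, comparing how many chunks must actually be stored.

```python
import hashlib

def dedup_stats(data, chunk_size):
    """Split data into fixed-length chunks and count how many are unique
    by comparing SHA-1 fingerprints of the chunk contents."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha1(c).digest() for c in chunks}
    return len(chunks), len(unique)

# Three 1 KB "files" that share only their first 512 bytes.
a1, a2, b2, c2 = (bytes([ch]) * 512 for ch in b"wxyz")
data = (a1 + a2) + (a1 + b2) + (a1 + c2)

print(dedup_stats(data, 1024))  # (3, 3): coarse chunks find no duplicates
print(dedup_stats(data, 512))   # (6, 4): fine chunks deduplicate the shared half
```

With the larger chunk size no duplication is detected, while with the smaller size the shared halves are deduplicated at the cost of twice as many metadata entries, which is exactly the tradeoff described above.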
- As a countermeasure to this dilemma over chunk size, techniques that apply a plurality of chunk sizes according to the data to be deduplicated are known (see, for example, Patent Documents 1 and 2).
- Patent Document 1 discloses a technique that, after chunking with a small initial chunk size, detects the largest sequence of repeated chunks among the created chunks and newly outputs a chunk larger than the initial size. Patent Document 2 discloses a technique that chunks the data to be stored with a large size in widely spread duplicated data and widely spread non-duplicated data, and with a small size near the boundary between duplicated and non-duplicated data.
- Patent Document 1 states that by consolidating the chunks in the data into one large chunk, the number of disk accesses during data restoration can be reduced without greatly lowering the deduplication rate, thereby shortening the time required for data restoration.
- Patent Document 2 states that a high deduplication rate and a small amount of metadata can be achieved by changing the chunk size according to data duplication and non-duplication.
- the methods of Patent Documents 1 and 2 each include a process of analyzing the chunk patterns of all data, or of chunking while referring to past chunk history, and such processes are in general very time-consuming. Deduplication processing is often performed in parallel with regular data backup processing and is required to finish within the backup window, so it may be difficult in actual operation to perform the chunking of Patent Documents 1 and 2 every time backup processing is executed.
- an object of the present invention is to provide a method, an information processing apparatus, and a deduplication program for eliminating duplication of stored data, with deduplication processing that can be performed in the actual operation of a storage system and that reduces the time required to restore the original data from deduplicated data.
- in one aspect of the present invention, for duplicate data fragments, that is, data fragments that duplicate another data fragment constituting the data stored in the storage device, the real data of only one fragment is stored in a storage area of the storage device.
- the method further includes a step of detecting repeated data patterns in the sequence of data fragments, and a step of treating a plurality of data fragment sequences having a detected repeated data pattern as integrated data fragments and generating and recording, from each integrated data fragment, integrated data fragment attribute information representing its attributes.
- Another aspect of the present invention is a deduplication device for realizing the deduplication method described above.
- Still another aspect of the present invention provides a deduplication program for causing a computer to execute the deduplication method.
- according to the present invention, a deduplication method, an information processing apparatus, and a deduplication program can be provided that enable deduplication processing in the actual operation of a storage system and reduce the time required to restore the original data from deduplicated data.
- FIG. 5 is a diagram illustrating an example of a processing flow of a file restoration module 214.
- A diagram illustrating an example of the processing flow of the chunk pattern analysis module 221.
- the normal deduplication processing performed at the time of each backup and the data reanalysis processing performed to reduce the time required for data restoration are separated.
- normal deduplication processing that is required to be completed in a short time can be performed at high speed, and reanalysis processing can be performed when a certain amount of work time can be secured, such as during server maintenance.
- Patent Documents 1 and 2 disclose only processing related to chunking among the three processes of deduplication described in the background art. Therefore, even if chunking is performed by the methods of Patent Documents 1 and 2 at the time of data reanalysis, it cannot be reflected in normal deduplication processing via metadata. Further, Patent Documents 1 and 2 do not disclose a method of performing chunking using metadata on data that has already undergone deduplication. Therefore, it is necessary to analyze actual data again.
- To address these problems, the present invention re-analyzes the data and then creates metadata that can be used in normal deduplication processing. Specifically, first, the set of chunks generated by normal deduplication processing is analyzed to determine chunks that can be integrated without changing the deduplication rate. Next, metadata to be used in normal deduplication processing is created from the determined chunks and managed together with the metadata of the chunks before integration. This makes it possible to perform duplication determination and data restoration using this metadata in normal deduplication processing. Further, because the analysis uses the metadata of chunks generated by normal deduplication processing, integrated chunks can be determined more efficiently than by analyzing the actual data again.
- FIG. 1 is a diagram showing an example of the system configuration of a storage system 1 to which the first embodiment of the present invention is applied.
- the present system 1 includes a host computer 110, a storage device 120, and a deduplication device 130, and these devices are configured to be communicable with each other via a network 101.
- the host computer 110 is a general computer including at least a CPU (Central Processing Unit) 111, a memory 112, and a network interface 113.
- the host computer 110 has a function of reading data stored in the storage device 120 onto the memory 112 via the network interface 113 and a function of writing data on the memory 112 into the storage device 120 via the network interface 113.
- the host computer 110 may include an auxiliary storage device such as a hard disk drive (HDD) or a solid-state drive (SSD).
- the network interface 113 is selected depending on the type of the network 101 to which the host computer 110 is connected; for example, if the network 101 is a LAN (Local Area Network), a NIC (Network Interface Card) is provided as the network interface 113.
- the data is composed of one or more files.
- the data handled in the present invention is not limited to such a configuration, and generally includes digital data expressed as a binary string.
- the storage device 120 includes at least a storage control device 121 and a storage device 123.
- the storage device 120 can take the form of a file storage that can store data in units of files, for example, but may take other forms including block storage.
- the storage control device 121 includes a network interface 122, can receive data read / write commands from the host computer 110 and the deduplication device 130, and can read / write data from / to the storage device 123.
- the storage device 123 is configured by a storage medium such as an HDD (Hard Disk Disk Drive) 124 and stores data that has received a write command from the host computer 110 and / or the deduplication apparatus 130.
- the storage control device 121 includes a processor such as a CPU (not shown), a memory, and a disk adapter as an I/O (Input/Output) interface with the storage device 123. Based on this configuration, the storage control device 121 provides a function for organizing a logical storage area from the physical storage area of the storage device 123 according to an appropriate RAID level, and a function for creating a plurality of logical volumes from the logical storage area and providing them to the host computer 110.
- the deduplication device 130 includes at least a network interface 135, a CPU 131, a memory 132, an auxiliary storage device 133, an I / O interface 134, and an input / output device 136.
- the deduplication device 130 has a function of reading data stored in the storage device 120 to the memory 132 via the network interface 135 and a function of writing data on the memory 132 to the storage device 120 via the network interface 135.
- the I/O interface 134 connects data input devices such as a keyboard and mouse and data output devices such as a display and printer, and encompasses various devices providing a computer's data input/output functions.
- in this embodiment, the deduplication device 130 is configured as a computer separate from the storage device 120, but the functions of the deduplication device 130 described later may instead be implemented in the storage device 120.
- the functions of the deduplication device 130 are provided by the CPU 131 reading out the program and data for realizing each function stored in the auxiliary storage device 133 to the memory 132 and executing them.
- the deduplication device 130 includes a standard deduplication function unit 210 and a chunk integration function unit 220.
- the standard deduplication function unit 210 provides a function that divides the data stored in the storage device 120 into a sequence of a plurality of chunks (data fragments) and, for chunks that duplicate each other (duplicate data fragments), stores the real data of only one chunk in the storage device 120. With this function, the usable capacity of the storage device 123 can be increased.
- the chunk integration function unit 220 analyzes repeated chunk data patterns in the sequence of chunks generated by the standard deduplication function unit 210 and manages a plurality of chunks as one integrated chunk, thereby providing a function to reduce the cost required for data reconstruction. Hereinafter, the means for realizing each function will be described specifically.
- the standard deduplication function unit 210 includes at least a chunking module 211 (data division unit), a chunk collation module 212 (data collation unit), a chunk registration module 213 (data registration unit), a file restoration module 214 (data restoration unit), a chunk management table 216, and a file management table 215.
- the chunking module 211 has a function of reading data stored in the storage device 120 and dividing it into a plurality of chunks. Details of the processing by the chunking module 211 will be described later with reference to FIG. 8.
- the chunk collation module 212 determines, for each chunk generated by the chunking module 211, whether a chunk with duplicate data exists. Details of the processing by the chunk collation module 212 will be described later with reference to FIG. 9.
- the chunk registration module 213 has a function of generating, from each chunk generated by the chunking module 211, attribute information used to manage that chunk, including a chunk ID 301, a hash value 302, and a chunk size 303, and registering the attribute information in the file management table 215 and the chunk management table 216. Furthermore, the chunk registration module 213 has a function of storing in the storage device 120 the actual data of only those chunks that the chunk collation module 212 has determined have no duplicates. Details of the processing by the chunk registration module 213 will be described later with reference to FIG. 10.
- the file restoration module 214 has a function of restoring the data as it was before division into chunks, using the attribute information stored in the file management table 215 and the chunk management table 216 and the chunk data stored in the storage device 120. Details of the processing by the file restoration module 214 will be described later with reference to FIG.
- the chunk management table 216 holds the attribute information of the chunks generated by the chunking module 211, and is referred to when the chunk collation module 212 determines duplicate chunks and when the file restoration module 214 restores data from chunks. Details of the chunk management table 216 will be described later with reference to FIG.
- the file management table 215 holds information related to the sequence of chunks constituting each file, and is referred to when data to be read (file) is restored in accordance with a data read command from the host computer 110. Details of the file management table 215 will be described later with reference to FIG.
- the original data written from the host computer 110 is converted into a set of a plurality of chunks that do not duplicate each other, together with the attribute information of the written chunks held in the file management table 215 and the chunk management table 216.
- this series of conversion processes is referred to as "deduplication processing"; the set of mutually non-duplicate chunks is called deduplicated data, and the attribute information of each chunk held in the file management table 215 and the chunk management table 216 is called metadata.
- for duplicate chunks, it is not necessary to keep every copy of the data received from the host computer 110. Therefore, when a plurality of duplicate chunks exist, the size of the deduplicated data is smaller than when the original data is stored as it is.
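The conversion into deduplicated data and metadata can be modeled as a toy in-memory store (illustrative only; the class and field names are not from this publication): each written file becomes a sequence of chunk IDs, and the real data of a duplicate chunk is never stored twice — only its duplication count is incremented.

```python
import hashlib

class DedupStore:
    """Toy model of deduplication processing: a file table (chunk-ID
    sequences) plus a chunk table (attribute information and real data)."""
    def __init__(self):
        self.chunk_table = {}   # chunk ID -> {"hash", "size", "dup_count", "data"}
        self.file_table = {}    # file name -> list of chunk IDs
        self._by_hash = {}      # hash value -> chunk ID
        self._next_id = 1

    def write(self, name, chunks):
        ids = []
        for c in chunks:
            h = hashlib.sha1(c).digest()
            cid = self._by_hash.get(h)
            if cid is None:                     # new chunk: store real data once
                cid = self._next_id
                self._next_id += 1
                self._by_hash[h] = cid
                self.chunk_table[cid] = {"hash": h, "size": len(c),
                                         "dup_count": 1, "data": c}
            else:                               # duplicate: bump the count only
                self.chunk_table[cid]["dup_count"] += 1
            ids.append(cid)
        self.file_table[name] = ids

store = DedupStore()
store.write("sample1.txt", [b"aaa", b"bbb", b"aaa"])
print(store.file_table["sample1.txt"])   # [1, 2, 1]
print(len(store.chunk_table))            # 2: only unique chunks keep real data
```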
- the chunk integration function unit 220 includes at least a chunk pattern analysis module 221 (data analysis unit), a chunk management table update module 222 (data update unit), a chunk pattern information table 223, and an integrated chunk information table 224.
- the chunk pattern analysis module 221 has a function of analyzing a sequence of chunks for each file managed by the file management table 215 and determining chunks to be integrated in order to reduce reconstruction costs. Details of the processing by the chunk pattern analysis module 221 will be described later with reference to FIG.
- the chunk management table update module 222 rearranges the data stored in the storage device 120 according to the chunks to be integrated determined by the chunk pattern analysis module 221, and the file management table 215 and the chunk management according to the data rearrangement result. It has a function of updating the information held in the table 216. Details of the processing by the chunk management table update module 222 will be described later with reference to FIG.
- the chunk pattern information table 223 holds information used by the chunk pattern analysis module 221 to determine chunks to be integrated. Details of the configuration of the chunk pattern information table 223 will be described later with reference to FIG.
- the integrated chunk information table 224 holds information about chunks that the chunk pattern analysis module 221 determines to be integrated. Details of the configuration of the integrated chunk information table 224 will be described later with reference to FIG.
- the deduplication device 130 is also provided with an operating system (OS) 230 and a data I / O unit 240.
- the OS 230 is basic software having a basic data processing function as a computer of the deduplication apparatus 130, and can be appropriately used as an OS of a general computer.
- the data I / O unit 240 manages data I / O processing between each module provided in the standard deduplication function unit 210 or the chunk integration function unit 220 and the outside via the network interface 135 under the control of the OS 230. .
- FIGS. 3A and 3B show an example of the chunk management table 216 according to the first embodiment.
- the following description of each table uses reference numerals different from those used in FIG. 2, in order to make the numerals assigned to the components of each table easier to follow.
- FIG. 3A shows an example of the state of the chunk management table 216 after the standard deduplication function unit 210 of the deduplication device 130 performs data deduplication processing.
- FIG. 3B represents the state after the chunk integration function unit 220 of the deduplication device 130 integrates the chunks whose chunk IDs 301 are 1, 2, and 3, generates a new chunk whose chunk ID 301 is 9, and registers it in the chunk management table 216. The chunk integration process will be described later.
- the chunk management table 300 illustrated in FIGS. 3A and 3B includes items of a chunk ID 301, a hash value 302, a chunk size 303, a duplication number 304, and a storage destination 305.
- the chunk ID 301 is an ID for uniquely identifying each chunk.
- when the chunk registration module 213 adds new chunk attribute information to the chunk management table 216, it assigns each chunk a chunk ID 301 that does not coincide with any other chunk ID 301.
- the hash value 302 stores an output value obtained by inputting data included in each chunk to a hash function.
- as the hash function, for example, SHA-1 can be used.
- the hash value 302 calculated for each chunk may be used as the chunk ID 301.
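A minimal sketch of this content-addressed variant (assuming SHA-1, as suggested above; the function name is illustrative):

```python
import hashlib

def chunk_id(chunk):
    # The SHA-1 digest of the chunk's data serves as the hash value 302
    # and, in a content-addressed scheme, doubles as the chunk ID 301.
    return hashlib.sha1(chunk).hexdigest()

a = chunk_id(b"duplicate data")
b = chunk_id(b"duplicate data")
c = chunk_id(b"different data")
print(a == b)  # True: identical chunks always map to the same ID
print(a == c)  # False: different data yields a different ID
```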
- the chunk size 303 represents the size of each chunk as data, and is displayed in units of kilobytes in the examples of FIGS. 3A and 3B.
- the duplication number 304 represents how many times the chunk specified by the corresponding chunk ID 301 or the hash value 302 appears in the data before executing the deduplication process.
- the storage destination 305 represents the position on the storage device 123 where the chunk specified by the corresponding chunk ID 301 or hash value 302 is stored, and is recorded as a block address on the logical storage area provided by the storage device 123, for example.
- the storage destination 305 is used when the deduplication device 130 acquires chunk data on the storage device 123.
- the file restoration module 214 can read the chunks constituting the file to be read from the storage device 123.
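The read path can be sketched as follows (a simplification with invented names: an in-memory dict stands in for resolving each storage destination 305 on the storage device 123):

```python
def restore_file(name, file_table, chunk_table):
    """Look up the file's chunk-ID sequence in the file management table and
    concatenate the data of each chunk fetched from the chunk management table."""
    return b"".join(chunk_table[cid]["data"] for cid in file_table[name])

chunk_table = {1: {"data": b"AAA"}, 2: {"data": b"BB"}, 3: {"data": b"C"}}
file_table = {"sample1.txt": [1, 2, 1, 3]}
print(restore_file("sample1.txt", file_table, chunk_table))  # b'AAABBAAAC'
```

Note that chunk 1 is fetched twice here; storing frequently co-occurring chunks contiguously as one integrated chunk is what reduces such repeated accesses.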
- FIG. 4A shows an example of chunks stored on the storage device 123 before the chunk integration function unit 220 rearranges chunks.
- in FIG. 4A, the chunks whose chunk IDs 301 are 1, 2, and 3 are stored at discontinuous positions whose storage destinations 305 are indicated by L_1, L_2, and L_3, respectively.
- FIG. 4B represents the state after the chunk integration function unit 220 integrates the chunks whose chunk IDs 301 are 1, 2, and 3 to generate a chunk whose chunk ID 301 is 9, and rearranges the chunks on the disk. In FIG. 4B, the original three chunks are stored at consecutive positions as the new chunk identified by the chunk ID 301 of 9. As a result, the number of accesses the deduplication device 130 makes to the storage device 123 when acquiring the chunk whose chunk ID 301 is 9 is reduced from the previous three to one.
- FIGS. 5A and 5B each show a configuration example of the file management table 500.
- FIG. 5A illustrates a state before the chunk integration process
- FIG. 5B illustrates a state after the chunk integration process.
- the file management table 500 includes items of a file name 501, a file size 502, a chunk count 503, and a configuration chunk ID 505.
- the file name 501 represents an identifier that uniquely identifies each file.
- the file size 502 represents the size of each file, for example, in units of kilobytes.
- the number of chunks 503 represents the number of chunks constituting each file.
- the configuration chunk ID 505 represents a sequence of chunks constituting each file as a sequence of chunk IDs 301.
- when receiving a file read command from the host computer 110, the deduplication device 130 refers to the file name 501 of the read target file recorded in the file management table 500 and the configuration chunk IDs 505 recorded corresponding to it, and can thereby restore the read target file from the chunks stored in the storage device 123.
- in FIG. 5A, the file whose file name 501 is "sample1.txt" is composed of the 10 chunks recorded in the configuration chunk ID 505.
- the ID of each chunk is “1-2-3-4-1-2-3-5-6-1”, and the chunks are arranged in this order.
- in FIG. 5B, the file whose file name 501 is "sample1.txt" is composed of six chunks. This is because, as shown in FIG. 3B, the chunk ID 301 sequence "1-2-3" is defined as a new chunk whose chunk ID 301 is 9. Accordingly, in FIG. 5B, the ID sequence of the chunks constituting the file "sample1.txt" is "9-4-9-5-6-1".
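The change from FIG. 5A to FIG. 5B amounts to substituting the integrated chunk's ID for every occurrence of its sub-chunk pattern in the configuration chunk ID 505 sequence. A sketch (the function name is invented):

```python
def integrate(chunk_ids, pattern, new_id):
    """Replace each occurrence of `pattern` in a file's chunk-ID sequence
    with the ID of the integrated chunk."""
    out, i, n = [], 0, len(pattern)
    while i < len(chunk_ids):
        if chunk_ids[i:i + n] == pattern:   # sub-chunk sequence found
            out.append(new_id)
            i += n
        else:
            out.append(chunk_ids[i])
            i += 1
    return out

before = [1, 2, 3, 4, 1, 2, 3, 5, 6, 1]        # "sample1.txt" in FIG. 5A
after = integrate(before, [1, 2, 3], 9)
print(after)   # [9, 4, 9, 5, 6, 1]: six chunks, matching FIG. 5B
```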
- FIG. 6A shows a configuration example of the chunk pattern information table 600 after repeated chunk pattern detection processing described later
- FIG. 6B shows a configuration example of the chunk pattern information table 600 after dividing patterns having the same chunk.
- the chunk pattern information table 223 includes items of a chunk pattern 601, a length 602, an appearance number 603, and an appearance position 604.
- the chunk pattern 601 represents a pattern that repeatedly appears in a series of chunk sequences stored in the logical storage area provided by the storage device 123 as a sequence of chunk IDs 301.
- the length 602 represents the number of chunks constituting the chunk pattern 601.
- the appearance number 603 indicates how many times the chunk pattern 601 appears in the chunk sequence.
- the appearance position 604 indicates, using a block address or the like in the logical storage area, at which positions in the series of digital data stored in the logical storage area provided by the storage device 123 the chunk pattern 601 appears.
- in the example shown, the chunk pattern 601 represented by the chunk ID 301 sequence 1-2-3 appears at the 1st, 100th, and 212th positions from the top of the digital data sequence.
- the chunk pattern analysis module 221 dynamically updates the chunk pattern information table 223 to determine chunks to be integrated. Details of the chunk pattern analysis processing by the chunk pattern analysis module 221 will be described later with reference to FIG.
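One simple way to populate such a table (a sketch, not necessarily the module's actual algorithm) is to count every fixed-length run of chunk IDs and keep those appearing two or more times:

```python
from collections import defaultdict

def repeated_patterns(chunk_ids, length):
    """Count every run of `length` consecutive chunk IDs and return the runs
    appearing at least twice, with their appearance positions."""
    positions = defaultdict(list)
    for i in range(len(chunk_ids) - length + 1):
        positions[tuple(chunk_ids[i:i + length])].append(i)
    return {p: pos for p, pos in positions.items() if len(pos) >= 2}

seq = [1, 2, 3, 4, 1, 2, 3, 5, 6, 1]
print(repeated_patterns(seq, 3))   # {(1, 2, 3): [0, 4]}
```

Each surviving entry corresponds to one row of the chunk pattern information table: the key is the chunk pattern 601, its length is the length 602, the list length is the appearance number 603, and the list items are the appearance positions 604.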
- the integrated chunk information table 700 includes items of an integrated chunk ID 701, a sub chunk ID 702, the number of appearances 703, and an update date / time 704.
- the integrated chunk ID 701 is a chunk ID 301 newly assigned by the chunk pattern analysis module 221 to the chunk pattern 601 determined to be integrated by the chunk pattern analysis module 221.
- the sub chunk ID 702 is a chunk ID 301 representing a smaller chunk (hereinafter referred to as “sub chunk”) that constitutes an integrated chunk.
- the appearance number 703 represents how many times the integrated chunk appears in the sequence of chunks in the logical storage area of the storage device 123. In the example of FIG. 7, sub chunks 1-2-3 are integrated to generate an integrated chunk, and 9 is assigned as a new chunk ID 301.
- the update date / time 704 represents the date / time when the chunk pattern analysis module 221 integrated the chunks.
- FIG. 8 shows an example of a data processing flow executed by the chunking module 211.
- the first is a method of performing deduplication processing at the timing when the host computer 110 transmits data to the storage apparatus 120.
- in this case, the deduplication device 130 receives the data via the network interface 135, performs deduplication processing, and then writes the data, divided into chunks, to the storage device 123.
- This first method is called an inline method.
- the second method is a method of performing deduplication processing after the data transmitted from the host computer 110 to the storage device 120 is written into the storage device 123 by the storage control device 121.
- the deduplication device 130 reads data on the storage device 123, performs deduplication processing, and then writes the deduplication processing data to the storage device 123 again.
- This second method is called a post-processing method.
- the deduplication processing by the deduplication device 130 is started, for example, at a predetermined time each week, in accordance with the timing of backup processing of the data stored in the storage device 123.
- hereinafter, the post-processing method will be described; however, the present invention can also be applied to the inline method simply by changing the start timing of the deduplication processing and the location from which the deduplication device 130 reads data.
- the chunking module 211 reads new data.
- the new data refers to data, among the data stored in the storage device 120, that has not yet been deduplicated by the standard deduplication function unit 210. New data can be identified in the storage device 120 by, for example, using a bitmap over the addresses of the logical storage area of the storage device 123 to record, for each received file's addresses, whether deduplication processing has been performed.
- the chunking module 211 proceeds to the processing of S802 after reading new data.
- the chunking module 211 divides the data read in S801 into chunks.
- methods of dividing data into chunks are roughly classified into a fixed-length method, which divides at a fixed data size such as every 1 KB, and a variable-length method, which divides at the positions where a specific byte pattern appears in the digital data sequence.
- variable-length chunking is described in the following document, for example.
- in this embodiment, each chunk size 303 is described as being different; that is, variable-length chunking is performed.
- the chunking module 211 determines the chunk division position by an appropriate division method, and then proceeds to the processing of S803.
- the chunking module 211 transmits the chunk information determined in S802 to the chunk collation module 212.
- the chunking module 211 includes new data and information such as an address indicating the division position into the chunk in the chunk information.
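A variable-length (content-defined) chunking sketch follows. It is illustrative only: the CRC of a trailing window stands in for a true rolling hash, and the window size, bit mask, and minimum chunk size are assumed parameters, not values from this publication.

```python
import random
import zlib

def chunk_data(data, window=16, mask=255, min_size=64):
    """Declare a chunk boundary wherever a fingerprint of the last `window`
    bytes matches a fixed bit pattern, so boundaries follow the content
    rather than absolute offsets."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        if i - start < min_size:              # enforce a minimum chunk size
            continue
        # CRC of the trailing window; a real implementation updates a
        # rolling hash incrementally instead of recomputing it.
        if zlib.crc32(data[i - window:i]) & mask == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])               # remainder becomes the last chunk
    return chunks

random.seed(0)
doc = bytes(random.randrange(256) for _ in range(16384))
edited = b"INSERTED" + doc                    # insertion shifts every offset by 8

assert b"".join(chunk_data(doc)) == doc       # chunks always reassemble
shared = set(chunk_data(doc)) & set(chunk_data(edited))
print(len(shared), "of", len(chunk_data(doc)), "chunks survive the insertion")
```

Under fixed-length chunking, the 8-byte insertion would change every subsequent chunk; with content-defined boundaries the chunk stream resynchronizes shortly after the edit, which is why the variable-length method tends to achieve a higher deduplication rate.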
- FIG. 9 is an example of a data processing flow executed by the chunk collation module 212 of this embodiment.
- the chunk collation module 212 receives the chunk information transmitted by the chunking module 211.
- the chunk collation module 212 proceeds to the process of S902 after receiving the chunk information.
- in step S902, the chunk collation module 212 confirms whether each chunk determined in step S802 already exists in the chunk management table 300. This confirmation is performed, for example, as follows. First, the chunk collation module 212 calculates the hash value of each received chunk using the same hash function as that used to obtain the hash value 302 recorded for each chunk in the chunk management table 300. Next, it collates each calculated hash value against the hash values 302 recorded in the chunk management table 300 to check whether a chunk with the same hash value exists. This chunk collation processing may be performed using a Bloom filter, as described in the following document: B. Zhu, K. Li, and H. Patterson, "Avoiding the disk bottleneck in the Data Domain deduplication file system," The 6th USENIX Conference on File and Storage Technologies (FAST '08), February 2008.
- the identity of chunks can be determined by the identity of hash values calculated for them.
- alternatively, by directly comparing the binary data of the chunks stored in the storage device 123, the identity of chunks can be determined with certainty. As described above, after confirming whether each received chunk exists in the chunk management table 216, the chunk collation module 212 proceeds to the processing of S903.
- the chunk collation module 212 transmits the result collated in S902 to the chunk registration module 213 and ends the process.
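The Bloom-filter optimization mentioned in the collation step can be sketched as follows (illustrative parameters; a real deployment sizes the bit array and hash count to the expected number of chunks). A negative answer definitively rules out the chunk's presence and skips the table lookup; a positive answer still requires full hash collation, since false positives are possible.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k bit positions are derived from slices of a
    SHA-1 digest; membership tests can yield false positives but never
    false negatives."""
    def __init__(self, bits=1 << 16, hashes=4):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        digest = hashlib.sha1(item).digest()
        for k in range(self.hashes):
            yield int.from_bytes(digest[4 * k:4 * k + 4], "big") % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
bf.add(b"stored-chunk")
print(bf.might_contain(b"stored-chunk"))   # True: no false negatives
print(bf.might_contain(b"new-chunk"))      # almost certainly False
```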
- FIG. 10 shows an example of a data processing flow executed by the chunk registration module 213.
- the chunk registration module 213 receives the collation result transmitted by the chunk collation module 212 in S903.
- the collation result includes information on whether a chunk identical to each received chunk is already recorded in the chunk management table 300. After receiving the collation result for each chunk, the chunk registration module 213 performs the processing from S1002 onward for each chunk.
- In S1002, the chunk registration module 213 determines whether the target chunk exists in the chunk management table 300, based on the result confirmed by the chunk collation module 212 in S902. If the target chunk exists in the chunk management table 216, an identical chunk is already stored in the storage device 123, so the target chunk is called a duplicate chunk. If the chunk registration module 213 determines that the target chunk is a duplicate (S1002: Yes), it proceeds to the processing of S1005; if it determines that the target chunk is not a duplicate (S1002: No), it proceeds to the processing of S1003.
- In S1003, the chunk registration module 213 stores the data of the target chunk in the storage device 123. Alternatively, the target chunk may not be written to the storage device 123 immediately but held temporarily in the memory 132 and written to the storage device 123 together with other non-duplicate chunks. After saving to the storage device 123 or the memory 132, the chunk registration module 213 proceeds to the processing of S1004.
- In S1004, the chunk registration module 213 registers the attribute information of the target chunk in the chunk management table 300.
- For the chunk ID 301, the chunk registration module 213 assigns a value or code that does not coincide with any existing chunk ID 301.
- For the hash value 302, the value calculated in S902 is registered.
- For the chunk size 303, the size of the target chunk is calculated and registered. Since no duplicate of the chunk exists yet, the numerical value 1 is registered as the duplication number 304.
- For the storage destination 305, information indicating the location where the chunk was stored in S1003 is registered.
- the chunk registration module 213 registers the attribute information of each chunk in the chunk management table 300 as described above, and then proceeds to the processing of S1006.
- In S1005, the chunk registration module 213 updates the attribute information of the chunk already registered in the chunk management table 300 that duplicates the target chunk. Specifically, the chunk registration module 213 increments the duplication number 304 recorded for that chunk by one. After updating the attribute information in the chunk management table 300, the chunk registration module 213 proceeds to the processing of S1006.
- In S1006, the chunk registration module 213 adds information on the processed new data (new file) to the file management table 500. That is, for each file included in the new data, the chunk registration module 213 registers the file name 501, the file size 502, the number of chunks 503, and the configuration chunk IDs 505 in the file management table 500.
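A hedged sketch of the registration flow (S1002 through S1006) may help tie the steps together. The dict-based tables, SHA-1 hashing, and a flat bytearray standing in for the storage device 123 are all illustrative assumptions, not the patent's actual data layout.

```python
import hashlib

def register_chunks(chunks, chunk_table, storage, file_table, file_name):
    """Sketch of the registration flow: duplicates bump the duplication
    number; new chunks are stored and get a fresh entry; finally the
    file's composition is recorded (cf. file management table 500)."""
    ids = []
    for chunk in chunks:
        digest = hashlib.sha1(chunk).hexdigest()
        entry = chunk_table.get(digest)
        if entry is not None:                       # S1002 Yes -> S1005
            entry["dup_count"] += 1                 # duplication number 304
        else:                                       # S1002 No -> S1003/S1004
            offset = len(storage)
            storage.extend(chunk)                   # store non-duplicate data
            entry = {"id": len(chunk_table) + 1,    # fresh chunk ID 301
                     "size": len(chunk),            # chunk size 303
                     "dup_count": 1,                # duplication number 304
                     "offset": offset}              # storage destination 305
            chunk_table[digest] = entry
        ids.append(entry["id"])
    # S1006: record the new file's composition (configuration chunk IDs 505)
    file_table[file_name] = {"size": sum(map(len, chunks)),
                             "n_chunks": len(ids),
                             "chunk_ids": ids}
    return ids
```

For a file chunked into `[b"aa", b"bb", b"aa"]`, only `b"aa"` and `b"bb"` reach storage once; the second `b"aa"` merely increments the duplication count.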
- FIG. 11 shows an example of a data processing flow executed by the file restoration module 214 of the present embodiment.
- The file restoration module 214 starts processing when the host computer 110 transmits a data read command to the storage device 120 and the standard deduplication function unit 210 of the deduplication device 130 receives the command via the network interface 135. Since the deduplicated data stored in the storage device 123 is not identical to the original data, the deduplication device 130 must reconstruct the original data (file) from the deduplicated data when a data read command is received. In this specification, this process is called data restoration. Data restoration is performed on all or part of the data stored in the storage device 123. The following describes the restoration of a single file; general data can likewise be restored by dividing it into a plurality of files, chunking each file, and applying the same processing.
- In step S1101, the file restoration module 214 searches the file management table 500 for the restoration target file name 501 included in the data read command. If the file with the target file name is recorded in the file management table 500 (S1101: Yes), the file restoration module 214 acquires the configuration chunk IDs 505 of the file in S1103 and then proceeds to the processing of S1104. If there is no entry for the file name 501 in the file management table 500 (S1101: No), the file restoration module 214 issues an error message in S1102 and ends the processing.
- In S1104, the file restoration module 214 acquires the data of the chunks constituting the file from the storage device 123. Specifically, the following processing is performed for each chunk ID 301 included in the configuration chunk IDs 505 acquired in S1103. First, the chunk management table 300 is searched by the chunk ID 301 to acquire the storage destination 305 of the chunk. Next, the chunk data is read from the storage device 123 based on the acquired storage destination 305 and temporarily held in the memory 132. After this has been done for all the chunk IDs 301 included in the configuration chunk IDs 505, the acquired chunk data are concatenated in the order of the configuration chunk IDs 505, and the process proceeds to S1105.
- In S1105, the file restoration module 214 transmits the data concatenated in S1104 to the host computer 110 via the network interface 135 and ends the processing.
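The restoration flow of S1101 to S1105 can likewise be sketched. Here `file_table` and `chunk_offsets` are hypothetical stand-ins for the file management table 500 and for the storage destination 305 / chunk size 303 columns of the chunk management table.

```python
def restore_file(file_name, file_table, chunk_offsets, storage):
    """Look up the file's chunk IDs, fetch each chunk from its recorded
    storage destination, and concatenate them in order (S1101-S1105).
    chunk_offsets maps chunk ID -> (offset, size)."""
    entry = file_table.get(file_name)
    if entry is None:                       # S1102: no such file -> error
        raise FileNotFoundError(file_name)
    parts = []
    for cid in entry["chunk_ids"]:          # S1104: gather constituent chunks
        offset, size = chunk_offsets[cid]
        parts.append(storage[offset:offset + size])
    return b"".join(parts)                  # S1105: restored file data
```

Because a duplicate chunk ID can appear several times in the composition, the same stored bytes may be read out more than once, which is exactly how deduplicated storage stays smaller than the restored data.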
- FIG. 12 shows an example of a data processing flow executed by the chunk pattern analysis module 221 of this embodiment.
- The chunk pattern analysis module 221 can be configured to start periodically, for example once a week, by timer activation from the OS 230 of the deduplication device 130, or to be started manually by an administrator.
- In S1201, the chunk pattern analysis module 221 reads the configuration chunk IDs 505 included in the file management table 500. At this time, the chunk pattern analysis module 221 may target all the files included in the file management table 500 or only a part of them. After reading the configuration chunk IDs 505, the chunk pattern analysis module 221 proceeds to the processing of S1202.
- In S1202, the chunk pattern analysis module 221 replaces each chunk ID 301 that identifies an integrated chunk, among the chunk IDs 301 included in the configuration chunk IDs 505 read in S1201, with its sub chunk IDs. This is done by referring to the integrated chunk information table 700 and checking whether the read chunk ID 301 is registered as an integrated chunk ID 701. For example, if "9" is included in the chunk IDs 301 read in S1201, the chunk pattern analysis module 221 refers to the integrated chunk information table 700, replaces it with the sub chunk IDs "1-2-3", and then proceeds to the processing of S1203.
- the chunk pattern analysis module 221 creates a chunk pattern information table 600 shown in FIG. 6A. This is realized by the following processing.
- First, the chunk pattern analysis module 221 concatenates the sets of configuration chunk IDs 505, after the rewriting to sub chunk IDs in S1202, into a single character string.
- At each boundary between the configuration chunk IDs 505 of different files, the chunk pattern analysis module 221 inserts an identifier indicating a file delimiter. Each such identifier is assigned a value different from any chunk ID 301 and from the identifiers used for the delimiters between other files.
- For example, the chunk pattern analysis module 221 inserts distinct delimiter identifiers (denoted here $ and ^) at the file delimiter positions to obtain "1-2-3-4-1-2-3-5-6-1-$-7-8-4-2-5-^-3-2". In this way, a single character string containing the chunk IDs 301 and the file delimiter identifiers is generated.
- Next, the chunk pattern analysis module 221 searches the character string formed by concatenating the configuration chunk IDs 505, which is the target of the partial-string search, for all repeated patterns. The chunk pattern analysis module 221 then acquires the length 602, the number of appearances 603, and the appearance positions 604 of each repeated pattern and registers them in the chunk pattern information table 600.
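The repeated-pattern search can be illustrated with a deliberately naive sketch. The patent itself works on a delimiter-separated character string and can use linear-time suffix-tree algorithms (see the Gusfield reference); the brute-force O(n^2) scan below over per-file chunk-ID sequences is only meant to show what "all repeated patterns" with their appearance counts and positions means.

```python
def repeated_patterns(files, min_len=2, min_count=2):
    """Brute-force stand-in for the repeated-pattern search.
    `files` is a list of chunk-ID sequences (configuration chunk IDs 505);
    scanning each sequence separately plays the role of the file
    delimiter identifiers, so no pattern crosses a file boundary.
    Returns {pattern: [(file_index, start), ...]} for every run of chunk
    IDs appearing at least `min_count` times."""
    counts = {}
    for fi, seq in enumerate(files):
        for start in range(len(seq)):
            for length in range(min_len, len(seq) - start + 1):
                pat = tuple(seq[start:start + length])
                counts.setdefault(pat, []).append((fi, start))
    return {p: pos for p, pos in counts.items() if len(pos) >= min_count}
```

For the sequence 1-2-3-4-1-2-3-5, the pattern (1, 2, 3) is reported with two appearance positions, while (3, 4), which occurs only once, is not.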
- The processing for creating the chunk pattern information table 600 has been described above. In this processing, the repeated-pattern search is executed over all the read chunk IDs 505; however, by restricting the search in advance to duplicate chunks only, the memory required for the search can be reduced and the search speed improved.
- Whether each chunk is a duplicate can be determined by referring to the chunk management table 300 and checking whether the duplication number 304 corresponding to its chunk ID 301 is 2 or more. After creating the chunk pattern information table 223, the chunk pattern analysis module 221 proceeds to the processing of S1204.
- In S1204, the chunk pattern analysis module 221 divides each chunk pattern in the chunk pattern information table 223 that contains the same chunk more than once into a plurality of chunk patterns.
- For example, the chunk pattern "1-2-3-1-5-6" in the third row of FIG. 6A contains the chunk "1" twice. It is therefore divided, as shown in the upper table of FIG. 6B, into "1-2-3" and "1-5-6", chunk patterns that contain no repeated chunk internally.
- This division is not unique; a division is adopted such that the chunk patterns after division are as long as possible.
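The greedy division can be sketched as follows. Cutting immediately before each chunk ID that has already occurred in the current piece keeps every piece as long as possible, which is one way (an assumption on our part, since the patent leaves the concrete choice open) to satisfy the "as large as possible" preference.

```python
def split_at_repeats(pattern):
    """Split a chunk pattern so no piece contains the same chunk ID
    twice: start a new piece just before each repeated ID."""
    pieces, cur, seen = [], [], set()
    for cid in pattern:
        if cid in seen:             # repeat -> close the current piece
            pieces.append(cur)
            cur, seen = [], set()
        cur.append(cid)
        seen.add(cid)
    if cur:
        pieces.append(cur)
    return pieces
```

Applied to the example pattern 1-2-3-1-5-6, this yields the two pieces 1-2-3 and 1-5-6 from FIG. 6B.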
- As a result of the division, identical chunk patterns 601 may arise in the chunk pattern information table 223. In that case, the identical chunk patterns 601 are merged into a single entry.
- Specifically, the number of appearances 603 is rewritten to the sum of the individual appearance counts, and the appearance positions 604 are rewritten to the concatenation of the individual appearance positions.
- the chunk pattern analysis module 221 proceeds to the processing of S1205.
- In S1205, the chunk pattern analysis module 221 removes from the chunk pattern information table 600 any chunk pattern 601 that does not satisfy the chunk pattern policy.
- The chunk pattern policy includes, for example, a minimum length and a minimum number of appearances for a chunk pattern 601; these values are assumed to be set in advance by an administrator or the like in a parameter storage area prepared in the chunk integration function unit 220.
- A maximum value of the length 602 of a chunk pattern 601 and a maximum number of appearances 603 may also be set as part of the policy.
- The chunk pattern analysis module 221 removes the entries of the chunk patterns 601 that violate the set policy from the chunk pattern information table 600, and then proceeds to the processing of S1206.
- In S1206, the chunk pattern analysis module 221 excludes chunk patterns 601 that contain the same chunk as another chunk pattern 601. This processing further improves deduplication efficiency by not storing a duplicate chunk in more than one integrated chunk.
- For example, the chunk patterns "1-2-3" and "1-5-6" both contain the chunk "1", so one of the two is excluded from the chunk pattern information table 600.
- Which chunk pattern to exclude can be determined according to a preset rule, for example "exclude the one with the lower number of appearances 603" or "exclude the one with the smaller length 602".
- the chunk pattern analysis module 221 excludes the entry of the chunk pattern 601 including the same chunk from the chunk pattern information table 600, and then proceeds to the processing of S1207.
- In S1207, the chunk pattern analysis module 221 updates the integrated chunk information table 700.
- the chunk pattern analysis module 221 determines a chunk pattern 601 included in the chunk pattern information table 600 after execution of the processing of S1206 as a chunk to be integrated, and newly registers information related to each chunk pattern 601 in the integrated chunk information table 700.
- the chunk pattern analysis module 221 assigns a new chunk ID 301 to each chunk pattern 601 and registers it in the integrated chunk ID 701.
- The chunk IDs 301 constituting the chunk pattern identified by each integrated chunk ID 701 are registered as the sub chunk IDs 702, the number of appearances of the chunk pattern as the number of appearances 703, and the date of the new registration of the integrated chunk ID 701 as the update date 704.
- By the above processing, the chunks generated by dividing the data stored in the storage device 120 can be reconfigured into longer integrated chunks. As a result, both the storage efficiency of the storage apparatus 120 and the speed of reading data from the storage apparatus 120 can be improved.
- FIG. 13 shows an example of a data processing flow executed by the chunk management table update module 222.
- The chunk management table update module 222 can start processing immediately after the processing of the chunk pattern analysis module 221 finishes, or after some time has passed.
- the chunk management table update module 222 rearranges data on the storage device 123. This processing is realized by performing the following processing for each entry corresponding to the integrated chunk ID 701 included in the integrated chunk information table 700.
- the chunk management table update module 222 refers to the chunk management table 300 for the sub chunks included in the integrated chunk, and acquires the data storage destination 305 of each sub chunk.
- the chunk management table update module 222 acquires the data of each sub chunk from the storage device 123, concatenates it, and temporarily stores it in the memory 132.
- the chunk management table update module 222 writes the data of the connected sub chunk as a new chunk in the storage device 123. At this time, the chunk management table update module 222 holds the written position internally.
- the chunk management table update module 222 erases the original sub-chunk data on the storage device 123.
- the chunk management table update module 222 performs the above data rearrangement process, and then proceeds to the process of S1302.
- the chunk management table update module 222 adds the integrated chunk attribute information to the chunk management table 300.
- FIG. 4B shows a state after the integrated chunk “9” is newly added to the chunk management table 300.
- For the hash value 302, a value calculated by applying the predetermined hash function to the data of the integrated chunk to be registered is registered.
- For the chunk size 303, the size of the integrated chunk is registered, and for the duplication number 304, the value of the number of appearances 703 in the integrated chunk information table 700 is registered.
- For the storage destination 305, the position at which the data of the integrated chunk was written to the storage device 123 in S1301 is registered.
- the chunk management table update module 222 changes the storage destination 305 of the sub chunk included in the integrated chunk.
- the sub-chunk storage destination 305 can be determined based on the integrated chunk storage destination 305, the order of the sub-chunks in the integrated chunk, and the size of the sub-chunk constituting the integrated chunk.
- For example, the chunk "3" is located at the position advanced from the storage destination 305 of the chunk "9" by the combined data length of the chunks "1" and "2".
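The offset arithmetic in this example is simple enough to state as a one-line helper; the parameter names are illustrative.

```python
def sub_chunk_offset(integrated_offset, sub_sizes, index):
    """Storage destination of sub chunk `index` inside an integrated
    chunk: the integrated chunk's own offset advanced by the sizes of
    the sub chunks that precede it (e.g. chunk "3" inside "9" = "1-2-3"
    lies at offset("9") + size("1") + size("2"))."""
    return integrated_offset + sum(sub_sizes[:index])
```

With an integrated chunk stored at offset 100 and sub-chunk sizes 4, 6, and 5 bytes, the third sub chunk starts at offset 110.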
- the chunk management table update module 222 updates the chunk management table 300, and then proceeds to the processing of S1303.
- FIG. 5B shows the state of the file management table 500 after "1-2-3", the configuration chunk IDs 505 of the file identified by the file name "sample1.txt", has been replaced with the integrated chunk "9".
- data storage in the storage apparatus 120 is performed using the reconstructed integrated chunk, so that the storage efficiency of the storage apparatus 120 can be improved.
- the data reading speed can be improved.
- In the first embodiment described above, the chunking module 211 of the standard deduplication function unit 210 basically performs chunking in S802 of the data processing flow illustrated in FIG. 8 with the same chunking method as the one used when the sub chunks were generated. If it is known in advance, for example from the characteristics of the data the host computer 110 stores in the storage device 120, that sub chunks rarely appear alone in the data and appear almost only as integrated chunks, the chunking method can be changed temporarily so that the integrated chunk is output from the beginning without being divided into sub chunks. This chunking method is described below as the second embodiment.
- In the second embodiment, the deduplication apparatus 130 basically has the same configuration as in the first embodiment and executes the same data processing. Only the parts of the second embodiment that differ from the first embodiment are described here.
- In the second embodiment, a chunk management table 300 (216) whose configuration differs from that of the first embodiment is used.
- a configuration example of the chunk management table 300 according to the second embodiment will be described with reference to FIGS. 14A and 14B.
- the chunk management table 300 according to the second embodiment includes a skip size 1403 as a new item that was not found in the first embodiment.
- The skip size 1403 represents a data size, set in advance, by which the search for a division position can be skipped when the chunking module 211 divides new data received from the host computer 110 into chunks.
- the chunking module 211 sequentially scans new data received from the host computer 110 from the top in step S802 in FIG. 8 and searches for a division position for dividing the new data into chunks.
- Suppose, for example, that the chunking module 211 has generated a certain chunk.
- The chunking module 211 then moves the scanning position forward by the skip size 1403, here 2.3 KB. If the chunks "1-2-3" appear consecutively (which the data characteristics described above make likely), the chunking module 211 finds the end position of chunk "3" by scanning after the 2.3 KB skip. As a result, the chunking module outputs the integrated chunk "1-2-3" as a single divided chunk.
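A simplified sketch of the skip-size idea follows, under the assumption (ours, for illustration) that a leading sub chunk is used to look up a known integrated chunk: the bytes the skip would jump over are verified against the remainder of the integrated chunk, and on a match the integrated chunk is emitted without searching for intermediate division positions.

```python
def chunk_with_skip(data, find_boundary, integrated):
    """Illustrative skip-size chunking. `find_boundary(data, start)`
    returns the end offset of the next content-defined chunk.
    `integrated` maps a leading sub chunk (bytes) -> the full integrated
    chunk bytes; the difference in their lengths plays the role of the
    skip size 1403."""
    chunks, pos = [], 0
    while pos < len(data):
        end = find_boundary(data, pos)
        lead = data[pos:end]
        full = integrated.get(lead)
        if full is not None:
            skip = len(full) - len(lead)              # skip size 1403
            if data[end:end + skip] == full[len(lead):]:  # verify skipped span
                chunks.append(full)                   # emit integrated chunk
                pos = end + skip                      # no boundary search here
                continue
        chunks.append(lead)                           # ordinary chunk
        pos = end
    return chunks
```

If the verification fails (the sub chunk does appear alone after all), the code falls back to emitting the ordinary chunk, mirroring the collation in S902 deciding whether the output matches an integrated chunk such as "9".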
- In this case, in S902 the chunk collation module 212 collates the divided chunk against the chunk management table 300 and determines that the chunk output by the chunking module 211 is identical to the integrated chunk "9".
- According to the second embodiment, the speed of the chunking process can be improved by skipping the division position determination, and the number of chunks to be collated can be reduced.
- The configuration of this embodiment becomes more effective the longer the integrated chunks are, and the time required for deduplication processing can be reduced by further shortening the chunking process.
- the time required for data restoration processing can be shortened without lowering the deduplication rate of stored data.
- 1 storage system, 101 network, 110 host computer, 111, 131 CPU, 112, 132 memory, 113, 122, 135 network interface, 121 storage control device, 123 storage device, 124 hard disk, 133 auxiliary storage device, 134 I/O interface, 210 standard deduplication function unit, 211 chunking module, 212 chunk collation module, 213 chunk registration module, 214 file restoration module, 215, 500 file management table, 216, 300 chunk management table, 220 chunk integration function unit, 221 chunk pattern analysis module, 222 chunk management table update module, 223, 600 chunk pattern information table, 224, 700 integrated chunk information table
Abstract
Description
(1) Chunking process: the data stored in the storage is divided into data fragments called chunks.
(2) Duplicate determination process: it is determined whether a chunk identical to a newly created chunk already exists in the storage (i.e., whether it is stored redundantly).
(3) Metadata creation process: among the newly created chunks, only the non-duplicate chunks are saved to the storage, and information used for the duplicate determination process and for restoring the original data from the stored chunk data (hereinafter called "metadata") is created.
a step of dividing the data to be stored in the storage apparatus into the data fragments;
a step of recording the data by the composition of the data fragments after the division;
a step of determining, for each data fragment, whether an identical data fragment exists;
a step of, when it is determined that identical data fragments exist, storing one of the data fragments in a storage area of the storage apparatus, and generating and recording data fragment attribute information, which is information indicating attributes specific to that data fragment;
a step of, upon receiving a read request for the data stored in the storage area of the storage apparatus, acquiring the composition of the data fragments forming the read target data, reading the corresponding data fragments from the storage area of the storage apparatus, and restoring the data;
a step of generating integration target data, which is the target for determining whether chunks can be integrated, by acquiring and concatenating the recorded data fragments, and detecting whether the integration target data contains a repeated data pattern, that is, a repetition of a specific data pattern; and
a step of taking each detected sequence of a plurality of the data fragments having the repeated data pattern as an integrated data fragment, and generating and recording, from each integrated data fragment, integrated data fragment attribute information, which is information representing the attributes of that integrated data fragment.
Another aspect of the present invention is a deduplication apparatus for realizing the above deduplication method. Yet another aspect of the present invention is a deduplication program for causing a computer to execute the above deduplication method.
The first embodiment of the present invention is described below.
FIG. 1 shows an example of the system configuration of a storage system 1 to which the first embodiment of the present invention is applied. As shown in FIG. 1, the system 1 includes a host computer 110, a storage apparatus 120, and a deduplication apparatus 130, and these apparatuses are connected via a network 101 so as to be able to communicate with one another.
S. Quinlan and S. Dorward, "Venti: a new approach to archival storage," The First USENIX Conference on File and Storage Technologies (FAST '02), January 2002.
A. Muthitacharoen, B. Chen, and D. Mazieres, "A low-bandwidth network file system," The 18th ACM Symposium on Operating Systems Principles (SOSP), Banff, Alberta, Canada, October 2001.
B. Zhu, K. Li, and H. Patterson, "Avoiding the disk bottleneck in the Data Domain deduplication file system," The 6th USENIX Conference on File and Storage Technologies (FAST '08), February 2008.
Gusfield, Dan (1999) [1997]. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. USA: Cambridge University Press. p. 143.
Next, the second embodiment of the present invention is described. In the first embodiment described above, the chunking module 211 of the standard deduplication function unit 210 basically performs chunking in S802 of the data processing flow illustrated in FIG. 8 with the same chunking method as the one used when the sub chunks were generated. If it is known in advance, for example from the characteristics of the data that the host computer 110 stores in the storage apparatus 120, that sub chunks rarely appear alone in the data and appear only as integrated chunks, the chunking method can be changed temporarily so that the data is output from the beginning as integrated chunks without being divided into sub chunks. That chunking method is described below in the second embodiment.
Claims (17)
- A stored data deduplication method for eliminating, from a storage area of a storage apparatus, duplicate data fragments, which are data fragments duplicating one of the data fragments constituting data stored in the storage apparatus, the method comprising:
dividing the data to be stored in the storage apparatus into the data fragments;
recording the data by the composition of the data fragments after the division;
determining, for each data fragment, whether an identical data fragment exists;
when it is determined that identical data fragments exist, storing one of the data fragments in a storage area of the storage apparatus, and generating and recording data fragment attribute information, which is information indicating attributes specific to that data fragment;
upon receiving a read request for the data stored in the storage area of the storage apparatus, acquiring the composition of the data fragments forming the read target data, reading the corresponding data fragments from the storage area of the storage apparatus, and restoring the data;
generating integration target data, which is the target for determining whether chunks can be integrated, by acquiring and concatenating the recorded data fragments, and detecting whether the integration target data contains a repeated data pattern, that is, a repetition of a specific data pattern; and
- taking each detected sequence of a plurality of the data fragments having the repeated data pattern as an integrated data fragment, and generating and recording, from each integrated data fragment, integrated data fragment attribute information, which is information representing the attributes of that integrated data fragment. - The stored data deduplication method according to claim 1, wherein the data fragment attribute information includes a hash value calculated for the data fragment using a predetermined hash function and storage destination information of the data fragment in the storage area, and whether a duplicate data fragment exists among the data fragments is determined by comparing the hash values calculated for the respective data fragments.
- The stored data deduplication method according to claim 2, wherein the storage destination information is acquired for the plurality of data fragments included in an integrated data fragment, and the data fragments included in the integrated data fragment are rearranged in accordance with the storage destination information so as to be stored contiguously in the storage area of the storage apparatus.
- The stored data deduplication method according to claim 1, wherein, when a detected repeated data pattern contains a plurality of identical data fragments, the repeated data pattern is divided so that no identical data fragments are contained.
- The stored data deduplication method according to claim 1, wherein a repeated data pattern is not recorded when it is shorter than a predetermined length or when the number of its detections is less than a predetermined value.
- The stored data deduplication method according to claim 1, wherein, when a plurality of detected repeated data patterns contain the same data fragment, no repeated data pattern is recorded other than the one selected according to a predetermined rule.
- The stored data deduplication method according to claim 1, wherein, when the data fragments are acquired and concatenated, the repeated data pattern is not recognized across a boundary position of the data written to or read from the storage apparatus.
- The stored data deduplication method according to claim 1, wherein, when the repeated data patterns are detected, division positions of the data fragments located within a span shorter than the length of an already recorded integrated data fragment are not recognized for the purpose of detecting the repeated data patterns.
- A stored data deduplication apparatus for eliminating, from a storage area of a storage apparatus, duplicate data fragments, which are data fragments duplicating one of the data fragments constituting data stored in the storage apparatus, the deduplication apparatus comprising a processor and a memory, wherein each of the following units is realized by the processor executing a corresponding program on the memory:
a data division unit that divides the data to be stored in the storage apparatus into the data fragments;
a data registration unit that records the data by the composition of the data fragments after the division;
a data collation unit that determines, for each data fragment, whether an identical data fragment exists and, when it determines that identical data fragments exist, stores one of the data fragments in the storage area of the storage apparatus and generates and records data fragment attribute information, which is information indicating attributes specific to that data fragment;
a data restoration unit that, upon receiving a read request for the data stored in the storage area of the storage apparatus, acquires the composition of the data fragments forming the read target data, reads the corresponding data fragments from the storage area of the storage apparatus, and restores the data;
a data analysis unit that generates integration target data, which is the target for determining whether chunks can be integrated, by acquiring and concatenating the recorded data fragments, and detects whether the integration target data contains a repeated data pattern, that is, a repetition of a specific data pattern; and
a data update unit that takes each detected sequence of a plurality of the data fragments having the repeated data pattern as an integrated data fragment, and generates and records, from each integrated data fragment, integrated data fragment attribute information, which is information representing the attributes of that integrated data fragment. - The stored data deduplication apparatus according to claim 9, wherein the data fragment attribute information includes a hash value calculated for the data fragment using a predetermined hash function and storage destination information of the data fragment in the storage area, and whether a duplicate data fragment exists among the data fragments is determined by comparing the hash values calculated for the respective data fragments.
- The stored data deduplication apparatus according to claim 10, wherein the storage destination information is acquired for the plurality of data fragments included in an integrated data fragment, and the data fragments included in the integrated data fragment are rearranged in accordance with the storage destination information so as to be stored contiguously in the storage area of the storage apparatus.
- The stored data deduplication apparatus according to claim 9, wherein, when a detected repeated data pattern contains a plurality of identical data fragments, the repeated data pattern is divided so that no identical data fragments are contained.
- The stored data deduplication apparatus according to claim 9, wherein a repeated data pattern is not recorded when it is shorter than a predetermined length or when the number of its detections is less than a predetermined value.
- The stored data deduplication apparatus according to claim 9, wherein, when a plurality of detected repeated data patterns contain the same data fragment, no repeated data pattern is recorded other than the one selected according to a predetermined rule.
- The stored data deduplication apparatus according to claim 10, wherein, when the data fragments are acquired and concatenated, the repeated data pattern is not recognized across a boundary position of the data written to or read from the storage apparatus.
- The stored data deduplication apparatus according to claim 9, wherein, when the repeated data patterns are detected, division positions of the data fragments located within a span shorter than the length of an already recorded integrated data fragment are not recognized for the purpose of detecting the repeated data patterns.
- A deduplication program used to eliminate, from a storage area of a storage apparatus, duplicate data fragments, which are data fragments duplicating one of the data fragments constituting data stored in the storage apparatus, the program causing a computer to execute:
dividing the data to be stored in the storage apparatus into the data fragments;
recording the data by the composition of the data fragments after the division;
determining, for each data fragment, whether an identical data fragment exists;
when it is determined that identical data fragments exist, storing one of the data fragments in a storage area of the storage apparatus, and generating and recording data fragment attribute information, which is information indicating attributes specific to that data fragment;
upon receiving a read request for the data stored in the storage area of the storage apparatus, acquiring the composition of the data fragments forming the read target data, reading the corresponding data fragments from the storage area of the storage apparatus, and restoring the data;
generating integration target data, which is the target for determining whether chunks can be integrated, by acquiring and concatenating the recorded data fragments, and detecting whether the integration target data contains a repeated data pattern, that is, a repetition of a specific data pattern; and
taking each detected sequence of a plurality of the data fragments having the repeated data pattern as an integrated data fragment, and generating and recording, from each integrated data fragment, integrated data fragment attribute information, which is information representing the attributes of that integrated data fragment.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/349,561 US9542413B2 (en) | 2011-10-06 | 2011-10-06 | Stored data deduplication method, stored data deduplication apparatus, and deduplication program |
JP2013537331A JP5735654B2 (ja) | 2011-10-06 | 2011-10-06 | 格納データの重複排除方法、格納データの重複排除装置、及び重複排除プログラム |
PCT/JP2011/073085 WO2013051129A1 (ja) | 2011-10-06 | 2011-10-06 | 格納データの重複排除方法、格納データの重複排除装置、及び重複排除プログラム |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/073085 WO2013051129A1 (ja) | 2011-10-06 | 2011-10-06 | 格納データの重複排除方法、格納データの重複排除装置、及び重複排除プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013051129A1 true WO2013051129A1 (ja) | 2013-04-11 |
Family
ID=48043321
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/073085 WO2013051129A1 (ja) | 2011-10-06 | 2011-10-06 | 格納データの重複排除方法、格納データの重複排除装置、及び重複排除プログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US9542413B2 (ja) |
JP (1) | JP5735654B2 (ja) |
WO (1) | WO2013051129A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014136183A1 (ja) * | 2013-03-04 | 2014-09-12 | 株式会社日立製作所 | ストレージ装置及びデータ管理方法 |
WO2015068233A1 (ja) * | 2013-11-07 | 2015-05-14 | 株式会社日立製作所 | ストレージシステム |
KR20160011212A (ko) * | 2013-05-17 | 2016-01-29 | 아브 이니티오 테크놀로지 엘엘시 | 데이터 운영을 위한 메모리 및 스토리지 공간 관리 |
WO2017141315A1 (ja) * | 2016-02-15 | 2017-08-24 | 株式会社日立製作所 | ストレージ装置 |
Families Citing this family (152)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11275509B1 (en) | 2010-09-15 | 2022-03-15 | Pure Storage, Inc. | Intelligently sizing high latency I/O requests in a storage environment |
US8732426B2 (en) | 2010-09-15 | 2014-05-20 | Pure Storage, Inc. | Scheduling of reactive I/O operations in a storage environment |
US11614893B2 (en) | 2010-09-15 | 2023-03-28 | Pure Storage, Inc. | Optimizing storage device access based on latency |
US8468318B2 (en) | 2010-09-15 | 2013-06-18 | Pure Storage Inc. | Scheduling of I/O writes in a storage environment |
US12008266B2 (en) | 2010-09-15 | 2024-06-11 | Pure Storage, Inc. | Efficient read by reconstruction |
US8589655B2 (en) | 2010-09-15 | 2013-11-19 | Pure Storage, Inc. | Scheduling of I/O in an SSD environment |
US8589625B2 (en) | 2010-09-15 | 2013-11-19 | Pure Storage, Inc. | Scheduling of reconstructive I/O read operations in a storage environment |
US9244769B2 (en) | 2010-09-28 | 2016-01-26 | Pure Storage, Inc. | Offset protection data in a RAID array |
US8775868B2 (en) | 2010-09-28 | 2014-07-08 | Pure Storage, Inc. | Adaptive RAID for an SSD environment |
US11636031B2 (en) | 2011-08-11 | 2023-04-25 | Pure Storage, Inc. | Optimized inline deduplication |
US8589640B2 (en) | 2011-10-14 | 2013-11-19 | Pure Storage, Inc. | Method for maintaining multiple fingerprint tables in a deduplicating storage system |
US8719540B1 (en) | 2012-03-15 | 2014-05-06 | Pure Storage, Inc. | Fractal layout of data blocks across multiple devices |
US10623386B1 (en) | 2012-09-26 | 2020-04-14 | Pure Storage, Inc. | Secret sharing data protection in a storage system |
US8745415B2 (en) | 2012-09-26 | 2014-06-03 | Pure Storage, Inc. | Multi-drive cooperation to generate an encryption key |
US11032259B1 (en) | 2012-09-26 | 2021-06-08 | Pure Storage, Inc. | Data protection in a storage system |
US9063967B2 (en) | 2013-01-10 | 2015-06-23 | Pure Storage, Inc. | Performing copies in a storage system |
US11733908B2 (en) | 2013-01-10 | 2023-08-22 | Pure Storage, Inc. | Delaying deletion of a dataset |
US10908835B1 (en) | 2013-01-10 | 2021-02-02 | Pure Storage, Inc. | Reversing deletion of a virtual machine |
US11768623B2 (en) | 2013-01-10 | 2023-09-26 | Pure Storage, Inc. | Optimizing generalized transfers between storage systems |
US9697226B1 (en) * | 2013-06-28 | 2017-07-04 | Sanmina Corporation | Network system to distribute chunks across multiple physical nodes |
US20150006846A1 (en) * | 2013-06-28 | 2015-01-01 | Saratoga Speed, Inc. | Network system to distribute chunks across multiple physical nodes with disk support for object storage |
US10263770B2 (en) | 2013-11-06 | 2019-04-16 | Pure Storage, Inc. | Data protection in a storage system using external secrets |
US11128448B1 (en) | 2013-11-06 | 2021-09-21 | Pure Storage, Inc. | Quorum-aware secret sharing |
US10365858B2 (en) | 2013-11-06 | 2019-07-30 | Pure Storage, Inc. | Thin provisioning in a storage device |
US9208086B1 (en) | 2014-01-09 | 2015-12-08 | Pure Storage, Inc. | Using frequency domain to prioritize storage of metadata in a cache |
JP6260359B2 (ja) * | 2014-03-07 | 2018-01-17 | 富士通株式会社 | データ分割処理プログラム,データ分割処理装置及びデータ分割処理方法 |
US10656864B2 (en) | 2014-03-20 | 2020-05-19 | Pure Storage, Inc. | Data replication within a flash storage array |
CN105446964B (zh) * | 2014-05-30 | 2019-04-26 | 国际商业机器公司 | 用于文件的重复数据删除的方法及装置 |
US9779268B1 (en) | 2014-06-03 | 2017-10-03 | Pure Storage, Inc. | Utilizing a non-repeating identifier to encrypt data |
US9218244B1 (en) | 2014-06-04 | 2015-12-22 | Pure Storage, Inc. | Rebuilding data across storage nodes |
US11399063B2 (en) | 2014-06-04 | 2022-07-26 | Pure Storage, Inc. | Network authentication for a storage system |
US9218407B1 (en) | 2014-06-25 | 2015-12-22 | Pure Storage, Inc. | Replication and intermediate read-write state for mediums |
US10496556B1 (en) | 2014-06-25 | 2019-12-03 | Pure Storage, Inc. | Dynamic data protection within a flash storage system |
US10296469B1 (en) | 2014-07-24 | 2019-05-21 | Pure Storage, Inc. | Access control in a flash storage system |
US9495255B2 (en) | 2014-08-07 | 2016-11-15 | Pure Storage, Inc. | Error recovery in a storage cluster |
US9558069B2 (en) | 2014-08-07 | 2017-01-31 | Pure Storage, Inc. | Failure mapping in a storage array |
US9864761B1 (en) | 2014-08-08 | 2018-01-09 | Pure Storage, Inc. | Read optimization operations in a storage system |
US10430079B2 (en) | 2014-09-08 | 2019-10-01 | Pure Storage, Inc. | Adjusting storage capacity in a computing system |
US9753955B2 (en) | 2014-09-16 | 2017-09-05 | Commvault Systems, Inc. | Fast deduplication data verification |
US10164841B2 (en) | 2014-10-02 | 2018-12-25 | Pure Storage, Inc. | Cloud assist for storage systems |
US10430282B2 (en) | 2014-10-07 | 2019-10-01 | Pure Storage, Inc. | Optimizing replication by distinguishing user and system write activity |
US9489132B2 (en) | 2014-10-07 | 2016-11-08 | Pure Storage, Inc. | Utilizing unmapped and unknown states in a replicated storage system |
US9727485B1 (en) | 2014-11-24 | 2017-08-08 | Pure Storage, Inc. | Metadata rewrite and flatten optimization |
US9773007B1 (en) | 2014-12-01 | 2017-09-26 | Pure Storage, Inc. | Performance improvements in a storage system |
US9552248B2 (en) | 2014-12-11 | 2017-01-24 | Pure Storage, Inc. | Cloud alert to replica |
US9588842B1 (en) | 2014-12-11 | 2017-03-07 | Pure Storage, Inc. | Drive rebuild |
US9864769B2 (en) * | 2014-12-12 | 2018-01-09 | Pure Storage, Inc. | Storing data utilizing repeating pattern detection |
US10545987B2 (en) | 2014-12-19 | 2020-01-28 | Pure Storage, Inc. | Replication to the cloud |
US10296354B1 (en) | 2015-01-21 | 2019-05-21 | Pure Storage, Inc. | Optimized boot operations within a flash storage array |
US11947968B2 (en) | 2015-01-21 | 2024-04-02 | Pure Storage, Inc. | Efficient use of zone in a storage device |
US10437784B2 (en) * | 2015-01-30 | 2019-10-08 | SK Hynix Inc. | Method and system for endurance enhancing, deferred deduplication with hardware-hash-enabled storage device |
US9710165B1 (en) | 2015-02-18 | 2017-07-18 | Pure Storage, Inc. | Identifying volume candidates for space reclamation |
US9921910B2 (en) | 2015-02-19 | 2018-03-20 | Netapp, Inc. | Virtual chunk service based data recovery in a distributed data storage system |
US10082985B2 (en) | 2015-03-27 | 2018-09-25 | Pure Storage, Inc. | Data striping across storage nodes that are assigned to multiple logical arrays |
US10178169B2 (en) | 2015-04-09 | 2019-01-08 | Pure Storage, Inc. | Point to point based backend communication layer for storage processing |
US9639274B2 (en) | 2015-04-14 | 2017-05-02 | Commvault Systems, Inc. | Efficient deduplication database validation |
US10140149B1 (en) | 2015-05-19 | 2018-11-27 | Pure Storage, Inc. | Transactional commits with hardware assists in remote memory |
US10310740B2 (en) | 2015-06-23 | 2019-06-04 | Pure Storage, Inc. | Aligning memory access operations to a geometry of a storage device |
US9547441B1 (en) | 2015-06-23 | 2017-01-17 | Pure Storage, Inc. | Exposing a geometry of a storage device |
US11294588B1 (en) * | 2015-08-24 | 2022-04-05 | Pure Storage, Inc. | Placing data within a storage device |
US11341136B2 (en) | 2015-09-04 | 2022-05-24 | Pure Storage, Inc. | Dynamically resizable structures for approximate membership queries |
KR20170028825 (ko) | 2015-09-04 | 2017-03-14 | Pure Storage, Inc. | Memory-efficient storage and search in hash tables using compressed indexes |
US11269884B2 (en) | 2015-09-04 | 2022-03-08 | Pure Storage, Inc. | Dynamically resizable structures for approximate membership queries |
US9843453B2 (en) | 2015-10-23 | 2017-12-12 | Pure Storage, Inc. | Authorizing I/O commands with I/O tokens |
US10083185B2 (en) | 2015-11-09 | 2018-09-25 | International Business Machines Corporation | Enhanced data replication |
US10013201B2 (en) | 2016-03-29 | 2018-07-03 | International Business Machines Corporation | Region-integrated data deduplication |
US10452297B1 (en) | 2016-05-02 | 2019-10-22 | Pure Storage, Inc. | Generating and optimizing summary index levels in a deduplication storage system |
US10133503B1 (en) | 2016-05-02 | 2018-11-20 | Pure Storage, Inc. | Selecting a deduplication process based on a difference between performance metrics |
US10203903B2 (en) | 2016-07-26 | 2019-02-12 | Pure Storage, Inc. | Geometry based, space aware shelf/writegroup evacuation |
US10191662B2 (en) | 2016-10-04 | 2019-01-29 | Pure Storage, Inc. | Dynamic allocation of segments in a flash storage system |
US10162523B2 (en) | 2016-10-04 | 2018-12-25 | Pure Storage, Inc. | Migrating data between volumes using virtual copy operation |
US10613974B2 (en) | 2016-10-04 | 2020-04-07 | Pure Storage, Inc. | Peer-to-peer non-volatile random-access memory |
US10756816B1 (en) | 2016-10-04 | 2020-08-25 | Pure Storage, Inc. | Optimized fibre channel and non-volatile memory express access |
US10481798B2 (en) | 2016-10-28 | 2019-11-19 | Pure Storage, Inc. | Efficient flash management for multiple controllers |
US10185505B1 (en) | 2016-10-28 | 2019-01-22 | Pure Storage, Inc. | Reading a portion of data to replicate a volume based on sequence numbers |
US10359942B2 (en) | 2016-10-31 | 2019-07-23 | Pure Storage, Inc. | Deduplication aware scalable content placement |
US10452290B2 (en) | 2016-12-19 | 2019-10-22 | Pure Storage, Inc. | Block consolidation in a direct-mapped flash storage system |
US11550481B2 (en) | 2016-12-19 | 2023-01-10 | Pure Storage, Inc. | Efficiently writing data in a zoned drive storage system |
US11093146B2 (en) | 2017-01-12 | 2021-08-17 | Pure Storage, Inc. | Automatic load rebalancing of a write group |
US10528488B1 (en) | 2017-03-30 | 2020-01-07 | Pure Storage, Inc. | Efficient name coding |
US11403019B2 (en) | 2017-04-21 | 2022-08-02 | Pure Storage, Inc. | Deduplication-aware per-tenant encryption |
US12045487B2 (en) | 2017-04-21 | 2024-07-23 | Pure Storage, Inc. | Preserving data deduplication in a multi-tenant storage system |
US10944671B2 (en) | 2017-04-27 | 2021-03-09 | Pure Storage, Inc. | Efficient data forwarding in a networked device |
US10599355B2 (en) | 2017-05-12 | 2020-03-24 | Seagate Technology Llc | Data compression with redundancy removal across boundaries of compression search engines |
US10402266B1 (en) | 2017-07-31 | 2019-09-03 | Pure Storage, Inc. | Redundant array of independent disks in a direct-mapped flash storage system |
US10831935B2 (en) | 2017-08-31 | 2020-11-10 | Pure Storage, Inc. | Encryption management with host-side data reduction |
US10776202B1 (en) | 2017-09-22 | 2020-09-15 | Pure Storage, Inc. | Drive, blade, or data shard decommission via RAID geometry shrinkage |
US10789211B1 (en) | 2017-10-04 | 2020-09-29 | Pure Storage, Inc. | Feature-based deduplication |
US10884919B2 (en) | 2017-10-31 | 2021-01-05 | Pure Storage, Inc. | Memory management in a storage system |
US10860475B1 (en) | 2017-11-17 | 2020-12-08 | Pure Storage, Inc. | Hybrid flash translation layer |
US10740306B1 (en) * | 2017-12-04 | 2020-08-11 | Amazon Technologies, Inc. | Large object partitioning system |
US11144638B1 (en) | 2018-01-18 | 2021-10-12 | Pure Storage, Inc. | Method for storage system detection and alerting on potential malicious action |
US10970395B1 (en) | 2018-01-18 | 2021-04-06 | Pure Storage, Inc | Security threat monitoring for a storage system |
US11010233B1 (en) | 2018-01-18 | 2021-05-18 | Pure Storage, Inc | Hardware-based system monitoring |
US10467527B1 (en) | 2018-01-31 | 2019-11-05 | Pure Storage, Inc. | Method and apparatus for artificial intelligence acceleration |
US11036596B1 (en) | 2018-02-18 | 2021-06-15 | Pure Storage, Inc. | System for delaying acknowledgements on open NAND locations until durability has been confirmed |
US11494109B1 (en) | 2018-02-22 | 2022-11-08 | Pure Storage, Inc. | Erase block trimming for heterogenous flash memory storage devices |
US11934322B1 (en) | 2018-04-05 | 2024-03-19 | Pure Storage, Inc. | Multiple encryption keys on storage drives |
CN110389714B (zh) * | 2018-04-20 | 2022-12-23 | EMC IP Holding Company LLC | Method, apparatus, and computer storage medium for data input/output |
US11995336B2 (en) | 2018-04-25 | 2024-05-28 | Pure Storage, Inc. | Bucket views |
US11153094B2 (en) * | 2018-04-27 | 2021-10-19 | EMC IP Holding Company LLC | Secure data deduplication with smaller hash values |
US11385792B2 (en) | 2018-04-27 | 2022-07-12 | Pure Storage, Inc. | High availability controller pair transitioning |
US10678433B1 (en) | 2018-04-27 | 2020-06-09 | Pure Storage, Inc. | Resource-preserving system upgrade |
US20190361697A1 (en) * | 2018-05-22 | 2019-11-28 | Pure Storage, Inc. | Automatically creating a data analytics pipeline |
US10678436B1 (en) | 2018-05-29 | 2020-06-09 | Pure Storage, Inc. | Using a PID controller to opportunistically compress more data during garbage collection |
US11436023B2 (en) | 2018-05-31 | 2022-09-06 | Pure Storage, Inc. | Mechanism for updating host file system and flash translation layer based on underlying NAND technology |
US10776046B1 (en) | 2018-06-08 | 2020-09-15 | Pure Storage, Inc. | Optimized non-uniform memory access |
US11281577B1 (en) | 2018-06-19 | 2022-03-22 | Pure Storage, Inc. | Garbage collection tuning for low drive wear |
US11869586B2 (en) | 2018-07-11 | 2024-01-09 | Pure Storage, Inc. | Increased data protection by recovering data from partially-failed solid-state devices |
US11194759B2 (en) | 2018-09-06 | 2021-12-07 | Pure Storage, Inc. | Optimizing local data relocation operations of a storage device of a storage system |
US11133076B2 (en) | 2018-09-06 | 2021-09-28 | Pure Storage, Inc. | Efficient relocation of data between storage devices of a storage system |
US10846216B2 (en) | 2018-10-25 | 2020-11-24 | Pure Storage, Inc. | Scalable garbage collection |
US11113409B2 (en) | 2018-10-26 | 2021-09-07 | Pure Storage, Inc. | Efficient rekey in a transparent decrypting storage array |
US11194473B1 (en) | 2019-01-23 | 2021-12-07 | Pure Storage, Inc. | Programming frequently read data to low latency portions of a solid-state storage array |
US11588633B1 (en) | 2019-03-15 | 2023-02-21 | Pure Storage, Inc. | Decommissioning keys in a decryption storage system |
US11334254B2 (en) | 2019-03-29 | 2022-05-17 | Pure Storage, Inc. | Reliability based flash page sizing |
US11397674B1 (en) | 2019-04-03 | 2022-07-26 | Pure Storage, Inc. | Optimizing garbage collection across heterogeneous flash devices |
US11775189B2 (en) | 2019-04-03 | 2023-10-03 | Pure Storage, Inc. | Segment level heterogeneity |
US10990480B1 (en) | 2019-04-05 | 2021-04-27 | Pure Storage, Inc. | Performance of RAID rebuild operations by a storage group controller of a storage system |
US12087382B2 (en) | 2019-04-11 | 2024-09-10 | Pure Storage, Inc. | Adaptive threshold for bad flash memory blocks |
US11099986B2 (en) | 2019-04-12 | 2021-08-24 | Pure Storage, Inc. | Efficient transfer of memory contents |
US12001355B1 (en) | 2019-05-24 | 2024-06-04 | Pure Storage, Inc. | Chunked memory efficient storage data transfers |
US11487665B2 (en) | 2019-06-05 | 2022-11-01 | Pure Storage, Inc. | Tiered caching of data in a storage system |
US11281394B2 (en) | 2019-06-24 | 2022-03-22 | Pure Storage, Inc. | Replication across partitioning schemes in a distributed storage system |
US10929046B2 (en) | 2019-07-09 | 2021-02-23 | Pure Storage, Inc. | Identifying and relocating hot data to a cache determined with read velocity based on a threshold stored at a storage device |
US11422751B2 (en) | 2019-07-18 | 2022-08-23 | Pure Storage, Inc. | Creating a virtual storage system |
US11294871B2 (en) | 2019-07-19 | 2022-04-05 | Commvault Systems, Inc. | Deduplication system without reference counting |
US11086713B1 (en) | 2019-07-23 | 2021-08-10 | Pure Storage, Inc. | Optimized end-to-end integrity storage system |
US11963321B2 (en) | 2019-09-11 | 2024-04-16 | Pure Storage, Inc. | Low profile latching mechanism |
US11403043B2 (en) | 2019-10-15 | 2022-08-02 | Pure Storage, Inc. | Efficient data compression by grouping similar data within a data segment |
US11615185B2 (en) | 2019-11-22 | 2023-03-28 | Pure Storage, Inc. | Multi-layer security threat detection for a storage system |
US11645162B2 (en) | 2019-11-22 | 2023-05-09 | Pure Storage, Inc. | Recovery point determination for data restoration in a storage system |
US11651075B2 (en) | 2019-11-22 | 2023-05-16 | Pure Storage, Inc. | Extensible attack monitoring by a storage system |
US11755751B2 (en) | 2019-11-22 | 2023-09-12 | Pure Storage, Inc. | Modify access restrictions in response to a possible attack against data stored by a storage system |
US12050689B2 (en) | 2019-11-22 | 2024-07-30 | Pure Storage, Inc. | Host anomaly-based generation of snapshots |
US11675898B2 (en) | 2019-11-22 | 2023-06-13 | Pure Storage, Inc. | Recovery dataset management for security threat monitoring |
US12079356B2 (en) | 2019-11-22 | 2024-09-03 | Pure Storage, Inc. | Measurement interval anomaly detection-based generation of snapshots |
US11520907B1 (en) | 2019-11-22 | 2022-12-06 | Pure Storage, Inc. | Storage system snapshot retention based on encrypted data |
US12067118B2 (en) | 2019-11-22 | 2024-08-20 | Pure Storage, Inc. | Detection of writing to a non-header portion of a file as an indicator of a possible ransomware attack against a storage system |
US12079333B2 (en) | 2019-11-22 | 2024-09-03 | Pure Storage, Inc. | Independent security threat detection and remediation by storage systems in a synchronous replication arrangement |
US12079502B2 (en) | 2019-11-22 | 2024-09-03 | Pure Storage, Inc. | Storage element attribute-based determination of a data protection policy for use within a storage system |
US11687418B2 (en) | 2019-11-22 | 2023-06-27 | Pure Storage, Inc. | Automatic generation of recovery plans specific to individual storage elements |
US11500788B2 (en) | 2019-11-22 | 2022-11-15 | Pure Storage, Inc. | Logical address based authorization of operations with respect to a storage system |
US12050683B2 (en) * | 2019-11-22 | 2024-07-30 | Pure Storage, Inc. | Selective control of a data synchronization setting of a storage system based on a possible ransomware attack against the storage system |
US11941116B2 (en) | 2019-11-22 | 2024-03-26 | Pure Storage, Inc. | Ransomware-based data protection parameter modification |
US11341236B2 (en) | 2019-11-22 | 2022-05-24 | Pure Storage, Inc. | Traffic-based detection of a security threat to a storage system |
US11720714B2 (en) | 2019-11-22 | 2023-08-08 | Pure Storage, Inc. | Inter-I/O relationship based detection of a security threat to a storage system |
US11625481B2 (en) | 2019-11-22 | 2023-04-11 | Pure Storage, Inc. | Selective throttling of operations potentially related to a security threat to a storage system |
US11720692B2 (en) | 2019-11-22 | 2023-08-08 | Pure Storage, Inc. | Hardware token based management of recovery datasets for a storage system |
US11657155B2 (en) | 2019-11-22 | 2023-05-23 | Pure Storage, Inc | Snapshot delta metric based determination of a possible ransomware attack against data maintained by a storage system |
JP7476715B2 (ja) * | 2020-08-07 | 2024-05-01 | Fujitsu Limited | Information processing apparatus and duplication rate estimation program |
CN113885785B (zh) * | 2021-06-15 | 2022-07-26 | Honor Device Co., Ltd. | Data deduplication method and apparatus |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008070484A2 (en) * | 2006-12-01 | 2008-06-12 | Nec Laboratories America, Inc. | Methods and systems for quick and efficient data management and/or processing |
WO2009087028A1 (en) * | 2008-01-04 | 2009-07-16 | International Business Machines Corporation | Backing up a de-duplicated computer file-system of a computer system |
US20090193219A1 (en) * | 2008-01-29 | 2009-07-30 | Hitachi, Ltd. | Storage subsystem |
US20090313248A1 (en) * | 2008-06-11 | 2009-12-17 | International Business Machines Corporation | Method and apparatus for block size optimization in de-duplication |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080270436A1 (en) * | 2007-04-27 | 2008-10-30 | Fineberg Samuel A | Storing chunks within a file system |
US7567188B1 (en) * | 2008-04-10 | 2009-07-28 | International Business Machines Corporation | Policy based tiered data deduplication strategy |
US8140491B2 (en) * | 2009-03-26 | 2012-03-20 | International Business Machines Corporation | Storage management through adaptive deduplication |
US10394757B2 (en) * | 2010-11-18 | 2019-08-27 | Microsoft Technology Licensing, Llc | Scalable chunk store for data deduplication |
2011
- 2011-10-06 US US14/349,561 patent/US9542413B2/en not_active Expired - Fee Related
- 2011-10-06 JP JP2013537331A patent/JP5735654B2/ja not_active Expired - Fee Related
- 2011-10-06 WO PCT/JP2011/073085 patent/WO2013051129A1/ja active Application Filing
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014136183A1 (ja) * | 2013-03-04 | 2014-09-12 | Hitachi, Ltd. | Storage apparatus and data management method |
KR20160011212 (ko) | 2013-05-17 | 2016-01-29 | Ab Initio Technology LLC | Managing memory and storage space for data operations |
CN105556474A (zh) * | 2013-05-17 | 2016-05-04 | Ab Initio Technology LLC | Managing memory and storage space for data operations |
JP2016530584A (ja) * | 2013-05-17 | 2016-09-29 | Ab Initio Technology LLC | Managing memory and storage space for data operations |
CN105556474B (zh) * | 2013-05-17 | 2019-04-30 | Ab Initio Technology LLC | Managing memory and storage space for data operations |
KR102201510B1 (ko) | 2013-05-17 | 2021-01-11 | Ab Initio Technology LLC | Managing memory and storage space for data operations |
WO2015068233A1 (ja) * | 2013-11-07 | 2015-05-14 | Hitachi, Ltd. | Storage system |
US9720608B2 (en) | 2013-11-07 | 2017-08-01 | Hitachi, Ltd. | Storage system |
WO2017141315A1 (ja) | 2016-02-15 | 2017-08-24 | Hitachi, Ltd. | Storage apparatus |
JPWO2017141315A1 (ja) | 2016-02-15 | 2018-05-31 | Hitachi, Ltd. | Storage apparatus |
US10592150B2 (en) | 2016-02-15 | 2020-03-17 | Hitachi, Ltd. | Storage apparatus |
Also Published As
Publication number | Publication date |
---|---|
US20140229452A1 (en) | 2014-08-14 |
JP5735654B2 (ja) | 2015-06-17 |
JPWO2013051129A1 (ja) | 2015-03-30 |
US9542413B2 (en) | 2017-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5735654B2 (ja) | Stored-data deduplication method, stored-data deduplication apparatus, and deduplication program | |
US9633065B2 (en) | Efficient data rehydration | |
US8639669B1 (en) | Method and apparatus for determining optimal chunk sizes of a deduplicated storage system | |
US8712963B1 (en) | Method and apparatus for content-aware resizing of data chunks for replication | |
US9880759B2 (en) | Metadata for data storage array | |
CN104978151B (zh) | Application-aware data reconstruction method in a deduplication storage system | |
JP5878548B2 (ja) | Deduplication storage system, method for facilitating synthetic backups therein, and program | |
US9514138B1 (en) | Using read signature command in file system to backup data | |
JP6495568B2 (ja) | Method, computer-readable storage medium, and system for performing incremental SQL Server database backups | |
JP5434705B2 (ja) | Storage apparatus, storage apparatus control program, and storage apparatus control method | |
US8315985B1 (en) | Optimizing the de-duplication rate for a backup stream | |
CN107229420B (zh) | Data storage method, read method, delete method, and data operation system | |
US9665306B1 (en) | Method and system for enhancing data transfer at a storage system | |
JP6598996B2 (ja) | Signature-based cache optimization for data preparation | |
JP5719037B2 (ja) | Storage apparatus and duplicate data detection method | |
KR20170054299 (ko) | Technique for gathering reference blocks into a reference set for deduplication in memory management | |
WO2017042978A1 (ja) | Computer system, storage apparatus, and data management method | |
US8914325B2 (en) | Change tracking for multiphase deduplication | |
US9170747B2 (en) | Storage device, control device, and control method | |
JP2018530838A (ja) | Cache optimization for data preparation | |
US11593312B2 (en) | File layer to block layer communication for selective data reduction | |
KR101652436B1 (ko) | Apparatus and method for data deduplication in a distributed file system | |
CN106991020B (zh) | Efficient processing of file system objects of image-level backups | |
JPWO2007099636A1 (ja) | File system migration method, file system migration program, and file system migration apparatus | |
US20170351608A1 (en) | Host device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11873778 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2013537331 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 14349561 Country of ref document: US |
122 | Ep: pct application non-entry in european phase |
Ref document number: 11873778 Country of ref document: EP Kind code of ref document: A1 |