CN101878471B - Data storage space recovery system and method - Google Patents

Data storage space recovery system and method

Info

Publication number
CN101878471B
CN101878471B CN2008801039980A CN200880103998A
Authority
CN
China
Prior art keywords
file system
page
data
space
data space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2008801039980A
Other languages
Chinese (zh)
Other versions
CN101878471A (en)
Inventor
L·E·阿什曼
M·J·克莱姆
M·H·皮特尔科
M·D·奥尔森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DELL International Ltd
Original Assignee
Compellent Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Compellent Technologies Inc filed Critical Compellent Technologies Inc
Publication of CN101878471A
Application granted
Publication of CN101878471B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0608 Saving storage space on storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/1727 Details of free space management performed by the file system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0638 Organizing or formatting or addressing of data
    • G06F 3/0644 Management of space entities, e.g. partitions, extents, pools
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0662 Virtualisation aspects
    • G06F 3/0665 Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device
    • G06F 3/0674 Disk device
    • G06F 3/0676 Magnetic disk device

Abstract

A process of determining explicitly free data space in computer data storage systems with implicitly allocated data space, through the use of information provided by a hosting computer system that knows which allocated space is currently in use at the time of a query, is provided. In one embodiment, a File System (FS) is asked to identify clusters no longer in use, which are then mapped to physical disks as visible to an Operating System (OS). The physical disks are mapped to simulated/virtualized volumes presented by a storage subsystem. By using server information regarding the FS, the point-in-time copy (PITC) pages that are no longer in use are marked for future PITCs and will not be coalesced forward, thereby saving significant storage.

Description

Data storage space recovery system and method
Cross-reference to related applications
[001] This application claims the benefit of U.S. Patent Application Serial No. 11/767,049, entitled "Data Storage Space Recovery System and Method," filed June 22, 2007, and is related to co-pending U.S. Patent Application Serial No. 10/918,329, entitled "Virtual Disk Drive System and Method," filed August 13, 2004; both applications are incorporated herein by reference.
Technical field
[002] The present invention relates to determining explicitly free data space in computer data storage systems that have implicitly allocated data space, by using information provided by a host computer system that knows which of the allocated space is actually in use at the time of a query. By reducing the total amount of storage required, considerable cost savings can be realized over the lifetime of any given data.
Background of the invention
[003] The amount of data that must be stored and transmitted grows every year, for purposes that include business use and compliance with various laws. The media on which these data are recorded carry an acquisition price measured in dollars, a management price measured in staff time, and the price of infrastructure such as power, cooling, and other factors. It is desirable to reduce the cost of all of these factors. The cost of managing and providing this infrastructure is generally believed to be a multiple of the cost of acquiring the storage media, so by reducing the amount of media, the other infrastructure costs can be reduced further. The present invention provides a method by which data storage and the associated media can be saved, recycled, or reused, thereby reducing the total cost of owning data storage.
[004] It has previously been shown that a storage subsystem can be built in which all physical storage is initially assigned to a pool; an example is discussed in the co-pending U.S. Patent Application Serial No. 10/918,329, entitled "Virtual Disk Drive System and Method," filed August 13, 2004. This pool can then be allocated, as needed, to other entities accessible to a computing entity so that the entity can use it for data storage. Allocating storage from the pool to a computing entity on demand is commonly called "thin provisioning" in the field of the invention. Allocating storage only as it is needed relies on the implication that written storage is in use by the computing entity: if the computing entity writes data, it intends to store that data for later retrieval. By allocating only the storage identified by these specific operations, the considerable storage space that is not used, and may never be used, by a conventional storage subsystem can be omitted from the system as a whole, reducing costs such as acquisition and maintenance.
[005] In standard protocols, however, a computing entity has no way to communicate to the storage subsystem that a particular region in which data was previously stored is no longer in use and may now be reused or otherwise released. That data space may have held only temporary storage, or it may simply no longer be valuable enough to keep for further use. Because there is no way, from the storage subsystem's perspective alone, to identify regions that are no longer in use, the storage subsystem continues to maintain the data space. In other words, no implicit method exists by which previously implicitly allocated storage can be determined, beyond doubt and without examining the data itself, to be freeable. Similarly, it is very expensive in computational resources for the storage subsystem to examine the content of all the data that the computing entity has stored; a storage system attempting to keep pace with technology changes in operating systems or file systems, and with every possible application that might use the storage subsystem, would suffer a severe performance impact.
[006] In general, it is desirable to know exactly which blocks are in use, and which are not, for any operating system and any kind of file system, to help make thin provisioning as effective as possible. There is no standard "not in use" indication from a user of block storage to the storage device. For conventional storage devices this information is entirely immaterial, because one physical block is mapped by physical representation to each addressable block on the storage device. In virtually any storage system comprising more than one disk device, a given addressable block may in fact be mapped to almost any (and sometimes more than one) physical block on one or more physical disk devices. In a fully virtualized, thin-provisioned storage system, the information about which blocks are in use is gathered only implicitly: if a block has been written, it is assumed to be in use. This is an inherently safe assumption. Under thin provisioning, when a user writes to a given addressable block, a physical block is allocated as needed and mapped to the user's addressable block. A "read" of a block that has never been written may return dummy data, normally consisting of all zeros, with the expected total length. In this embodiment, the only way a block can be released for reuse is if a PITC is taken, the given logically addressable block is written again, and the previous PITC expires. Again, for the integrity of addressable storage, this implicitly indicates that the previously allocated block is no longer necessary and can be reallocated if needed, possibly to other volumes.
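The implicit allocation behavior described above, where a physical page is allocated only when an addressable block is first written and a read of a never-written block returns zero-filled dummy data, can be sketched roughly as follows. This is a minimal illustrative model only; the page size, class name, and data structures are assumptions made for the sketch, not the patent's implementation.

```python
# Minimal sketch of implicit (thin-provisioned) allocation: a backing page is
# allocated only on first write, and reads of never-written blocks return zeros.
# Page size, names, and structures are illustrative assumptions only.

PAGE_SECTORS = 2          # sectors per page (kept tiny for illustration)
SECTOR_BYTES = 512


class ThinVolume:
    def __init__(self):
        self.page_map = {}    # logical page index -> allocated page buffer

    def write_sector(self, lba, data):
        """Allocate the backing page on first write, then store the sector."""
        page_idx, offset = divmod(lba, PAGE_SECTORS)
        page = self.page_map.setdefault(page_idx, bytearray(PAGE_SECTORS * SECTOR_BYTES))
        page[offset * SECTOR_BYTES:(offset + 1) * SECTOR_BYTES] = data.ljust(SECTOR_BYTES, b"\0")

    def read_sector(self, lba):
        """Return stored data, or zero-filled dummy data if never written."""
        page_idx, offset = divmod(lba, PAGE_SECTORS)
        page = self.page_map.get(page_idx)
        if page is None:
            return bytes(SECTOR_BYTES)          # never written: all zeros
        return bytes(page[offset * SECTOR_BYTES:(offset + 1) * SECTOR_BYTES])


vol = ThinVolume()
vol.write_sector(3, b"hello")
assert vol.read_sector(3).startswith(b"hello")
assert vol.read_sector(7) == bytes(SECTOR_BYTES)   # unwritten sector reads as zeros
print(f"pages allocated: {len(vol.page_map)}")      # only the page holding LBA 3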
[007] Several conditions can lead to a large number of unused addressable blocks in any FS. An extreme example would be to create a single very large file comprising almost the entire volume and then delete that file. The storage subsystem implicitly writes the storage required for every allocation performed by the file system, in this case the allocations comprising the whole volume. After the file is deleted, most of the space allocated by the storage subsystem is no longer required, but that storage space cannot be released implicitly and therefore continues to consume resources. Over time, small allocations and reallocations by applications or the file system can produce the same result.
[008] Thus, thin provisioning in existing data storage systems is constrained by the file system behavior of the operating system. These file systems do not reallocate space that has been freed; instead, new files are written to previously unused space, i.e., new-file write operations. This mode of operation leaves a large amount of space in a given partition that has previously been written but no longer holds data usable by the file system. Because the data storage system has no way of knowing which logical block addresses ("LBAs") are no longer used by the file system, the hierarchical block storage provided by the data storage system accumulates these now-unused blocks over time. This accumulation ultimately forces every point-in-time copy ("PITC") that is taken to reference previously used pages in the page pool, even though that storage is in fact no longer in use.
[009] Because more and more pages are declared to be "in use" when they are in fact not, operations such as copy, replication, and other data movement take longer and consume more storage space (potentially at every tier), negating many of the space advantages of thin provisioning. An example is a 1 GB file being written to a newly allocated volume and then deleted. In the storage subsystem, the 1 GB of pages remains allocated in the active PITC and is carried forward into the next PITC. The pages may be replaced in a later PITC; in existing systems, however, there is no method by which the file system can declare the pages it no longer uses. The result is that 1 GB of pages is consumed in every new copy, even if the volume is nominally empty, because the supposedly empty volume is copied using internal tools.
[010] Therefore, a method of determining when implicitly allocated storage is no longer being used by a computing entity and may be freed for other uses is desirable.
Summary of the invention
[011] The present invention provides a system and method for determining explicitly free data space in a computer data storage system that has implicitly allocated data space, by using information provided by a host computer system that knows which of the allocated space is actually in use at the time of a query. By reducing the total amount of storage required, considerable cost savings can be realized over the lifetime of any given data.
[012] In one embodiment of the invention, a method is provided for determining when implicitly allocated storage is no longer being used by a computing entity and may be freed for other uses. One of the advantages of the present invention is that it reduces the total amount of data storage required, which in turn reduces other resources, such as the bandwidth required to copy data from one entity to another, the additional copies of the stored data, and correspondingly the supporting infrastructure, including the space, the time to deliver and manage storage, and the power and other resources supplied to the storage devices.
[013] As will be realized, embodiments of the invention are capable of modification in various respects without departing from the spirit and scope of the invention. Accordingly, the drawings and the detailed description are to be regarded as illustrative in nature rather than restrictive.
Description of drawings
[014] Fig. 1 is a flow diagram illustrating an exemplary method of data space recovery in accordance with the principles of the present invention.
[015] Fig. 2 illustrates an exemplary mapping of file system units/sectors/clusters to the page pool for explicitly free data space in a computer data storage system in accordance with the principles of the present invention.
Detailed description of the embodiments
[016] Figs. 1 and 2 illustrate a method of determining explicitly free data space in a computer data storage system that has implicitly allocated data space, by using information provided by a host computer system that knows which of the allocated space is actually in use at the time of a query.
[017] A host computer system of the present invention may include one or more computing entities (sometimes called hosts or servers) connected to one or more data storage subsystems by means such as Fibre Channel, SCSI, or other standard storage protocols, with one or more physical storage volumes simulated by or mapped to each data storage subsystem. One embodiment of a data storage subsystem is discussed in the co-pending U.S. Patent Application Serial No. 10/918,329, entitled "Virtual Disk Drive System and Method," filed August 13, 2004, the subject matter of which is incorporated by reference. The host or server includes an operating system ("OS"), a part of which is referred to as the file system ("FS"); the file system has a plurality of units/sectors/clusters, as shown in Fig. 2.
[018] A host or server generally has no way to tell the difference between a conventional storage volume confined to a single physical disk and a simulated/virtual volume. The abstraction between the storage unit sectors seen by the host or server and the storage unit sectors actually used for data storage spread across multiple disks is provided by the data storage subsystem, which uses redundant storage such as RAID or other, non-redundant, methods. The storage subsystem abstracts the storage allocated through the RAID method into units called pages, each page comprising a plurality of sectors. This abstraction allows simplified internal management of the data allocation between virtual volumes and actual disk storage, and is discussed in detail in the co-pending U.S. Patent Application Serial No. 10/918,329, entitled "Virtual Disk Drive System and Method," filed August 13, 2004.
[019] Accordingly, in Fig. 1, the method 100 of determining explicitly free data space in a computer data storage system that has implicitly allocated data space begins at step 102 by identifying the FS allocation units/sectors/clusters. In step 104, the FS units/sectors/clusters are allocated and mapped with OS physical disk units/sectors. In step 106, the list of explicitly unused free areas is conveyed to the storage subsystem. Upon arrival at the storage subsystem, the unused list is adjusted to include only full pages; a page must be entirely unused to be eligible to be freed. In step 108, a controller (not shown) may modify the active PITC, which tracks the changes to the volume during a given period of time. In step 108, the controller determines whether each block in the unused list is in the active point-in-time copy ("PITC") or in a historical PITC, where the active PITC is the storage area or page that is currently in use, and a historical PITC is a storage area or page that has been used and may be freed when the PITC expires. If a block in the unused list is in the active PITC, the controller returns the page to the free list in step 110. Page pool 210 in Fig. 2 illustrates the free list of storage space. Page pool 212 in Fig. 2 illustrates the state after the pages have been returned to the free list.
[020] If a block in the unused list is in a historical PITC, then in step 112 the controller marks the page in the active PITC so that it can be freed when the frozen PITC that owns the page expires (a later PITC may contain new data that overlaps this page once it becomes the PITC owning the marked page, so in any case the page can be freed implicitly); the page is thus freed when the historical PITC expires. The data within a historical PITC is read-only and may not be modified during its lifetime; this includes write I/O to a page of data as well as returning a page to the free list. Once the historical PITC expires, its pages may be returned to the free list. Next, the controller determines whether there is another block in the list. If so, method 100 returns to step 108, and so on; if there are no blocks left in the list, method 100 ends. Page pool 212 in Fig. 2 illustrates the free page list after PITCs B and C have expired out of the system. Pages E and N are freed when PITCs B and C expire out of the system. As long as a PITC exists and provides a valid recovery point, it needs to retain all of its pages.
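The two branches just described (step 110 for pages owned by the active PITC, step 112 for pages owned by a historical PITC) can be sketched together as follows. This is an illustrative reading only: the class names, the expire() hook, and the simplified marking scheme, in which a page owned by a historical PITC is simply released when that PITC expires, are assumptions made for the sketch rather than the controller's actual interfaces.

```python
# Illustrative sketch of steps 108-112: each fully unused page reported by the
# file system is either returned to the free page pool immediately (active PITC)
# or marked so that it is freed when the owning historical PITC expires.
# Class names, the expire() hook, and the marking scheme are assumptions.

class PagePool:
    def __init__(self):
        self.free_list = set()

    def release(self, page_id):
        self.free_list.add(page_id)


class PITC:
    def __init__(self, name, active, pool):
        self.name = name
        self.active = active      # True for the active PITC, False for a historical PITC
        self.pool = pool
        self.pages = set()        # pages this PITC currently owns
        self.marked = set()       # pages to free when this historical PITC expires

    def expire(self):
        """On expiration, a historical PITC releases every page marked for recovery."""
        for page_id in list(self.marked & self.pages):
            self.pages.discard(page_id)
            self.pool.release(page_id)


def recover(unused_pages, pitcs, pool):
    """Steps 108-112: classify each unused page by the PITC that owns it."""
    for page_id in unused_pages:
        for pitc in pitcs:
            if page_id in pitc.pages:
                if pitc.active:
                    pitc.pages.discard(page_id)
                    pool.release(page_id)        # step 110: free immediately
                else:
                    pitc.marked.add(page_id)     # step 112: free on expiration
                break


pool = PagePool()
active_pitc = PITC("A", active=True, pool=pool)
active_pitc.pages = {"A6"}
historical_pitc = PITC("B", active=False, pool=pool)
historical_pitc.pages = {"B1"}

recover(["A6", "B1"], [active_pitc, historical_pitc], pool)
print(pool.free_list)      # {'A6'}: page B1 waits for PITC B to expire
historical_pitc.expire()
print(pool.free_list)      # {'A6', 'B1'}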
[021] In a typical case without the above method 100 of the present invention, page 6 of PITC A, page 1 of PITC B, and pages 1 and 2 of PITC C, as shown in Fig. 2, may have been referenced previously, so they would have to be coalesced forward as PITCs are merged, even though they are implicitly free space that the server or host has no way to report. As shown in Fig. 2, the FS no longer uses the storage areas indicated by FS cluster map 202, i.e., clusters 2, 4, 5, and 6 are no longer in use; they are simply wasted space.
[022] To release or free this space, the FS is asked to identify which of the clusters shown in cluster map 202 are in use and which are not. This identifies clusters 2, 4, 5, and 6 as no longer in use.
[023] Next, the clusters that the FS is not using (2, 4, 5, 6) are mapped to the disks visible to the OS. This provides the mapping of cluster 2 to sectors 3 and 4 of disk 0, cluster 4 to sectors 7 and 8 of disk 0, cluster 5 to sectors 18 and 19 of disk 1, and cluster 6 to sectors 1 and 2 of disk 1. It should be understood that the sector numbers used herein are for purposes of illustration.
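A minimal sketch of this cluster-to-sector resolution, reusing the illustrative cluster and sector numbers above, is given below. The lookup-table representation is an assumption made for the sketch; a real file system would derive the mapping from its own allocation metadata.

```python
# Illustrative only: resolve unused FS clusters to (disk, sector) pairs using
# the example numbers from Fig. 2. A real FS derives this from its own metadata.
CLUSTER_TO_SECTORS = {
    2: [("disk0", 3), ("disk0", 4)],
    4: [("disk0", 7), ("disk0", 8)],
    5: [("disk1", 18), ("disk1", 19)],
    6: [("disk1", 1), ("disk1", 2)],
}

def unused_sectors(unused_clusters):
    """Flatten the unused clusters reported by the FS into disk sectors."""
    return [s for c in unused_clusters for s in CLUSTER_TO_SECTORS[c]]

print(unused_sectors([2, 4, 5, 6]))
# [('disk0', 3), ('disk0', 4), ('disk0', 7), ('disk0', 8),
#  ('disk1', 18), ('disk1', 19), ('disk1', 1), ('disk1', 2)]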
[024] Because the physical disks seen by the OS are, by design, consistent with the simulated/virtual volumes presented by the storage subsystem, there is a one-to-one sector mapping between the OS view 204 of the disks and the storage subsystem volumes 206.
[025] The sector addresses of the sectors identified as unused can now be resolved to the corresponding PITC from which their data is mapped, i.e., PITC A, PITC B, and PITC C at 208. Each PITC page usually contains very many sectors, sometimes thousands; in this example, for purposes of illustration, each page contains two sectors. Thus, sectors 3 and 4 of volume 0 map to page 1 of PITC B, sectors 7 and 8 of volume 0 map to page 6 of PITC A, and so on. At this point, pages that cannot be freed because other parts of the page are still in use can also be resolved. For example, in Fig. 2, sector 19 of volume 1 maps to page 5 of PITC C, which is also still used by sector 3 of volume 1. In this case, page 5 of PITC C is not freed at this point.
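The page-level filtering just described, where a PITC page qualifies for recovery only if every sector it contains is reported unused, can be sketched as follows. The two-sector pages and the dictionary representation are illustrative assumptions matching the Fig. 2 example, not the subsystem's actual data structures.

```python
# Illustrative only: a PITC page may be freed only when every sector it holds
# is in the unused list; partially used pages (e.g. PITC C page 5) are kept.
PAGE_TO_SECTORS = {
    ("PITC B", 1): {("vol0", 3), ("vol0", 4)},
    ("PITC A", 6): {("vol0", 7), ("vol0", 8)},
    ("PITC C", 5): {("vol1", 19), ("vol1", 3)},   # sector 3 is still in use
}

def freeable_pages(unused_sectors):
    """Return pages whose sectors are all contained in the unused-sector set."""
    unused = set(unused_sectors)
    return [page for page, sectors in PAGE_TO_SECTORS.items() if sectors <= unused]

reported_unused = [("vol0", 3), ("vol0", 4), ("vol0", 7), ("vol0", 8), ("vol1", 19)]
print(freeable_pages(reported_unused))
# [('PITC B', 1), ('PITC A', 6)], while PITC C page 5 is kept because vol1 sector 3 is still used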
[026] By using the server information regarding the FS, the PITC pages shown at 208 are marked as no longer in use for future PITCs beyond the space recovery point and will not be coalesced forward, thereby saving considerable storage.
[027] Note that the above example does not illustrate how FS clusters that have never been used map to "no data." It should be understood that, although the method of the present invention identifies and resolves clusters that previously contained data and no longer do (for example, after deletions or moves), steps for identifying and resolving clusters that include some never-used clusters can also be implemented.
[028] In general, by examining the FS, certain identified pages can be removed from later PITCs, returning some pages to the storage page pool in future operations. In the present invention, the FS is free to map whatever allocation units it uses to sectors and physical disks in any way it wishes. Thus, one key to recovering space that is no longer in use is to query the FS to determine which space is actually in use and at which physical locations it resides. With that information, the mapping from FS allocation units to the virtual storage subsystem volumes, and from there to the pages, can be performed. Pages that are not identified as in use can be explicitly determined to be free rather than implicitly assumed to be in use. This information can then be used to optimize space usage in the appropriate PITCs.

Claims (14)

  1. A method of determining, in a data storage subsystem, explicitly free data space that has been implicitly allocated to a host file system, the method comprising:
    querying the host file system to identify file system storage units that are not in use, wherein the file system storage units correspond to implicitly allocated storage space in the data storage subsystem;
    receiving a list of unused file system storage units from the host file system;
    mapping the unused file system storage units in the list of unused file system storage units to corresponding implicitly allocated data space in the data storage subsystem, whereby such data space is freeable but is assumed by the data storage subsystem to be in use by the host file system; and
    explicitly freeing the corresponding implicitly allocated data space.
  2. The method of claim 1, wherein the step of explicitly freeing the corresponding implicitly allocated data space is based on a determination of whether the corresponding implicitly allocated data space is in an active point-in-time copy page or in a historical point-in-time copy page.
  3. The method of claim 2, wherein corresponding implicitly allocated data space in an active point-in-time copy page is released to a free page pool, thereby becoming explicitly free.
  4. The method of claim 2, wherein corresponding implicitly allocated data space in a historical point-in-time copy page is marked to be released to the free page pool upon expiration of the historical point-in-time copy page, thereby becoming explicitly free once the historical point-in-time copy page expires.
  5. The method of claim 1, wherein the host file system is connected to the data storage subsystem by Fibre Channel.
  6. The method of claim 1, wherein the host file system is connected to the data storage subsystem by SCSI.
  7. The method of claim 1, wherein the list of unused file system storage units is adjusted to include only storage units corresponding to full pages in the data storage subsystem.
  8. The method of claim 1, wherein the data storage subsystem utilizes thin provisioning.
  9. An apparatus for determining, in a data storage subsystem, explicitly free data space that has been implicitly allocated to a host file system, the apparatus comprising:
    means for querying the host file system to identify file system storage units that are not in use, wherein the file system storage units correspond to implicitly allocated storage space in the data storage subsystem;
    means for receiving a list of unused file system storage units from the host file system;
    means for mapping the unused file system storage units in the list of unused file system storage units to corresponding implicitly allocated data space in the data storage subsystem, whereby such data space is freeable but is assumed by the data storage subsystem to be in use by the host file system; and
    means for explicitly freeing the corresponding portion of physical storage space in the data storage subsystem.
  10. The apparatus of claim 9, wherein the means for explicitly freeing the corresponding portion of physical storage space operates based on a determination of whether the corresponding portion of physical storage space is in an active point-in-time copy page or in a historical point-in-time copy page.
  11. The apparatus of claim 10, wherein the means for explicitly freeing the corresponding portion of physical storage space is configured to release the corresponding portion of physical storage space in an active point-in-time copy page to a free page pool, thereby becoming explicitly free.
  12. The apparatus of claim 10, wherein the means for explicitly freeing the corresponding portion of physical storage space is configured to mark the corresponding portion of physical storage space in a historical point-in-time copy page to be released to the free page pool upon expiration of the historical point-in-time copy page, thereby becoming explicitly free once the historical point-in-time copy page expires.
  13. A method of releasing implicitly allocated data space on a data storage system, the method comprising:
    identifying implicitly allocated data space in the data storage system;
    querying a file system operably connected to the data storage system as to whether any implicitly allocated data space is in use by the file system;
    receiving a list of data space not in use by the file system;
    mapping the data space from the list received from the file system to physical storage space; and
    releasing the data space from the list received from the file system into a page pool list, thereby converting unused implicitly allocated data space into explicitly free data storage space available for allocation by the data storage system.
  14. The method of claim 13, wherein the list of data space not in use by the file system is adjusted to include only full pages.
CN2008801039980A 2007-06-22 2008-06-23 Data storage space recovery system and method Active CN101878471B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US11/767,049 US8601035B2 (en) 2007-06-22 2007-06-22 Data storage space recovery system and method
US11/767049 2007-06-22
PCT/US2008/067905 WO2009002934A1 (en) 2007-06-22 2008-06-23 Data storage space recovery system and method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201310316822.6A Division CN103500164A (en) 2007-06-22 2008-06-23 Data storage space recovery system and method

Publications (2)

Publication Number Publication Date
CN101878471A CN101878471A (en) 2010-11-03
CN101878471B true CN101878471B (en) 2013-08-28

Family

ID=40137617

Family Applications (2)

Application Number Title Priority Date Filing Date
CN2008801039980A Active CN101878471B (en) 2007-06-22 2008-06-23 Data storage space recovery system and method
CN201310316822.6A Pending CN103500164A (en) 2007-06-22 2008-06-23 Data storage space recovery system and method

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201310316822.6A Pending CN103500164A (en) 2007-06-22 2008-06-23 Data storage space recovery system and method

Country Status (6)

Country Link
US (2) US8601035B2 (en)
EP (2) EP3361384A1 (en)
JP (2) JP2010531029A (en)
CN (2) CN101878471B (en)
HK (1) HK1150250A1 (en)
WO (1) WO2009002934A1 (en)

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9311420B2 (en) * 2007-06-20 2016-04-12 International Business Machines Corporation Customizing web 2.0 application behavior based on relationships between a content creator and a content requester
US8694563B1 (en) * 2009-04-18 2014-04-08 Hewlett-Packard Development Company, L.P. Space recovery for thin-provisioned storage volumes
US20100306253A1 (en) * 2009-05-28 2010-12-02 Hewlett-Packard Development Company, L.P. Tiered Managed Storage Services
US8639876B2 (en) * 2010-01-27 2014-01-28 International Business Machines Corporation Extent allocation in thinly provisioned storage environment
US9965224B2 (en) * 2010-02-24 2018-05-08 Veritas Technologies Llc Systems and methods for enabling replication targets to reclaim unused storage space on thin-provisioned storage systems
US8380961B2 (en) 2010-08-18 2013-02-19 International Business Machines Corporation Methods and systems for formatting storage volumes
US8392653B2 (en) 2010-08-18 2013-03-05 International Business Machines Corporation Methods and systems for releasing and re-allocating storage segments in a storage volume
US9411517B2 (en) * 2010-08-30 2016-08-09 Vmware, Inc. System software interfaces for space-optimized block devices
CN101976223B (en) * 2010-10-09 2012-12-12 成都市华为赛门铁克科技有限公司 Thin provisioning method and device
US9348819B1 (en) * 2011-12-31 2016-05-24 Parallels IP Holdings GmbH Method and system for file data management in virtual environment
KR101791855B1 (en) * 2016-03-24 2017-10-31 주식회사 디에이아이오 Storage device and method of reclaiming space of the same
US10761743B1 (en) 2017-07-17 2020-09-01 EMC IP Holding Company LLC Establishing data reliability groups within a geographically distributed data storage environment
US10817388B1 (en) 2017-07-21 2020-10-27 EMC IP Holding Company LLC Recovery of tree data in a geographically distributed environment
US10684780B1 (en) * 2017-07-27 2020-06-16 EMC IP Holding Company LLC Time sensitive data convolution and de-convolution
US10880040B1 (en) 2017-10-23 2020-12-29 EMC IP Holding Company LLC Scale-out distributed erasure coding
US10528260B1 (en) 2017-10-26 2020-01-07 EMC IP Holding Company LLC Opportunistic ‘XOR’ of data for geographically diverse storage
US10382554B1 (en) 2018-01-04 2019-08-13 Emc Corporation Handling deletes with distributed erasure coding
US10817374B2 (en) 2018-04-12 2020-10-27 EMC IP Holding Company LLC Meta chunks
US10579297B2 (en) 2018-04-27 2020-03-03 EMC IP Holding Company LLC Scaling-in for geographically diverse storage
US10936196B2 (en) 2018-06-15 2021-03-02 EMC IP Holding Company LLC Data convolution for geographically diverse storage
US11023130B2 (en) 2018-06-15 2021-06-01 EMC IP Holding Company LLC Deleting data in a geographically diverse storage construct
US10719250B2 (en) 2018-06-29 2020-07-21 EMC IP Holding Company LLC System and method for combining erasure-coded protection sets
US11436203B2 (en) 2018-11-02 2022-09-06 EMC IP Holding Company LLC Scaling out geographically diverse storage
US10901635B2 (en) 2018-12-04 2021-01-26 EMC IP Holding Company LLC Mapped redundant array of independent nodes for data storage with high performance using logical columns of the nodes with different widths and different positioning patterns
US10931777B2 (en) 2018-12-20 2021-02-23 EMC IP Holding Company LLC Network efficient geographically diverse data storage system employing degraded chunks
US11119683B2 (en) 2018-12-20 2021-09-14 EMC IP Holding Company LLC Logical compaction of a degraded chunk in a geographically diverse data storage system
US10892782B2 (en) 2018-12-21 2021-01-12 EMC IP Holding Company LLC Flexible system and method for combining erasure-coded protection sets
CN109871209A (en) * 2018-12-30 2019-06-11 贝壳技术有限公司 Original list state recovery method and device
US10768840B2 (en) 2019-01-04 2020-09-08 EMC IP Holding Company LLC Updating protection sets in a geographically distributed storage environment
US11023331B2 (en) 2019-01-04 2021-06-01 EMC IP Holding Company LLC Fast recovery of data in a geographically distributed storage environment
US10942827B2 (en) 2019-01-22 2021-03-09 EMC IP Holding Company LLC Replication of data in a geographically distributed storage environment
US10866766B2 (en) 2019-01-29 2020-12-15 EMC IP Holding Company LLC Affinity sensitive data convolution for data storage systems
US10846003B2 (en) 2019-01-29 2020-11-24 EMC IP Holding Company LLC Doubly mapped redundant array of independent nodes for data storage
US10942825B2 (en) 2019-01-29 2021-03-09 EMC IP Holding Company LLC Mitigating real node failure in a mapped redundant array of independent nodes
US10936239B2 (en) 2019-01-29 2021-03-02 EMC IP Holding Company LLC Cluster contraction of a mapped redundant array of independent nodes
US11029865B2 (en) 2019-04-03 2021-06-08 EMC IP Holding Company LLC Affinity sensitive storage of data corresponding to a mapped redundant array of independent nodes
US10944826B2 (en) 2019-04-03 2021-03-09 EMC IP Holding Company LLC Selective instantiation of a storage service for a mapped redundant array of independent nodes
US11113146B2 (en) 2019-04-30 2021-09-07 EMC IP Holding Company LLC Chunk segment recovery via hierarchical erasure coding in a geographically diverse data storage system
US11119686B2 (en) 2019-04-30 2021-09-14 EMC IP Holding Company LLC Preservation of data during scaling of a geographically diverse data storage system
US11121727B2 (en) 2019-04-30 2021-09-14 EMC IP Holding Company LLC Adaptive data storing for data storage systems employing erasure coding
US11748004B2 (en) 2019-05-03 2023-09-05 EMC IP Holding Company LLC Data replication using active and passive data storage modes
US11209996B2 (en) 2019-07-15 2021-12-28 EMC IP Holding Company LLC Mapped cluster stretching for increasing workload in a data storage system
US11449399B2 (en) 2019-07-30 2022-09-20 EMC IP Holding Company LLC Mitigating real node failure of a doubly mapped redundant array of independent nodes
US11023145B2 (en) 2019-07-30 2021-06-01 EMC IP Holding Company LLC Hybrid mapped clusters for data storage
US11228322B2 (en) 2019-09-13 2022-01-18 EMC IP Holding Company LLC Rebalancing in a geographically diverse storage system employing erasure coding
US11449248B2 (en) 2019-09-26 2022-09-20 EMC IP Holding Company LLC Mapped redundant array of independent data storage regions
US11435910B2 (en) 2019-10-31 2022-09-06 EMC IP Holding Company LLC Heterogeneous mapped redundant array of independent nodes for data storage
US11119690B2 (en) 2019-10-31 2021-09-14 EMC IP Holding Company LLC Consolidation of protection sets in a geographically diverse data storage environment
US11288139B2 (en) 2019-10-31 2022-03-29 EMC IP Holding Company LLC Two-step recovery employing erasure coding in a geographically diverse data storage system
US11435957B2 (en) 2019-11-27 2022-09-06 EMC IP Holding Company LLC Selective instantiation of a storage service for a doubly mapped redundant array of independent nodes
US11144220B2 (en) 2019-12-24 2021-10-12 EMC IP Holding Company LLC Affinity sensitive storage of data corresponding to a doubly mapped redundant array of independent nodes
US11231860B2 (en) 2020-01-17 2022-01-25 EMC IP Holding Company LLC Doubly mapped redundant array of independent nodes for data storage with high performance
US11507308B2 (en) 2020-03-30 2022-11-22 EMC IP Holding Company LLC Disk access event control for mapped nodes supported by a real cluster storage system
US11288229B2 (en) 2020-05-29 2022-03-29 EMC IP Holding Company LLC Verifiable intra-cluster migration for a chunk storage system
US11693983B2 (en) 2020-10-28 2023-07-04 EMC IP Holding Company LLC Data protection via commutative erasure coding in a geographically diverse data storage system
US11847141B2 (en) 2021-01-19 2023-12-19 EMC IP Holding Company LLC Mapped redundant array of independent nodes employing mapped reliability groups for data storage
US11625174B2 (en) 2021-01-20 2023-04-11 EMC IP Holding Company LLC Parity allocation for a virtual redundant array of independent disks
US11354191B1 (en) 2021-05-28 2022-06-07 EMC IP Holding Company LLC Erasure coding in a large geographically diverse data storage system
US11449234B1 (en) 2021-05-28 2022-09-20 EMC IP Holding Company LLC Efficient data access operations via a mapping layer instance for a doubly mapped redundant array of independent nodes

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1849577A (en) * 2003-08-14 2006-10-18 Compellent Technologies, Inc. Virtual disk drive system and method

Family Cites Families (87)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE36462E (en) * 1986-01-16 1999-12-21 International Business Machines Corporation Method to control paging subsystem processing in virtual memory data processing system during execution of critical code sections
US5287496A (en) 1991-02-25 1994-02-15 International Business Machines Corporation Dynamic, finite versioning for concurrent transaction and query processing
US5278838A (en) * 1991-06-18 1994-01-11 Ibm Corp. Recovery from errors in a redundant array of disk drives
US5371882A (en) * 1992-01-14 1994-12-06 Storage Technology Corporation Spare disk drive replacement scheduling system for a disk drive array data storage subsystem
US5331646A (en) * 1992-05-08 1994-07-19 Compaq Computer Corporation Error correcting code technique for improving reliablility of a disk array
US5963962A (en) * 1995-05-31 1999-10-05 Network Appliance, Inc. Write anywhere file-system layout
JP2735479B2 (en) 1993-12-29 1998-04-02 株式会社東芝 Memory snapshot method and information processing apparatus having memory snapshot function
US5572661A (en) * 1994-10-05 1996-11-05 Hewlett-Packard Company Methods and system for detecting data loss in a hierarchic data storage system
JPH0944381A (en) 1995-07-31 1997-02-14 Toshiba Corp Method and device for data storage
US5809224A (en) * 1995-10-13 1998-09-15 Compaq Computer Corporation On-line disk array reconfiguration
KR100208801B1 (en) * 1996-09-16 1999-07-15 윤종용 Storage device system for improving data input/output perfomance and data recovery information cache method
KR100275900B1 (en) * 1996-09-21 2000-12-15 윤종용 Method for implement divideo parity spare disk in raid sub-system
US5950218A (en) * 1996-11-04 1999-09-07 Storage Technology Corporation Method and system for storage and retrieval of data on a tape medium
US6275897B1 (en) * 1997-06-17 2001-08-14 Emc Corporation Remote cache utilization for mirrored mass storage subsystem
US6192444B1 (en) * 1998-01-05 2001-02-20 International Business Machines Corporation Method and system for providing additional addressable functional space on a disk for use with a virtual data storage subsystem
US6078932A (en) * 1998-01-13 2000-06-20 International Business Machines Corporation Point-in-time backup utilizing multiple copy technologies
US6212531B1 (en) * 1998-01-13 2001-04-03 International Business Machines Corporation Method for implementing point-in-time copy using a snapshot function
US6421711B1 (en) 1998-06-29 2002-07-16 Emc Corporation Virtual ports for data transferring of a data storage system
US6353878B1 (en) * 1998-08-13 2002-03-05 Emc Corporation Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem
US6366987B1 (en) * 1998-08-13 2002-04-02 Emc Corporation Computer data storage physical backup and logical restore
US6269431B1 (en) * 1998-08-13 2001-07-31 Emc Corporation Virtual storage and block level direct access of secondary storage for recovery of backup data
DE59902293D1 (en) 1998-09-01 2002-09-12 Siemens Ag METHOD FOR STORING DATA ON A STORAGE MEDIUM WITH LIMITED STORAGE CAPACITY
US6311251B1 (en) * 1998-11-23 2001-10-30 Storage Technology Corporation System for optimizing data storage in a RAID system
US6611897B2 (en) * 1999-03-22 2003-08-26 Hitachi, Ltd. Method and apparatus for implementing redundancy on data stored in a disk array subsystem based on use frequency or importance of the data
US6415296B1 (en) * 1999-03-31 2002-07-02 International Business Machines Corporation Method and system for more efficiently providing a copy in a raid data storage system
US7000069B2 (en) 1999-04-05 2006-02-14 Hewlett-Packard Development Company, L.P. Apparatus and method for providing very large virtual storage volumes using redundant arrays of disks
US6904599B1 (en) * 1999-11-29 2005-06-07 Microsoft Corporation Storage management system having abstracted volume providers
US6560615B1 (en) * 1999-12-17 2003-05-06 Novell, Inc. Method and apparatus for implementing a highly efficient, robust modified files list (MFL) for a storage system volume
US6839827B1 (en) * 2000-01-18 2005-01-04 International Business Machines Corporation Method, system, program, and data structures for mapping logical blocks to physical blocks
US6779095B2 (en) * 2000-06-19 2004-08-17 Storage Technology Corporation Apparatus and method for instant copy of data using pointers to new and original data in a data location
US7072916B1 (en) * 2000-08-18 2006-07-04 Network Appliance, Inc. Instant snapshot
US6618794B1 (en) * 2000-10-31 2003-09-09 Hewlett-Packard Development Company, L.P. System for generating a point-in-time copy of data in a data storage system
US6799258B1 (en) * 2001-01-10 2004-09-28 Datacore Software Corporation Methods and apparatus for point-in-time volumes
US6857059B2 (en) * 2001-01-11 2005-02-15 Yottayotta, Inc. Storage virtualization system and methods
US6990667B2 (en) * 2001-01-29 2006-01-24 Adaptec, Inc. Server-independent object positioning for load balancing drives and servers
US7058788B2 (en) 2001-02-23 2006-06-06 Falconstor Software, Inc. Dynamic allocation of computer memory
US6795895B2 (en) * 2001-03-07 2004-09-21 Canopy Group Dual axis RAID systems for enhanced bandwidth and reliability
US6510500B2 (en) * 2001-03-09 2003-01-21 International Business Machines Corporation System and method for minimizing message transactions for fault-tolerant snapshots in a dual-controller environment
US6915241B2 (en) 2001-04-20 2005-07-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Method for segmentation and identification of nonstationary time series
JP4175788B2 (en) * 2001-07-05 2008-11-05 株式会社日立製作所 Volume controller
US6948038B2 (en) 2001-07-24 2005-09-20 Microsoft Corporation System and method for backing up and restoring data
KR100392382B1 (en) * 2001-07-27 2003-07-23 한국전자통신연구원 Method of The Logical Volume Manager supporting Dynamic Online resizing and Software RAID
US6636778B2 (en) * 2001-09-10 2003-10-21 International Business Machines Corporation Allocation of data storage drives of an automated data storage library
US6823436B2 (en) * 2001-10-02 2004-11-23 International Business Machines Corporation System for conserving metadata about data snapshots
US6877109B2 (en) * 2001-11-19 2005-04-05 Lsi Logic Corporation Method for the acceleration and simplification of file system logging techniques using storage device snapshots
US7173929B1 (en) * 2001-12-10 2007-02-06 Incipient, Inc. Fast path for performing data operations
IL147073A0 (en) * 2001-12-10 2002-08-14 Monosphere Ltd Method for managing the storage resources attached to a data network
US7237075B2 (en) * 2002-01-22 2007-06-26 Columbia Data Products, Inc. Persistent snapshot methods
US20030220948A1 (en) * 2002-01-22 2003-11-27 Columbia Data Products, Inc. Managing snapshot/backup collections in finite data storage
US6829617B2 (en) * 2002-02-15 2004-12-07 International Business Machines Corporation Providing a snapshot of a subset of a file system
US7254813B2 (en) * 2002-03-21 2007-08-07 Network Appliance, Inc. Method and apparatus for resource allocation in a raid system
US7085956B2 (en) 2002-04-29 2006-08-01 International Business Machines Corporation System and method for concurrent logical device swapping
US6732171B2 (en) * 2002-05-31 2004-05-04 Lefthand Networks, Inc. Distributed network storage system with virtualization
US6938123B2 (en) 2002-07-19 2005-08-30 Storage Technology Corporation System and method for raid striping
US6957362B2 (en) * 2002-08-06 2005-10-18 Emc Corporation Instantaneous restoration of a production copy from a snapshot copy in a data storage system
US7107385B2 (en) * 2002-08-09 2006-09-12 Network Appliance, Inc. Storage virtualization by layering virtual disk objects on a file system
US7107417B2 (en) 2002-08-29 2006-09-12 International Business Machines Corporation System, method and apparatus for logical volume duplexing in a virtual tape system
US7191304B1 (en) 2002-09-06 2007-03-13 3Pardata, Inc. Efficient and reliable virtual volume mapping
US6857057B2 (en) * 2002-10-03 2005-02-15 Hewlett-Packard Development Company, L.P. Virtual storage systems and virtual storage system operational methods
US7089395B2 (en) 2002-10-03 2006-08-08 Hewlett-Packard Development Company, L.P. Computer systems, virtual storage systems and virtual storage system operational methods
US6952794B2 (en) 2002-10-10 2005-10-04 Ching-Hung Lu Method, system and apparatus for scanning newly added disk drives and automatically updating RAID configuration and rebuilding RAID data
US6981114B1 (en) 2002-10-16 2005-12-27 Veritas Operating Corporation Snapshot reconstruction from an existing snapshot and one or more modification logs
KR100439675B1 (en) 2002-10-24 2004-07-14 한국전자통신연구원 An efficient snapshot technique for shated large storage
US6957294B1 (en) 2002-11-15 2005-10-18 Unisys Corporation Disk volume virtualization block-level caching
US7284016B2 (en) * 2002-12-03 2007-10-16 Emc Corporation Client-server protocol for directory access of snapshot file systems in a storage system
US7263582B2 (en) * 2003-01-07 2007-08-28 Dell Products L.P. System and method for raid configuration
JP4283004B2 (en) * 2003-02-04 2009-06-24 株式会社日立製作所 Disk control device and control method of disk control device
US7231544B2 (en) * 2003-02-27 2007-06-12 Hewlett-Packard Development Company, L.P. Restoring data from point-in-time representations of the data
US7111147B1 (en) 2003-03-21 2006-09-19 Network Appliance, Inc. Location-independent RAID group virtual block management
JP2004348193A (en) 2003-05-20 2004-12-09 Hitachi Ltd Information processing system and its backup method
US6959313B2 (en) * 2003-07-08 2005-10-25 Pillar Data Systems, Inc. Snapshots of file systems in data storage systems
US7379954B2 (en) * 2003-07-08 2008-05-27 Pillar Data Systems, Inc. Management of file system snapshots
US20050010731A1 (en) * 2003-07-08 2005-01-13 Zalewski Stephen H. Method and apparatus for protecting data against any category of disruptions
JP4321705B2 (en) 2003-07-29 2009-08-26 株式会社日立製作所 Apparatus and storage system for controlling acquisition of snapshot
EP1668486A2 (en) 2003-08-14 2006-06-14 Compellent Technologies Virtual disk drive system and method
DE10348500B4 (en) * 2003-10-18 2009-07-30 Inos Automationssoftware Gmbh Method and device for detecting a gap dimension and / or an offset between a flap of a vehicle and the rest of the vehicle body
US7133884B1 (en) 2003-11-26 2006-11-07 Bmc Software, Inc. Unobtrusive point-in-time consistent copies
JP4681249B2 (en) 2004-04-09 2011-05-11 株式会社日立製作所 Disk array device
US7409518B2 (en) * 2004-05-21 2008-08-05 International Business Machines Corporation Method for improving disk space allocation
US7603532B2 (en) * 2004-10-15 2009-10-13 Netapp, Inc. System and method for reclaiming unused space from a thinly provisioned data container
US7873782B2 (en) * 2004-11-05 2011-01-18 Data Robotics, Inc. Filesystem-aware block storage system, apparatus, and method
JP4749112B2 (en) 2005-10-07 2011-08-17 株式会社日立製作所 Storage control system and method
US8095641B2 (en) 2005-10-27 2012-01-10 International Business Machines Corporation Method and system for virtualized health monitoring of resources
JP4694350B2 (en) 2005-11-08 2011-06-08 株式会社日立製作所 Managing the number of disk groups that can be started in the storage device
US7676514B2 (en) * 2006-05-08 2010-03-09 Emc Corporation Distributed maintenance of snapshot copies by a primary processor managing metadata and a secondary processor providing read-write access to a production dataset
US7653832B2 (en) * 2006-05-08 2010-01-26 Emc Corporation Storage array virtualization using a storage block mapping protocol client and server
US7702662B2 (en) * 2007-05-16 2010-04-20 International Business Machines Corporation Method and system for handling reallocated blocks in a file system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1849577A (en) * 2003-08-14 2006-10-18 Compellent Technologies, Inc. Virtual disk drive system and method

Also Published As

Publication number Publication date
CN103500164A (en) 2014-01-08
JP2010531029A (en) 2010-09-16
EP3361384A1 (en) 2018-08-15
WO2009002934A1 (en) 2008-12-31
US20080320061A1 (en) 2008-12-25
HK1150250A1 (en) 2011-11-11
EP2160684A1 (en) 2010-03-10
JP5608251B2 (en) 2014-10-15
US8601035B2 (en) 2013-12-03
EP2160684A4 (en) 2011-09-07
US20140089628A1 (en) 2014-03-27
CN101878471A (en) 2010-11-03
JP2013080527A (en) 2013-05-02
US9251049B2 (en) 2016-02-02

Similar Documents

Publication Publication Date Title
CN101878471B (en) Data storage space recovery system and method
US9021295B2 (en) Virtual disk drive system and method
US7873600B2 (en) Storage control device to backup data stored in virtual volume
US7840657B2 (en) Method and apparatus for power-managing storage devices in a storage pool
JP4961319B2 (en) A storage system that dynamically allocates real areas to virtual areas in virtual volumes
US20120124285A1 (en) Virtual disk drive system and method with cloud-based storage media
CN101872319A (en) Storage system condition indicator and using method thereof
CN101095115A (en) Storage system condition indicator and method
CN101566930B (en) Virtual disk drive system and method
CN101477446B (en) Disk array system and its logical resource processing method in degradation or reconstruction state
Burger et al. Accelerate with IBM storage: DS8880/DS8880f thin provisioning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1150250

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1150250

Country of ref document: HK

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160505

Address after: Texas, USA

Patentee after: DELL International Ltd

Address before: Minnesota, USA

Patentee before: Compellent Technologies