CN100419664C - Incremental backup operations in storage networks - Google Patents

Incremental backup operations in storage networks

Info

Publication number
CN100419664C
CN100419664C (application CN200510118887A, filed as CN 200510118887)
Authority
CN
China
Application number
CN 200510118887
Other languages
Chinese (zh)
Other versions
CN1770088A (en)
Inventor
A·达尔曼
L·纳尔逊
R·丹尼尔斯
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority to US 10/979,395 (published as US 2006/0106893 A1)
Application filed by Hewlett-Packard Development Company, L.P.
Publication of CN1770088A
Application granted
Publication of CN100419664C


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1415Saving, restoring, recovering or retrying at system level
    • G06F11/1435Saving, restoring, recovering or retrying at system level using file system or storage system metadata
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1448Management of the data involved in backup or backup restore
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1448Management of the data involved in backup or backup restore
    • G06F11/1451Management of the data involved in backup or backup restore by selection of backup contents
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/84Using snapshots, i.e. a logical point-in-time copy of the data

Abstract

Exemplary storage network architectures, data architectures, and methods for performing backup operations in a storage network are described. One exemplary method may be implemented in a processor in a storage network. The method comprises: generating a snapshot clone of a source volume at a first point in time; concurrently activating a first snapshot difference file logically linked to the snapshot clone; recording, into the first snapshot difference file, I/O operations that change a data set in the source volume; closing the first snapshot difference file; generating a backup copy of the snapshot clone at a second point in time, after the first point in time; and generating a backup copy of the first snapshot difference file at a third point in time, after the second point in time.
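As a rough, non-authoritative sketch of the claimed sequence, the three points in time might be modeled as follows. All class and method names here are invented for illustration; the patent specifies no code, and real implementations operate on disk blocks rather than key/value pairs.

```python
class SourceVolume:
    """Toy stand-in for a source volume's data set."""
    def __init__(self, data):
        self.data = dict(data)

class BackupManager:
    def __init__(self, volume):
        self.volume = volume
        self.clone = None     # point-in-time snapshot clone (first point in time)
        self.diff = None      # active snapshot difference file
        self.backups = []     # backup copies made at later points in time

    def start_cycle(self):
        # First point in time: generate the snapshot clone and, concurrently,
        # activate a difference file logically linked to it.
        self.clone = dict(self.volume.data)
        self.diff = {}

    def write(self, key, value):
        # I/O operations that change the source volume's data set are
        # also recorded in the active snapshot difference file.
        self.volume.data[key] = value
        if self.diff is not None:
            self.diff[key] = value

    def close_diff(self):
        # Close the first snapshot difference file.
        closed, self.diff = self.diff, None
        return closed

    def backup_clone(self):
        # Second point in time (after the first): back up the snapshot clone.
        self.backups.append(("clone", dict(self.clone)))

    def backup_diff(self, closed_diff):
        # Third point in time (after the second): back up the closed
        # snapshot difference file.
        self.backups.append(("diff", dict(closed_diff)))
```

Note that the clone captures the volume as of the first point in time, while later writes land only in the volume and the difference file, so the clone backup plus the difference-file backup together describe the volume's state.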

Description

Method for Performing Backup Operations in a Storage Network

TECHNICAL FIELD

The described subject matter relates to electronic computing, and more particularly to incremental backup operations in storage networks.

BACKGROUND

The ability to duplicate and store the contents of a storage device is an important feature of a storage system. Data may be stored in parallel to guard against the failure of a single storage device or medium. When a first storage device or medium fails, the system may then retrieve a copy of the data contained in a second storage device or medium. The ability to duplicate and store the contents of a storage device also facilitates the creation of a fixed record of the contents at the time of duplication. This feature allows users to recover a prior version of data that was inadvertently edited or erased.

There are space and processing costs associated with duplicating and storing the contents of a storage device. For example, some storage devices cannot accept input/output (I/O) operations while their contents are being duplicated. In addition, the storage space used to hold the copy is unavailable for other storage needs.

Storage systems and storage software products can provide ways to make point-in-time copies of disk volumes. In some of these products, the copy can be made very quickly, without significantly disrupting applications that use the volume. In other products, the copy can be made space-efficient by sharing storage rather than copying all of the volume's data. However, known methods for duplicating data files are subject to limitations.
Some known disk duplication methods do not provide rapid duplication. Other known disk duplication methods are not space-efficient. Still other known disk duplication methods provide snapshots that are both fast and space-efficient, but do not perform such operations in a scalable, distributed, table-driven virtual storage system. Accordingly, there remains a need for improved copy operations in storage devices.

SUMMARY

In one exemplary implementation, a computing method may be implemented in a processor in a storage network. The method comprises: generating a snapshot clone of a source volume at a first point in time; concurrently activating a first snapshot difference file logically linked to the snapshot clone; recording, into the first snapshot difference file, I/O operations that change a data set in the source volume; closing the first snapshot difference file; generating a backup copy of the snapshot clone at a second point in time, after the first point in time; and generating a backup copy of the first snapshot difference file at a third point in time, after the second point in time.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a schematic illustration of an exemplary implementation of a networked computing system that utilizes a storage network.
Fig. 2 is a schematic illustration of an exemplary implementation of a storage network.
Fig. 3 is a schematic illustration of an exemplary implementation of a computing device that can be utilized to implement a host.
Fig. 4 is a schematic illustration of an exemplary implementation of a storage cell.
Fig. 5 illustrates an exemplary memory representation of a LUN.
Fig. 6 is a schematic illustration of data allocation in a virtualized storage system.
Fig. 7 is a schematic illustration of an exemplary data architecture for implementing snapshot difference files in a storage network.
Fig. 8 is a schematic illustration of an exemplary file structure for creating and using snapshot difference files in a storage network.
Figs. 9a-9b are schematic illustrations of memory allocation maps of a snapshot difference file.
Fig. 10 is a flowchart illustrating operations in an exemplary method for creating a snapshot difference file.
Fig. 11 is a flowchart illustrating operations in an exemplary method for executing read operations in an environment that utilizes one or more snapshot difference files.
Fig. 12 is a flowchart illustrating operations in an exemplary method for executing write operations in an environment that utilizes one or more snapshot difference files.
Fig. 13 is a flowchart illustrating operations in an exemplary method for merging a snapshot difference file into a logical disk.
Fig. 14 is a flowchart illustrating operations in an exemplary method for utilizing snapshot difference files in a restore operation.
Fig. 15 is a flowchart illustrating operations in an exemplary implementation of a method for automatically managing backup operations.

DETAILED DESCRIPTION

Described herein are exemplary storage network architectures, data architectures, and methods for creating and using difference files in a storage network. The methods described herein may be embodied as logic instructions on a computer-readable medium.
When executed on a processor, the logic instructions cause a general-purpose computing device to be programmed as a special-purpose machine that implements the described methods. When configured by the logic instructions to execute the methods described herein, the processor constitutes structure for performing the described methods.

Exemplary Network Architecture

The subject matter described herein may be implemented in a storage architecture that provides virtualized data storage at a system level, such that virtualization is implemented within a SAN. In the implementations described herein, computing systems that utilize storage are referred to as hosts. In a typical implementation, a host is any computing system that consumes data storage resource capacity on its own behalf, or on behalf of systems coupled to the host. For example, a host may be a supercomputer processing large databases, a transaction processing server maintaining transaction records, and the like. Alternatively, a host may be a file server on a local area network (LAN) or wide area network (WAN) that provides storage services for an enterprise. In a direct-attached storage solution, such a host may include one or more disk controllers or RAID controllers configured to manage multiple directly attached disk drives. In a SAN, by contrast, the host connects to the SAN via a high-speed connection technology, such as, in a particular example, a Fibre Channel (FC) switching fabric.
A virtualized SAN architecture comprises a group of storage cells, where each storage cell comprises a pool of storage devices called a disk group. Each storage cell comprises parallel storage controllers coupled to the disk group. The storage controllers are coupled to the storage devices using a Fibre Channel arbitrated loop connection, or through a network such as a Fibre Channel switching fabric. The storage controllers may also be coupled to each other through point-to-point connections, enabling them to cooperatively manage the presentation of storage capacity to computers that use the storage capacity.

The network architecture described herein represents a distributed computing environment, such as an enterprise computing system using a private SAN. However, the network architecture may be readily scaled upwardly or downwardly to meet the needs of a particular application.

Fig. 1 is a schematic illustration of an exemplary implementation of a networked computing system 100 that utilizes a storage network. In one exemplary implementation, the storage pool 110 may be implemented as a virtualized storage pool, for example as described in published U.S. Patent Application Publication No. 2003/0079102 to Lubbers et al., which disclosure is incorporated herein by reference in its entirety. A plurality of logical disks (also called logical units or LUNs) 112a, 112b may be allocated within the storage pool 110.
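As a loose illustration of allocating LUNs out of a shared pool (the class, block granularity, and bookkeeping here are invented for this sketch and are not details from the patent):

```python
class StoragePool:
    """Toy model of a storage pool from which LUNs with contiguous
    logical address spaces are allocated (illustrative only)."""
    def __init__(self, capacity_blocks):
        self.capacity_blocks = capacity_blocks
        self.allocated = 0
        self.luns = {}

    def allocate_lun(self, lun_id, size_blocks):
        if self.allocated + size_blocks > self.capacity_blocks:
            raise ValueError("pool exhausted")
        # Each LUN presents logical block addresses 0..size_blocks-1,
        # which hosts address by mapping requests to the LUN's unique ID.
        self.luns[lun_id] = {"size": size_blocks}
        self.allocated += size_blocks
        return lun_id
```

Many LUNs can be carved from one pool and shared among hosts, which is the flexibility the virtualized architecture described above aims for.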
Each LUN 112a, 112b comprises a contiguous range of logical addresses that can be addressed by host devices 120, 122, 124 and 128 by mapping requests from the connection protocol used by the host device to the uniquely identified LUN 112a, 112b. A host, such as server 128, may provide services to other computing or data processing systems or devices. For example, a client computer 126 may access the storage pool 110 via a host such as server 128. The server 128 may provide file services to the client 126, and may provide other services such as transaction processing services, e-mail services, and the like. Hence, the client device 126 may or may not directly use the storage consumed by the host 128. Devices such as wireless device 120, and computers 122, 124, which also may serve as hosts, may logically couple directly to the LUNs 112a, 112b. Hosts 120-128 may couple to multiple LUNs 112a, 112b, and LUNs 112a, 112b may be shared among multiple hosts.

Each of the devices shown in Fig. 1 may include memory, mass storage, and a degree of data processing capability sufficient to manage a network connection.

LUNs such as LUNs 112a, 112b comprise one or more redundant stores (RStores), which are the fundamental units of reliable storage. An RStore comprises an ordered set of physical storage segments (PSEGs) with associated redundancy properties, and is contained entirely within a single redundant store set (RSS). By analogy to conventional storage systems, a PSEG is analogous to a disk drive, and each RSS is analogous to a RAID storage set comprising a plurality of drives.

The PSEGs that implement a particular LUN may be distributed across any number of physical storage disks. Moreover, the physical storage capacity that a particular LUN 102 represents may be configured to implement a variety of storage types offering varying capacity, reliability and availability features. For example, some LUNs may represent striped, mirrored and/or parity-protected storage. Other LUNs may represent storage capacity that is configured without striping, redundancy or parity protection.

In one exemplary implementation, an RSS comprises a subset of the physical disks in a logical device allocation domain (LDAD), and may include from six to eleven physical drives (which can change dynamically). The physical drives may be of disparate capacities. The physical drives within an RSS may be assigned indices (e.g., 0, 1, 2, ..., 11) for mapping purposes, and may be organized as pairs (i.e., adjacent odd and even indices) for RAID-1 purposes. One problem with large RAID volumes comprising many disks is that the probability of a disk failure increases significantly as more drives are added. A sixteen-drive system, for example, is twice as likely to experience a drive failure (or, more critically, two simultaneous drive failures) as an eight-drive system. In accordance with the invention, because data protection is spread within an RSS, and not across multiple RSSs, a disk failure in one RSS has no effect on the availability of any other RSS. Hence, an RSS that implements data protection must sustain two drive failures within that RSS, rather than two failures anywhere in the system. Because of the pairing in RAID-1 implementations, not only must two drives fail within a particular RSS, but a particular one of the drives in the RSS must be the second to fail (i.e., the second failed drive must be paired with the first failed drive). This atomization of storage sets into multiple RSSs, each of which can be managed independently, improves the performance, reliability, and availability of data throughout the system.

A SAN manager appliance 109 is coupled to a management logical disk set (MLD) 111, which is a metadata container describing the logical structures used to create the LUNs 112a, 112b, the LDADs 103a, 103b, and other logical structures used by the system. A portion of the physical storage capacity available in the storage pool 110 is reserved as quorum space 113 and cannot be allocated to the LDADs 103a, 103b, and hence cannot be used to implement the LUNs 112a, 112b. In a particular example, each physical disk that participates in the storage pool 110 has a reserved amount of capacity (e.g., the first "n" physical sectors) that may be designated as quorum space 113. The MLD 111 is mirrored in this quorum space on multiple physical drives, and hence can be accessed even when a drive fails.
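The even/odd pairing rule and the failure-isolation argument above can be checked with a small sketch (function names and the set-based failure model are invented for illustration; the patent describes the rule in prose only):

```python
def raid1_pairs(n_drives):
    """RAID-1 pairs within one RSS: adjacent even/odd drive indices."""
    return [(i, i + 1) for i in range(0, n_drives - 1, 2)]

def data_lost(failed, n_drives):
    """Data is lost only if BOTH drives of the same mirrored pair fail.

    Two failures in different pairs (or in different RSSs) leave every
    mirror with a surviving copy, which is the isolation property the
    text argues for.
    """
    failed = set(failed)
    return any(a in failed and b in failed for a, b in raid1_pairs(n_drives))
```

For an eight-drive RSS, losing drives 0 and 2 (different pairs) is survivable, while losing drives 2 and 3 (one pair) is not.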
In a particular example, at least one physical drive associated with each LDAD 103a, 103b includes a copy of the MLD 111 (designated a "quorum drive"). The SAN management appliance 109 may wish to associate information such as name strings for the LDADs 103a, 103b and the LUNs 112a, 112b, and timestamps for object birthdates. To facilitate this behavior, the management agent uses the MLD 111 to store this information as metadata. The MLD 111 is created implicitly upon the creation of each LDAD 103a, 103b.

For example, the quorum space 113 is used to store information including a physical store ID (a unique ID for each physical drive), version control information, a type (quorum/non-quorum), an RSS ID (identifying the RSS to which this disk belongs), an RSS offset (identifying this disk's relative position within the RSS), a storage cell ID (identifying the storage cell to which this disk belongs), PSEG size information, and state information indicating whether the disk is a quorum disk. This metadata PSEG also contains a PSEG free list for the entire physical store, which may take the form of an allocation bitmap. Additionally, the quorum space 113 contains a PSEG allocation record (PSAR) for every PSEG on the physical disk. A PSAR comprises a PSAR signature, a metadata version, PSAR usage information, and an indication of the RSD to which the PSEG belongs.

The CSLD 114 is another type of metadata container, comprising logical drives that are allocated out of the address space within each LDAD 103a, 103b, but that, unlike the LUNs 112a and 112b, may span multiple LDADs 103a, 103b. Preferably, each LDAD 103a, 103b includes space allocated to the CSLD 114. The CSLD 114 holds metadata describing the logical structure of a given LDAD 103, including a primary logical disk metadata container (PLDMC) that contains an array of descriptors (called RSDMs) describing every RStore used by each LUN 112a, 112b implemented within the LDAD 103a, 103b. The CSLD 114 implements metadata that is typically used for tasks such as disk creation, leveling, RSS merging, RSS splitting, and regeneration. This metadata includes state information for each physical disk, indicating whether the physical disk is "normal" (i.e., operating as expected), "missing" (i.e., unavailable), "merging" (i.e., a missing drive that has reappeared and must be normalized before use), "replace" (i.e., the drive is marked for removal and its data must be copied to a distributed spare), or "regen" (i.e., the drive is unavailable and its data must be regenerated to a distributed spare).

A logical disk directory (LDDIR) data structure within the CSLD 114 is a directory of all LUNs 112a, 112b in any LDAD 103a, 103b. An entry in the LDDIR comprises a universally unique ID (UUID) and an RSD indicating the location of the primary logical disk metadata container (PLDMC) for that LUN 102. The RSD is a pointer to the base RSDM, or entry point, of the corresponding LUN 112a, 112b. In this manner, metadata specific to a particular LUN 112a, 112b can be accessed by indexing into the LDDIR to find the base RSDM of that particular LUN 112a, 112b. The metadata within the PLDMC (e.g., the mapping structures described hereinbelow) can be loaded into memory to implement the particular LUN 112a, 112b.

Hence, the storage pool shown in Fig. 1 implements multiple forms of metadata that can be used for recovery. The PSAR metadata, held at a known location on each disk, contains metadata in a more rudimentary form that is not mapped into memory, but that can be accessed from its known location when needed to regenerate all of the metadata in the system.

Each of the devices shown in Fig. 1 may include memory, mass storage, and a degree of data processing capability sufficient to manage a network connection. Computer program devices in accordance with the present invention are implemented in the memory of the various devices shown in Fig. 1, and are enabled by the data processing capability of those devices.

In an exemplary implementation, an LDAD 103a, 103b may correspond to as few as four disk drives or as many as several thousand disk drives. In particular examples, a minimum of eight drives per LDAD is required to support RAID-1 within an LDAD 103a, 103b using four paired disks. LUNs 112a, 112b defined within an LDAD 103a, 103b may represent from a few megabytes of storage or less up to two terabytes of storage or more. Hence, hundreds or thousands of LUNs 112a, 112b may be defined within a given LDAD 103a, 103b, and thus serve a large storage demand. In this manner, a large enterprise can be served by a single storage pool 110 providing both individual storage dedicated to each workstation in the enterprise and shared storage across the enterprise. Moreover, an enterprise may implement multiple LDADs 103a, 103b and/or multiple storage pools 110 to provide a virtually limitless storage capacity. Hence, logically, the virtual storage system in accordance with the present description offers great flexibility in configuration and access.

Fig. 2 is a schematic illustration of an exemplary storage network 200 that may be used to implement a storage pool such as storage pool 110.
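The LDDIR lookup path described above can be sketched minimally as follows. The dictionary shape and field names are assumptions made for illustration; the patent defines the structures (LDDIR entry, UUID, RSD, PLDMC) in prose, not as code.

```python
# Toy model: the LDDIR maps a LUN's UUID to an RSD, a pointer to the
# base RSDM / entry point of that LUN's primary logical disk metadata
# container (PLDMC). Values here are purely illustrative.
lddir = {
    "uuid-lun-112a": {"rsd": ("ldad_103a", 0)},   # (domain, base RSDM index)
    "uuid-lun-112b": {"rsd": ("ldad_103b", 7)},
}

def find_pldmc_entry(uuid):
    """Index into the LDDIR to locate the base RSDM for a LUN."""
    entry = lddir.get(uuid)
    return None if entry is None else entry["rsd"]
```

In the described system, the metadata found this way would then be loaded into memory to implement the particular LUN.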
The storage network 200 comprises a plurality of storage cells 210a, 210b, 210c connected through a communication network 212. The storage cells 210a, 210b, 210c may be implemented as one or more communicatively connected storage devices. Exemplary storage devices include the STORAGEWORKS line of storage devices commercially available from Hewlett-Packard Corporation (Palo Alto, California, USA). The communication network 212 may be implemented as a private, dedicated network, such as a Fibre Channel (FC) switching fabric. Alternatively, portions of the communication network 212 may be implemented using public communication networks in accordance with a suitable communication protocol, such as, e.g., the Internet Small Computer Systems Interface (iSCSI) protocol.

Client computers 214a, 214b, 214c may access the storage cells 210a, 210b, 210c through a host, such as servers 216, 220. Clients 214a, 214b, 214c may be connected to the file server 216 directly, or via a network 218 such as a local area network (LAN) or a wide area network (WAN). The number of storage cells 210a, 210b, 210c that can be included in any storage network is limited primarily by the connectivity implemented in the communication network 212. For example, a switching fabric comprising a single FC switch can interconnect 256 or more ports, providing the possibility of hundreds of storage cells 210a, 210b, 210c in a single storage network.
主机216、 220通常实现为服务器计算机。 Host 216, 220 is typically implemented as a server computer. 图3是可用于实现主-K的示范计算装置330的示意"i兑明。计算装置330包括一个或多个处理器或处理单元332、系统存储器334以及将包括系统存储器334在内的各种系统组件耦合到处理器332的总线336。总线336表示若干类型的总线结构中的任何一个或多个,其中包括采用各种总线体系结构中任一个的存储器总线或存储器控制器、外围总线、加速图形端口以及处理器或局部总线。系统存储器334包括只读存储器(ROM)338 和随机存取存储器(RAM)340。包含例如在启动过程中帮助计算装置330中的元件之间传送信息的基本例程的基本输入/输出系统(BIOS)342存储在ROM338中。计算装置330还包括用于对硬盘(未示出)进行读取和写入的硬盘驱动器344,以及可包括对可移动磁盘348进行读取和写入的磁盘驱动器346以及用于对可移动光盘352、如CD ROM或其它光介质进行读取或写入的光盘驱动器350 FIG 3 is a variety that can be used to achieve the main -K a schematic exemplary computing device 330 "i against the next computing device 330 includes one or more processors or processing units 332, a system memory 334, and including the system memory 334, including the system components are coupled to one or more of any of several types of bus structure 336. the bus 336 represents a processor bus 332, including a variety of bus architectures using any one of a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus. the system memory 334 includes read only memory (ROM) 338 and random access memory (RAM) 340. the basic embodiment includes, for example to help transfer information between elements in computing device 330 during start Cheng basic input / output system (BIOS) 342 is stored in the ROM338. 
the computing device 330 further includes a hard disk (not shown) for reading and writing the hard disk drive 344, and may comprise a removable magnetic disk 348 for reading and writing a magnetic disk drive 346 and optical disk drive for a removable optical disk 352, such as a CD ROM or other optical media 350 to read or write 。石更盘驱动器344、磁盘驱动器346以及光盘驱动器350通过SCSI接口354或另外的某种适当接口连接到总线336。驱动器及其关联的计算机可读介质为计算装置330提供对计算机可读指令、数据结构、程序模块和其它数据的非易失性存储。 虽然本文所述的示范环境采用硬盘、可移动磁盘348和可移动光盘352,但诸如盒式磁带、闪存卡、数字视盘、随机存取存储器(RAM)、 只读存储器(ROM)之类的其它类型的计算机可读介质也可用于示范操作环境。许多程序模块可存储在硬盘344、》兹盘348、光盘352、 ROM 338 或RAM 340中,其中包括操作系统358、 一个或多个应用程序360、 其它程序模块362以及程序数据364。用户可通过例如键盘366和指示装置368等输入装置将命令和信息输入计算装置330。其它输入装置(未示出)可包括话筒、操纵杆、游戏控制垫、盘式卫星天线、扫描仪等等。 Stone More disk drive 344, magnetic disk drive 346 and optical disk drive 350 to provide the computing device 330 of computer readable instructions through a computer connected to the SCSI interface 354 or some other suitable interface to the bus 336. The drives and their associated readable medium, volatile storage of data structures, program modules, and other data. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 348 and removable optical disk 352, the magnetic tape cassettes, flash memory cards, digital video disks, such as a random access other types of memory (RAM), a read only memory (ROM) of a computer-readable medium can also be used in the exemplary operating environment. a number of program modules may be stored on the hard disk 344, "hereby disk 348, optical disk 352, ROM 338 or RAM 340 , including an operating system 358, one or more application programs 360, other program modules 362, and program data 364. the user via the keyboard 366 and a pointing device, for example, the input device 368 enter commands and information computing device 330. other input devices (not shown) may include a microphone, joystick, game control pad, satellite dish, scanner or the like. 
These and other input devices are connected to the processing unit 332 through an interface 370 that is coupled to the bus 336. A monitor 372 or other type of display device is also connected to the bus 336 via an interface, such as a video adapter 374. Computing device 330 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 376. The remote computer 376 may be a personal computer, a server, a router, a network PC, a peer device, or another common network node, and typically includes many or all of the elements described above relative to computing device 330, although only a memory storage device 378 is illustrated in Figure 3. The logical connections depicted in Figure 3 include a LAN 380 and a WAN 382. When used in a LAN networking environment, computing device 330 is connected to the local area network 380 through a network interface or adapter 384. When used in a WAN networking environment, computing device 330 typically includes a modem 386 or other means for establishing communications over the wide area network 382, such as the Internet. The modem 386, which may be internal or external, is connected to the bus 336 via a serial port interface 356. In a networked environment, program modules depicted relative to computing device 330, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary, and that other means of establishing a communications link between the computers may be used.
Hosts 216, 220 may include host adapter hardware and software to enable a connection to the communication network 212. The connection to the communication network 212 may be through an optical coupling or, more conventionally, through a conductive cable, depending on bandwidth requirements. The host adapter may be implemented as a plug-in card on computing device 330. Hosts 216, 220 may implement any number of host adapters to provide as many connections to the communication network 212 as the hardware and software support. Generally, the data processors of computing device 330 are programmed by means of instructions stored at different times in the various computer-readable storage media of the computer. Programs and operating systems may be distributed, for example, on floppy disks, on CD-ROMs, or electronically, and are installed or loaded into the secondary memory of the computer. At execution, the programs are loaded at least partially into the computer's primary electronic memory. Figure 4 is a schematic illustration of an exemplary implementation of a storage cell 400 that may be used to implement a storage cell such as 210a, 210b, or 210c. Referring to Figure 4, storage cell 400 includes two network storage controllers (NSCs) 410a, 410b, also referred to as disk array controllers, to manage operations and the transfer of data to and from one or more disk drives 440, 442. The NSCs 410a, 410b may be implemented as plug-in cards having microprocessors 416a, 416b and memories 418a, 418b. Each NSC 410a, 410b includes dual host adapter ports 412a, 414a, 412b, 414b that provide an interface to a host, i.e., through a communication network such as a switching fabric.
In a Fibre Channel implementation, the host adapter ports 412a, 412b, 414a, 414b may be implemented as FC N_Ports. Each host adapter port 412a, 412b, 414a, 414b manages the login and interface with the switching fabric, and is assigned a fabric-unique port ID in the login process. The architecture illustrated in Figure 4 provides a fully redundant storage cell; only a single NSC is required to implement a storage cell. Each NSC 410a, 410b further includes a communication port 428a, 428b that enables a communication connection 438 between the NSCs 410a, 410b. The communication connection 438 may be implemented as an FC point-to-point connection, or pursuant to any other suitable communication protocol. In an exemplary implementation, the NSCs 410a, 410b further include a plurality of FCAL ports 420a-426a, 420b-426b that implement Fibre Channel Arbitrated Loop (FCAL) communication connections with a plurality of storage devices, such as arrays of disk drives 440, 442. While the illustrated embodiment implements FCAL connections with the arrays of disk drives 440, 442, it will be understood that the communication connections with the arrays of disk drives 440, 442 may be implemented using other communication protocols. For example, an FC switching fabric or a Small Computer System Interface (SCSI) connection may be used rather than an FCAL configuration.
In operation, the storage capacity provided by the arrays of disk drives 440, 442 may be added to the storage pool 110. When an application requires storage capacity, logic instructions on a host computer 128 establish a LUN from storage capacity available on the arrays of disk drives 440, 442 available in one or more storage sites. It will be appreciated that because a LUN is a logical unit, not necessarily a physical unit, the physical storage space that constitutes the LUN may be distributed across multiple storage cells. Data for the application is stored in one or more LUNs in the storage network. An application that needs to access the data queries the host computer, which retrieves the data from the LUN and forwards it to the application. One or more of the storage cells 210a, 210b, 210c in storage network 200 may implement RAID-based storage. RAID (Redundant Array of Independent Disks) storage systems are disk array systems in which part of the physical storage capacity is used to store redundant data. RAID systems are typically characterized as one of six architectures enumerated under the acronym RAID. A RAID 0 architecture is a disk array system that is configured without any redundancy. Since this architecture is really not a redundant architecture, RAID 0 is often omitted from discussions of RAID systems. A RAID 1 architecture involves storage disks configured according to mirror redundancy.
Original data is stored on one set of disks, while a duplicate copy of the data is kept on separate disks. The RAID 2 through RAID 5 architectures all involve parity-type redundant storage. Of particular interest, a RAID 5 system distributes data and parity information across a plurality of disks. The disks are typically divided into equally sized address areas referred to as "blocks." A set of blocks from each disk that have the same unit address ranges is referred to as a "stripe." In RAID 5, each stripe has N blocks of data and one parity block that contains redundant information for the data in the N blocks. The parity block cycles from one disk to the next, stripe by stripe. For example, in a RAID 5 system having five disks, the parity block for the first stripe might be on the fifth disk, the parity block for the second stripe might be on the fourth disk, the parity block for the third stripe might be on the third disk, and so on. The parity block for succeeding stripes typically "precesses" around the disk drives in a helical pattern (although other patterns are possible). The RAID 2 through RAID 4 architectures differ from RAID 5 in the way they compute and place the parity block on the disks. The particular RAID class implemented is not important. Figure 5 illustrates an exemplary memory representation of a LUN 112a, 112b in one exemplary implementation. A memory representation is essentially a mapping structure, implemented primarily in the memory of an NSC 410a, 410b, that enables the translation of requests from a host, such as host 128 shown in Figure 1, expressed in terms of logical block addresses (LBAs), into read/write commands directed to specific portions of the physical disk drives, such as disk drives 440, 442. It is desirable that the memory representation be small enough to fit within a reasonable memory footprint, so that it is readily accessible in operation with minimal or no requirements for paging the memory representation into and out of the NSC's memory. The memory representation described herein enables each LUN 112a, 112b to implement from 1 megabyte to 2 terabytes of storage capacity. Larger storage capacities per LUN 112a, 112b are contemplated; for purposes of illustration, however, a maximum of 2 terabytes is used in this description. Further, the memory representation enables each LUN 112a, 112b to be defined with any type of RAID data protection, including multi-level RAID protection, as well as supporting no redundancy at all. Moreover, multiple types of RAID data protection may be implemented within a single LUN 112a, 112b, such that a first range of logical disk addresses (LDAs) corresponds to unprotected data, while a second set of LDAs within the same LUN 112a, 112b implements RAID 5 protection. Hence, the data structures implementing the memory representation must be flexible enough to handle this diversity, yet efficient, so that a LUN 112a, 112b does not require excessive data structures. A persistent copy of the memory representation shown in Figure 5 is maintained in the PLDMDC of each LUN 112a, 112b described earlier. The memory representation of a particular LUN 112a, 112b is realized when the system reads the metadata contained in the quorum space 113 to obtain a pointer to the corresponding PLDMDC, then retrieves the PLDMDC and loads a level-2 map (L2MAP) 501. This is performed for each LUN 112a, 112b; in general operation, however, it occurs once when a LUN 112a, 112b is created, after which the memory representation remains in memory while it is in use. The logical disk mapping layer maps an LDA specified in a request to a specific RStore and an offset within that RStore. Referring to the embodiment shown in Figure 5, a LUN may be implemented using an L2MAP 501, an LMAP 503, and a redundancy set descriptor (RSD) 505 as the primary structures for mapping a logical disk address to the physical storage represented by that address. The mapping structures shown in Figure 5 are implemented for each LUN 112a, 112b. A single L2MAP handles the entire LUN 112a, 112b. Each LUN 112a, 112b is represented by multiple LMAPs 503, where the particular number of LMAPs 503 depends on the actual address space that is allocated at any given time. RSDs 505 also exist only for allocated storage space.
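As a rough illustration of the rotating parity placement described above for a five-disk RAID 5 set (first stripe's parity on the fifth disk, second stripe's on the fourth, and so on), the following sketch computes the parity disk for each stripe. It is a toy model under stated assumptions, not the patent's implementation; the function names and the simple left-rotating pattern are illustrative only.

```python
def parity_disk(stripe: int, num_disks: int) -> int:
    """Return the 0-based disk index holding the parity block for a
    0-based stripe number, rotating as described in the text
    (stripe 0 -> last disk, stripe 1 -> next-to-last, ...)."""
    return (num_disks - 1 - stripe) % num_disks

def data_disks(stripe: int, num_disks: int) -> list:
    """Disks holding the N data blocks of the stripe (all but parity)."""
    p = parity_disk(stripe, num_disks)
    return [d for d in range(num_disks) if d != p]

# Five-disk example from the text: stripe 0's parity on disk 4 (the
# fifth disk), stripe 1's on disk 3, and so on, wrapping around.
layout = [parity_disk(s, 5) for s in range(6)]
print(layout)  # [4, 3, 2, 1, 0, 4]
```

Across many stripes, every drive carries a share of the parity, which is the property that distinguishes RAID 5 from the fixed-parity-disk RAID 4 layout mentioned later in the text.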
With this split directory approach, a large storage volume that is sparsely populated with allocated storage, as in the structure shown in Figure 5, efficiently represents the allocated storage while minimizing the data structures required for unallocated storage. The L2MAP 501 includes a plurality of entries, each of which represents 2 gigabytes of address space. For a 2-terabyte LUN 112a, 112b, the L2MAP 501 therefore includes 1024 entries to cover the entire address space in the particular example. Each entry may include state information corresponding to the associated 2-gigabyte region of storage and a pointer to the corresponding LMAP descriptor 503. The state information and pointer are valid only when the corresponding 2 gigabytes of address space have been allocated; hence, a portion of the entries in the L2MAP 501 will be empty or invalid in many applications. The address range represented by each entry in the LMAP 503 is referred to as a logical disk address allocation unit (LDAAU). In the particular implementation, the LDAAU is 1 megabyte. An entry is created in the LMAP 503 for each allocated LDAAU, irrespective of the actual utilization of storage within the LDAAU. In other words, a LUN 102 can grow or shrink in size in increments of 1 megabyte. The LDAAU represents the granularity with which address space within a LUN 112a, 112b can be allocated to a particular storage task. An LMAP 503 exists only for each 2-gigabyte increment of allocated address space.
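The sizing relationships just described — a 2-terabyte maximum address space covered by 2-gigabyte L2MAP entries, allocated in 1-megabyte LDAAUs — reduce to simple arithmetic. The following sketch merely restates those figures; the constant names are invented for illustration and do not come from the patent.

```python
MIB = 2**20
GIB = 2**30
TIB = 2**40

L2_ENTRY_SPAN = 2 * GIB   # address space covered by one L2MAP entry
MAX_LUN = 2 * TIB         # maximum LUN address space in the example
LDAAU = 1 * MIB           # allocation granularity (one LMAP entry)

l2map_entries = MAX_LUN // L2_ENTRY_SPAN   # entries in the L2MAP
lmap_entries = L2_ENTRY_SPAN // LDAAU      # LDAAUs tracked per LMAP
max_lmaps = MAX_LUN // L2_ENTRY_SPAN       # LMAPs for a fully allocated LUN

print(l2map_entries, lmap_entries, max_lmaps)  # 1024 2048 1024
```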
If less than 2 gigabytes of storage are used in a particular LUN 112a, 112b, only one LMAP 503 is required, whereas if 2 terabytes of storage are used, 1024 LMAPs 503 will exist. Each LMAP 503 includes a plurality of entries, each of which optionally corresponds to a redundancy segment (RSEG). An RSEG is an atomic logical unit that is roughly analogous to a PSEG in the physical domain — akin to a logical disk partition of an RStore. In a particular embodiment, an RSEG is a logical unit of storage that spans multiple PSEGs and implements a selected type of data protection. In a preferred implementation, all RSEGs in an RStore are bound to contiguous LDAs. To preserve the underlying physical disk performance for sequential transfers, it is desirable to locate all RSEGs from an RStore adjacently, in order, in terms of LDA space, so as to maintain physical contiguity. If physical resources become scarce, however, it may be necessary to spread RSEGs from an RStore across disjoint areas of a LUN 102. The logical disk address specified in a request 501 selects a particular entry within the LMAP 503 corresponding to a particular RSEG, which in turn corresponds to the 1 megabyte of address space allocated to that particular RSEG#. Each LMAP entry also includes state information about the particular RSEG and an RSD pointer. Optionally, the RSEG# may be omitted, which makes the RStore itself the smallest atomic logical unit that can be allocated.
Omission of the RSEG# decreases the size of the LMAP entries and allows the memory representation of a LUN 102 to require fewer memory resources per megabyte of storage. Alternatively, the RSEG size may be increased rather than omitting the concept of RSEGs altogether, which also reduces the demand for memory resources by relying on a coarser atomic logical unit of storage. Hence, the RSEG size in proportion to the RStore can be changed to meet the needs of a particular application. The RSD pointer points to a specific RSD 505, which contains metadata describing the RStore in which the corresponding RSEG exists. As shown in Figure 5, the RSD includes a redundancy storage set selector (RSSS) that contains a redundancy storage set (RSS) identification, a physical member selection, and RAID information. The physical member selection is essentially a list of the physical drives used by the RStore. The RAID information, or more generally the data protection information, describes the type of data protection, if any, implemented in the particular RStore. Each RSD also includes a number of fields that identify the particular PSEG numbers within the drives of the physical member selection that physically implement the corresponding storage capacity. Each listed PSEG# corresponds to one of the listed members in the physical member selection list of the RSSS. Any number of PSEGs may be included; in a particular embodiment, however, each RSEG is implemented with four to eight PSEGs, dictated by the RAID type implemented by the RStore.
In operation, each request for storage access specifies a LUN 112a, 112b and an address. An NSC, such as NSC 410a, 410b, maps the specified logical drive to a particular LUN 112a, 112b, then loads the L2MAP 501 for that LUN 102 into memory if it is not already present there. Preferably, all of the LMAPs and RSDs for the LUN 102 are loaded into memory as well. The LDA specified by the request is used to index into the L2MAP 501, which in turn points to a specific one of the LMAPs. The address specified in the request is used to determine an offset into the specified LMAP, such that the specific RSEG corresponding to the request-specified address is returned. Once the RSEG# is known, the corresponding RSD is examined to identify the specific PSEGs that are members of the redundancy segment, along with metadata that enables the NSC 410a, 410b to generate drive-specific commands to access the requested data. In this manner, an LDA is readily mapped to the set of PSEGs that must be accessed to implement a given storage request. The L2MAP consumes 4 kilobytes per LUN 112a, 112b regardless of size in the exemplary implementation. In other words, the L2MAP contains entries covering the entire 2-terabyte maximum address range, even when only a small portion of that range is actually allocated to a LUN 112a, 112b. A variable-size L2MAP is contemplated, but such an implementation would add complexity with little savings in memory. LMAP segments consume 4 bytes per megabyte of address space, while RSDs consume 3 bytes per megabyte.
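The lookup path just described — the LDA indexes the L2MAP, and an offset selects an RSEG within the chosen LMAP — amounts to decomposing the address by the two granularities involved. The following is a hypothetical sketch under the stated 2-gigabyte and 1-megabyte granularities; the real NSC structures also carry state bits and RSD pointers, which are omitted here.

```python
LDAAU = 2**20        # 1 MB per LMAP entry / RSEG
L2_SPAN = 2 * 2**30  # 2 GB of address space per L2MAP entry

def resolve(lda: int):
    """Split a logical disk address into (L2MAP index, RSEG index
    within the selected LMAP, byte offset within the RSEG)."""
    l2_index = lda // L2_SPAN
    rseg_index = (lda % L2_SPAN) // LDAAU
    offset = lda % LDAAU
    return l2_index, rseg_index, offset

# The first byte of the third gigabyte lands in L2MAP entry 1:
print(resolve(2 * 2**30))  # (1, 0, 0)
```

In the actual system, the resulting RSEG entry would then yield an RSD identifying the member PSEGs, from which drive-specific commands are generated.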
Unlike the L2MAP, LMAP segments and RSDs exist only for allocated address space. Figure 6 is a schematic illustration of data allocation in a virtualized storage system. Referring to Figure 6, PSEGs are assembled to create redundant stores (RStores). The collection of PSEGs that corresponds to a particular redundancy storage set is referred to as an "RStore." Data protection rules may require that the PSEGs within an RStore be located on separate disk drives, or in separate enclosures, or in different geographic locations. For example, basic RAID 5 rules assume that striped data involves striping across several independent drives. Because each drive comprises multiple PSEGs, however, the redundancy layer of the present invention ensures that the PSEGs are selected from drives that satisfy the desired data protection criteria, as well as data availability and performance criteria. An RStore is allocated in its entirety to a specific LUN 102. An RStore may be partitioned into 1-megabyte segments (RSEGs), as shown in Figure 6. Each RSEG in Figure 6 presents only 80% of the physical disk capacity consumed, as a result of storing a quantity of parity data in accordance with RAID 5 rules. When configured as a RAID 5 storage set, each RStore will comprise data on four PSEGs and parity information on a fifth PSEG (not shown), similar to a RAID 4 storage set. The fifth PSEG does not contribute to the overall storage capacity of the RStore, which appears from a capacity standpoint to have four PSEGs. Across multiple RStores, the parity falls on various drives so as to provide RAID 5 protection. An RStore is essentially a fixed quantity of virtual address space (8 megabytes in the example). An RStore consumes four to eight whole PSEGs, depending on the data protection level: a striped RStore without redundancy consumes 4 PSEGs (4 × 2048-kilobyte PSEGs = 8 megabytes), an RStore with 4+1 parity consumes 5 PSEGs, and a mirrored RStore consumes eight PSEGs, to implement the 8 megabytes of virtual address space. An RStore is analogous to a RAID disk set, differing in that it comprises PSEGs rather than physical disks. An RStore is smaller than a conventional RAID storage volume; hence, in contrast to conventional systems in which a single RAID storage volume serves a LUN, a given LUN 102 will comprise multiple RStores. It is contemplated that drives 405 may be added to and removed from an LDAD 103 over time. Adding drives means that existing data can be spread across more drives, while removing drives means that existing data must be migrated off the departing drive to fill capacity on the remaining drives. This data migration is referred to generally as "leveling." Leveling attempts to spread the data for a given LUN 102 across as many physical drives as possible. The basic purpose of leveling is to distribute the physical allocation of storage represented by each LUN 102 such that the usage on a given physical disk by a given logical disk is proportional to that physical volume's share of the total physical storage available for allocation to the given logical disk. An existing RStore can be modified to use a new PSEG by copying data from one PSEG to another and then changing the data in the appropriate RSD to indicate the new membership. Subsequent RStores created within the RSS will automatically use the new member. Similarly, a PSEG can be retired by copying data from the filled PSEG to an empty PSEG and changing the data in the LMAP 502 to reflect the new PSEG constituents of the RSD. In this manner, the relationship between the physical storage and the logical representation of the storage can be continuously managed and updated to reflect the current storage environment, in a manner that is invisible to users.
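The PSEG-consumption figures given above for an 8-megabyte RStore — 4 PSEGs striped without redundancy, 5 with 4+1 parity, 8 mirrored — can be restated as a small sketch. The protection-level labels below are invented names for illustration, not identifiers from the patent.

```python
PSEG_SIZE = 2048 * 1024        # 2048 KB per PSEG, per the text
RSTORE_VA = 8 * 1024 * 1024    # 8 MB of virtual address space per RStore

def psegs_needed(protection: str) -> int:
    """PSEGs consumed by one RStore under each protection level
    described above (striped, 4+1 parity, mirrored)."""
    data_psegs = RSTORE_VA // PSEG_SIZE        # 4 PSEGs of data
    if protection == "none":                   # plain striping
        return data_psegs                      # 4
    if protection == "parity_4_plus_1":        # 4 data + 1 parity
        return data_psegs + 1                  # 5
    if protection == "mirror":                 # every PSEG duplicated
        return data_psegs * 2                  # 8
    raise ValueError(f"unknown protection level: {protection}")

print([psegs_needed(p) for p in ("none", "parity_4_plus_1", "mirror")])
# [4, 5, 8]
```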
Snapshot Difference Files

In one aspect, the system is configured to implement files referred to herein as snapshot difference files, or snapshot difference objects. A snapshot difference file is an entity designed to combine certain properties of a snapshot (namely, the capacity efficiency of sharing data with successor and predecessor files when data is unchanged during the snapshot difference's lifetime) with the temporal properties of a log file. Snapshot difference files can also be used in conjunction with a base snapclone and other snapshot differences to provide the ability to view different copies of the data through time. A snapshot difference file also captures all new data targeted at a LUN beginning at some point in time, until a decision is made to deactivate the snapshot difference and begin a new one. A snapshot difference file may be constructed much like a snapshot. A snapshot difference may employ a metadata structure similar to that used in snapshots, permitting the snapshot file to share data with a predecessor LUN when appropriate, but containing unique, or different, data when the arrival time of that data falls within the snapshot difference's active period. Successor snapshot differences can reference data in a predecessor snapshot difference, or in the predecessor LUN, via the same mechanism. As an example, assume that LUN A is active until 1:00 p.m. on September 12, 2004. Snapshot difference 1 of LUN A is active from 1:00 p.m. until 2:00 p.m. on September 12, 2004. Snapshot difference 2 of LUN A is active from 2:00 p.m. until 3:00 p.m. on September 12, 2004. The data in each of snapshot difference 1 and snapshot difference 2 of LUN A can be accessed using the same virtual metadata indexing method. Snapshot difference 1 contains the unique data that has changed (at the granularity of the indexing scheme used) from after 1:00 p.m. until 2:00 p.m., and shares all other data with LUN A. Snapshot difference 2 contains the unique data that has changed from after 2:00 p.m. until 3:00 p.m., and shares all other data with either snapshot difference 1 or LUN A. This data is accessed using the indexing and sharing-bit scheme described above, referred to as a snapshot tree. Changes over time are thereby retained: the LUN A view of the data before 1:00 p.m.; the snapshot difference 1 plus LUN A view of the data at and before 2:00 p.m.; and the snapshot difference 2 plus snapshot difference 1 plus LUN A view of the data at and before 3:00 p.m. Alternatively, segmented time views are available: the snapshot difference 1 view of the data from 1:00 p.m. to 2:00 p.m., or the snapshot difference 2 view of the data from 2:00 p.m. to 3:00 p.m. A snapshot difference therefore resembles a log file in that snapshot difference files associate data with time (i.e., they collect new data from time a to time b), while resembling a snapshot structurally (i.e., they have the snapshot properties of speed and space efficiency of data access, and the ability to retain changes over time). By combining key snapshot properties and structure with the temporal model of a log file, snapshot differences can be used to provide always-synchronized mirroring, temporal maintenance of data, simple and space-efficient incremental backup, and a powerful near-instantaneous restore mechanism.

Figure 7 is a schematic, high-level illustration of a storage data architecture incorporating snapshot difference files. Referring to Figure 7, a source volume 710 is copied to a snapclone 720, which may be a pre-normalized snapclone or a post-normalized snapclone. As used herein, the term "pre-normalized snapclone" refers to a snapclone that is synchronized with the source volume 710 before the snapclone is split from the source volume 710. A pre-normalized snapclone represents a point-in-time copy of the source volume as of the moment the snapclone is separated from the source volume. By contrast, a post-normalized snapclone is created at a particular point in time, but the complete, independent copy of the data in the source volume 710 is not completed until some later point in time. A snapshot difference file is created and activated at a particular point in time; thereafter, all I/O operations that affect data in the source volume 710 are contemporaneously copied to the active snapshot difference file. At a desired point in time, or upon reaching a particular threshold (for example, when a snapshot difference file reaches a predetermined size), the snapshot difference file may be closed, and another snapshot difference file may be activated. After a snapshot difference file 730, 732, 734 has been deactivated, it may be merged into the snapclone 720. In addition, snapshot difference files may be backed up to tape drives, such as tape drives 742, 744, 746. In one implementation, a snapshot difference file is created and activated contemporaneously with the creation of a snapclone, such as snapclone 720. I/O operations directed to the source volume 710 are copied to the active snapshot difference file, such as snapshot difference file 730. Snapshot difference files will be explained in greater detail with reference to Figure 8, Figures 9a-9b, and Figures 10-13.

Figure 8 and Figures 9a-9b are schematic illustrations of memory allocation maps for snapshot difference files. Referring briefly to Figure 8, in one implementation the memory map for snapshot difference files begins with a logical disk unit table 800, which is an array of data structures that maps a plurality of logical disk state blocks (LDSBs), which may be numbered sequentially, i.e., LDSB 0, LDSB 1, ... LDSB N. Each LDSB includes a pointer to an LMAP, and pointers to the predecessor and successor LDSBs. The LMAP pointer points to an LMAP mapping data structure which, as described above, ultimately maps to PSEGs (or to disks in a non-virtualized system). The predecessor and successor LDSB fields are used to track the base snapclone and its related snapshot differences. The base snapclone is represented by the LDSB that has no predecessor, and the active snapshot difference is represented by the LDSB that has no successor. Figure 9a illustrates the memory map of a snapshot difference file in which the share bit of the RSD is set. The LMAP 910 structure representing the snapshot difference thus maps to RSD 915, which in turn maps to the predecessor snapshot difference, or base snapclone, represented by the LMAP 920 of a different data structure. This indicates that LMAP 910 is the successor of LMAP 920 and shares its data with LMAP 920. LMAP 920 maps to RSD 925, RSD 925 maps to RSS 930, and RSS 930 maps to physical disk space 935 (or to PSEGs in a virtualized storage system). Figure 9b illustrates the memory map of a snapshot difference file in which the share bit of the RSD is not set, i.e., the data is unshared. LMAP 950 maps to RSD 955, RSD 955 maps to RSS 960, and RSS 960 maps to physical disk space 965 (or to PSEGs in a virtualized storage system).

Figures 10-13 are flowcharts illustrating operations in exemplary methods for creating, reading, writing, and merging snapshot differences, respectively. In the following description, it will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions that execute on the processor or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus, thereby producing a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Accordingly, blocks of the flowchart illustrations support combinations of means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks, can be implemented by special-purpose hardware-based computer systems that perform the specified functions or steps, or by combinations of special-purpose hardware and computer instructions.

Figure 10 is a flowchart illustrating operations in an exemplary method for creating a snapshot difference file. The operations of Figure 10 may be executed in a suitable processor, such as an array controller in a storage system, in response to receiving a request to create a snapshot difference file. Referring to Figure 10, at operation 1010 a new LDSB representing the new snapshot difference is created. Referring again to Figure 8 and assuming that LDSB 0 through LDSB 3 have already been allocated, operation 1010 creates a new LDSB numbered LDSB 4. At operations 1015-1020, the LDSB successor pointers are traversed, beginning with the LDSB of the snapclone, until a null successor pointer is encountered. When the null successor pointer is encountered, the null pointer is reset to point to the newly created LDSB (operation 1025). Hence, in the scenario depicted in Figure 8, the successor pointers are traversed from LDSB 0 to LDSB 2, and on to LDSB 3, which has a null successor pointer. Operation 1025 resets the successor pointer in LDSB 3 to point to LDSB 4. Control then passes to operation 1030, in which the predecessor pointer of the new LDSB is set. In the scenario depicted in Figure 8, the predecessor pointer of LDSB 4 is set to point to LDSB 3. The operations of Figure 10 configure the high-level data map of the snapshot difference file. Lower-level data mapping (i.e., from LMAPs to PSEGs or physical disk segments) may be performed in accordance with the description provided above.

Figure 11 is a flowchart illustrating operations in an exemplary method for executing read operations in an environment that utilizes one or more snapshot difference files. Referring to Figure 11, at operation 1110 a read request is received, for example, at an array controller in a storage system. In an exemplary implementation, the read request may be generated by a host computer, and may identify a logical block address (LBA), or another token of an address, in the storage system to be read. At operation 1115, it is determined whether the read request is directed to a snapshot difference file. In an exemplary implementation, snapshot difference files may be assigned particular LBAs and/or LD identifiers that may be used to make the determination required in operation 1115. If, at operation 1115, it is determined that the read request is not directed to a snapshot difference file, then control passes to operation 1135, and the read request may be executed against the LD identified in the read request in accordance with normal operating procedures. By contrast, if at operation 1115 it is determined that the read request is directed to a snapshot difference file, then operations 1120-1130 are executed to traverse the existing snapshot difference files in search of the LBA identified in the read request. At operation 1120, the active snapshot difference file is examined to determine whether the share bit associated with the LBA identified in the read request is set. If the share bit is not set, indicating that the active snapshot difference file contains new data in the identified LBA, then control passes to operation 1135, and the read request may be executed against the LBA in the snapshot difference file identified in the read request. By contrast, if the share bit is set at operation 1120, then control passes to operation 1125, in which it is determined whether the predecessor of the active snapshot difference file is another snapshot difference file. In an exemplary implementation, this may be determined by analyzing the LDSB identified by the predecessor pointer of the active snapshot difference, as illustrated in Figure 8. If the predecessor is not a snapshot difference file, then control passes to operation 1135, and the read request may be executed against the LD identified in the read request in accordance with normal operating procedures.
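The read traversal just described — consult the active snapshot difference, and if the block's share bit is set, follow the predecessor chain back toward the base LD — can be sketched with a minimal model in which "share bit set" simply means the block is absent from a difference file's own store. The class and function names are hypothetical; the patent's actual structures are the LDSB/LMAP/RSD maps described above.

```python
class SnapDiff:
    """Toy snapshot difference: holds only its unique blocks plus a
    link to its predecessor (an older SnapDiff, or None at the end)."""
    def __init__(self, predecessor=None):
        self.blocks = {}                 # lba -> data unique to this diff
        self.predecessor = predecessor

def read(active, lba, base_ld):
    """Walk from the active difference toward the base LD, returning
    the newest copy of the block (Fig. 11 logic, sketched)."""
    node = active
    while node is not None:
        if lba in node.blocks:           # share bit clear: data lives here
            return node.blocks[lba]
        node = node.predecessor          # share bit set: look further back
    return base_ld[lba]                  # unchanged since the base LD

base = {0: "old-a", 1: "old-b"}
diff1 = SnapDiff()
diff1.blocks[0] = "new-a"                # block 0 changed during diff1
diff2 = SnapDiff(predecessor=diff1)      # later, currently active diff
print(read(diff2, 0, base), read(diff2, 1, base))  # new-a old-b
```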
By contrast, if at operation 1125 it is determined that the predecessor is a snapshot difference file, then operations 1125-1130 are repeated, traversing the existing snapshot difference files until the LBA identified in the read request is located, either in a snapshot difference file or in the LD, whereupon the LBA is read (operation 1135) and returned to the requesting host (operation 1140).

Figure 12 is a flowchart illustrating operations in an exemplary method for executing write operations in an environment that utilizes one or more snapshot difference files. Referring to Figure 12, at operation 1210 a write request is received, for example, at an array controller in a storage system. In an exemplary implementation, the write request may be generated by a host computer, and may identify a logical block address (LBA), or another token of an address, in the storage system to which the write operation is directed. At operation 1215, it is determined whether the write request is directed to a snapshot difference file. In an exemplary implementation, snapshot difference files may be assigned particular LBAs and/or LD identifiers that may be used to make the determination required in operation 1215. If at operation 1215 it is determined that the write request is not directed to a snapshot difference file, then control passes to operation 1245: the write request is executed against the LD identified in the write request in accordance with normal operating procedures, and an acknowledgment is returned to the host computer (operation 1255). By contrast, if at operation 1215 it is determined that the write request is directed to a snapshot difference file, then operations 1220-1230 are executed to traverse the existing snapshot difference files in search of the LBA identified in the write request. At operation 1220, the active snapshot difference file is examined to determine whether the share bit associated with the LBA identified in the write request is set. If the share bit is not set, indicating that the active snapshot difference file contains new data in the identified LBA, then control passes to operation 1250, and the write request may be executed against the LBA in the snapshot difference file identified in the write request. It will be appreciated that the write operation may overwrite only the LBAs changed by the write operation, or the entire RSEG containing the LBAs changed by the write operation, depending upon the configuration of the system. By contrast, if the share bit is set at operation 1220, then control passes to operation 1225, in which it is determined whether the predecessor of the active snapshot difference file is another snapshot difference file. In an exemplary implementation, this may be determined by analyzing the LDSB identified by the predecessor pointer of the active snapshot difference, as illustrated in Figure 8. If the predecessor is not a snapshot difference file, then control passes to operation 1235, and the RSEG associated with the LBA identified in the write request may be copied from the LD identified in the write request into a buffer. Control then passes to operation 1240, and the I/O data in the write request is merged into the buffer. Control then passes to operation 1250: the I/O data is written to the active snapshot difference file, and at operation 1255 an acknowledgment is returned to the host. By contrast, if at operation 1225 it is determined that the predecessor is a snapshot difference file, then operations 1225-1230 are repeated, traversing the existing snapshot difference files until the LBA identified in the write request is located, either in a snapshot difference file or in the LD. Operations 1235-1250 are then executed, whereby the RSEG changed by the write operation is copied into the active snapshot difference file.

As described above, in one implementation, snapshot difference files may be time-bounded, i.e., a snapshot difference file may be activated at a particular point in time and may be deactivated at a particular point in time. Figure 13 is a flowchart illustrating operations in an exemplary method for merging a snapshot difference file into a logical disk, for example, the snapclone with which the snapshot difference is associated. The operations of Figure 13 may run periodically as a background process, or may be triggered by a particular event or sequence of events. The process begins at operation 1310, when a request to merge a snapshot difference file is received. In an exemplary implementation, the merge request may be generated by a host computer, and may identify one or more snapshot difference files and the snapclone into which the snapshot difference file(s) are to be merged. At operation 1315, the "oldest" snapshot difference file is located. In an exemplary implementation, the oldest snapshot difference may be located by following the trail of predecessor/successor pointers in the LDSB map until the LDSB having a predecessor pointer that maps to the snapclone is located. Referring again to Figure 8 and assuming LDSB 4 is the active snapshot difference file, the predecessor of LDSB 4 is LDSB 3; the predecessor of LDSB 3 is LDSB 2; and the predecessor of LDSB 2 is LDSB 0, which is the snapclone. Hence, LDSB 2 represents the "oldest" snapshot difference file, and it will be merged into the snapclone. Operation 1320 initiates an iterative loop through each RSEG in each RSTORE mapped in the snapshot difference file. If at operation 1325 there are no more RSEGs in the RSTORE to analyze, then control passes to operation 1360, which determines whether there are additional RSTOREs to analyze. If at operation 1325 there are further RSEGs in the RSTORE to analyze, then control passes to operation 1330, in which it is determined whether the successor share bit or the predecessor share bit is set for the RSEG. If either of these share bits is set, then the data in the RSEG is shared and need not be copied, so control passes to operation 1355. By contrast, if the share bits are not set at operation 1330, then control passes to operation 1335, and the RSEG is read; the data in the RSEG is then copied (operation 1340) to the corresponding storage location in the predecessor, i.e., the snapclone. At operation 1345, the share bits are reset in the merged snapshot difference's RSEG. If at operation 1355 there are further RSEGs in the RSTORE to analyze, then control passes back to operation 1330. Operations 1330-1355 are repeated until all of the RSEGs in the RSTORE have been analyzed, whereupon control passes to operation 1360, which determines whether there are further RSTOREs to analyze. If at operation 1360 there are further RSTOREs to analyze, then control passes back to operation 1325, which restarts the loop of operations 1330 through 1355 for the selected RSTORE. Operations 1325 through 1360 are repeated until there are no further RSTOREs to analyze at operation 1360, in which case control passes to operation 1365, and the successor pointer in the predecessor LDSB (i.e., the LDSB associated with the snapclone) is set to point to the successor of the merged LDSB. At operation 1370, the merged LDSB is set to "null," effectively terminating the existence of the merged LDSB. This process may be repeated to merge successive "oldest" snapshot difference files into the snapclone. This also frees the merged snapshot difference LDSB for reuse. Described herein is a file structure referred to as a snapshot difference file, and exemplary methods for creating and using snapshot difference files. In an exemplary implementation, snapshot difference files may be implemented in conjunction with a snapclone in remote copy operations. A difference file may be created and activated contemporaneously with the generation of the snapclone. I/O operations that change data in the source volume associated with the snapclone are recorded in the active snapshot difference file. The active snapshot difference file may be closed at a particular point in time, or when a particular threshold associated with the snapshot difference file is satisfied. Another snapshot difference file may be activated contemporaneously with closing the existing snapshot difference file, and the snapshot difference files may be linked by pointers that indicate the temporal relationship between them. After a snapshot difference file has been closed, the file may be merged into the snapclone with which it is associated.

Backup Operations

In exemplary implementations, snapshot difference files may be used to implement incremental backup processes in a storage network and/or storage device that are both space-efficient and time-efficient, because the backup operation need only copy the changes to the source data set. One such implementation is explained with reference to Figure 14, which is a flowchart illustrating operations in an exemplary method of utilizing snapshot difference files in backup operations. The operations of Figure 14 may be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions that execute on the processor or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus, thereby producing a computer-implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Accordingly, blocks of the flowchart illustrations support combinations of means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks, can be implemented by special-purpose hardware-based computer systems that perform the specified functions or steps, or by combinations of special-purpose hardware and computer instructions.

Referring to Figure 14, at operation 1410 a snapclone of the source file is generated. At operation 1415, a snapshot difference file is activated, and at operation 1420, I/O operations that change data in the source volume are recorded in the snapshot difference file. These operations may be executed as described above. At operation 1425, a backup copy of the snapclone is generated. This operation may be executed in response to a backup request entered by a user at a user interface, or in response to an event such as an automatic backup operation driven by a timer, or in response to the source volume or snapclone reaching a particular size. The backup copy may be recorded on another disk drive, on a tape drive, or on other media. The copy operation may be implemented by a background process, such that the copy operation is invisible to users of the storage system. At operation 1435, the active snapshot difference file may be closed, whereupon a new snapshot difference file is activated (operation 1440), and the closed snapshot difference file may be merged into the snapclone file, as described above. At operation 1443, a copy of the snapshot difference file is generated. This operation may be executed in response to a backup request entered by a user at a user interface, or in response to an event such as an automatic backup operation driven by a timer, or in response to the snapshot difference reaching a particular size. Prior to the backup operation, the snapshot difference needs to be deactivated, or closed, and another snapshot difference activated. The backup copy may be recorded on another disk drive, on a tape drive, or on other media. The copy operation may be implemented by a background process, such that the copy operation is invisible to users of the storage system. This type of backup is commonly referred to as an incremental backup, and is typically executed once during the active lifetime of a snapshot difference file. A unique aspect of this type of incremental backup employing snapshot differences is that it provides the ability to back up only the content that has changed, at the granularity level of the virtualization mapping used, without requiring an external application or file system to identify what content has changed. Operations 1430 through 1445 may be repeated indefinitely, continuing to record I/O operations in snapshot difference files and to save copies of the snapshot difference files on suitable storage media. Hence, operations 1410 and 1415 may be executed at a first point in time to generate the snapclone and the snapshot difference file, respectively. Operation 1425 may be executed at a second point in time to generate a backup copy of the snapclone, and operation 1443 may be executed at a third point in time to generate a copy of the snapshot difference file. Subsequent copies of snapshot differences may be generated at subsequent points in time. Figure 14 illustrates a complete sequence of operations for performing incremental backup operations using snapshot difference files. Those skilled in the art will appreciate that operations 1410 through 1420 may be implemented independently, i.e., as described above. The operations of Figure 14 are best suited to a disk-to-tape backup system. Backup operations employing snapshot difference files are space-efficient because only the changes to the source volume are recorded in the backup operation. In addition, snapshot difference files may be used in routines for automatically managing backup operations. Figure 15 is a flowchart illustrating operations in one exemplary implementation of a method for automatically managing backup operations. This type of backup model is best suited to a disk-to-disk backup system. At operation 1510, a backup set indicator signal is received. In one implementation, the backup set indicator signal may be generated by a user at a suitable user interface and indicates a threshold number of snapshot difference files to retain. The threshold may be expressed, for example, as a number of files, as a maximum amount of storage space that may be allocated to snapshot difference files, or as a time parameter. At operation 1515, this threshold number is determined from the signal. If, at operation 1520, the current number of snapshot difference files is greater than the number indicated in the backup set indicator signal, then control passes to operation 1525, and the "oldest" snapshot difference file is merged into the snapclone file using the process described above. Operations 1520 through 1525 may be repeated until the current number of snapshot difference files is less than the threshold indicated in the backup set indicator signal, whereupon control passes to operation 1530, and a snapclone is generated. At operation 1533, the current snapshot difference file is deactivated; at operation 1535, a new snapshot difference may be activated, and I/O operations directed to the source volume are written to the active snapshot difference file (operation 1540). The operations of Figure 15 permit a user of the system to specify a maximum number of snapshot difference files to retain in a background copy operation. As an example, a user might configure a storage system used in an office to open a new snapshot difference file each day, and to make a backup copy of the snapshot difference file each day. The user might further specify a maximum of seven snapshot difference files, such that the system generates a rolling copy of the snapclone file each day. The oldest snapshot difference may be recycled into the snapclone each day. Those skilled in the art will recognize that other configurations are available. Although the described arrangements and procedures have been described in language specific to structural features and/or methodological operations, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or operations described. Rather, the specific features and operations are disclosed as preferred forms of implementing the claimed subject matter.
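The housekeeping loop of Figure 15 — merge the oldest snapshot difference into the snapclone until the retained count drops to the user's threshold — can be sketched as follows. This is a simplified model under stated assumptions (each difference reduced to a dict of changed blocks, held oldest-first); the function names and dict representation are illustrative, not the patent's structures.

```python
def merge_oldest(snapclone, diffs):
    """Fold the oldest snapshot difference into the snapclone (the
    Fig. 13 merge, reduced to dict updates) and drop it from the chain."""
    oldest = diffs.pop(0)            # diffs is ordered oldest-first
    snapclone.update(oldest)         # unique blocks overwrite the clone's
    return snapclone, diffs

def enforce_retention(snapclone, diffs, max_diffs):
    """Fig. 15 loop, sketched: merge oldest differences until at most
    max_diffs remain."""
    while len(diffs) > max_diffs:
        merge_oldest(snapclone, diffs)
    return snapclone, diffs

clone = {0: "base"}
diffs = [{0: "day1"}, {1: "day2"}, {0: "day3"}]
enforce_retention(clone, diffs, 1)
print(clone, diffs)  # {0: 'day1', 1: 'day2'} [{0: 'day3'}]
```

Merging oldest-first preserves correctness: a newer difference's copy of a block always lands on the snapclone after an older one's, matching the pointer-relinking order described for Figure 13.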

Claims (8)

1. A method of performing backup operations in a storage network, comprising: generating a snapclone (720) of a source volume (710) at a first point in time; contemporaneously activating a first snapshot difference file (730) logically linked to the snapclone (720); recording, in the first snapshot difference file (730), I/O operations that change the data set in the source volume (710); closing the first snapshot difference file (730); after the first point in time, generating a backup copy of the snapclone (720) at a second point in time; and after the second point in time, generating a backup copy of the first snapshot difference file (730) at a third point in time.
2. 如权利要求1所述的方法,其特征在于,还包括:在关闭所述第一快照差别文件(730)之后激活第二快照差别文件(732);以及将改变源盘巻(710)中的数据集的I/O操作记录在所述第二快照差别文件(732)中。 2. The method according to claim 1, characterized in that, further comprising: activating a second snapshot difference file (732) after closing the first snapshot difference file (730); and a source disk will change the Volume (710) I / O operation record of the data set in the second snapshot difference file (732) in.
3. 如权利要求2所述的方法,其特征在于,还包括在所述第三时间点之后的第四时间点产生所述第二快照差别文件(732)的备份副本。 3. The method according to claim 2, characterized in that, further comprising generating a backup copy of the second snapshot difference file (732) at a fourth point of time after the third time point.
4. 如权利要求1所述的方法,其特征在于,所述第一快照差别文件(730)包括用于记录对于所述源盘巻(710)执行的1/0操作以及用于记录与各I/O操作关联的时间的数据字段。 4. The method according to claim 1, wherein the first snapshot difference file (730) comprises means for recording and for recording each of the source disk to the Volume (710) operation performed 1/0 I / O data associated with the operation time of the field.
5. 如权利要求1所述的方法,其特征在于,在所述第一时间点之后,在第二时间点产生所述快照克隆(720)的^f分副本包括将备份副本写入永久存储介质。 5. The method according to claim 1, wherein, after the first time point, generating a copy of f ^ sub clones of the snapshot (720) comprises writing the backup copy of the second point in time is stored permanently medium.
6. 如权利要求1所述的方法,其特征在于,在所述第二时间点之后,在第三时间点产生所述第一快照差别文件(730)的备份副本包括将备份副本写入永久存储介质。 6. The method according to claim 1, wherein, after the second time point, the third point in time to generate a backup copy of the first differential snapshot file (730) comprises writing the backup copy of the permanent storage media.
7. 如权利要求2所述的方法,其特征在于,还包括在关闭所述第一快照差别文件(730)之后将关闭的所述第一快照差别文件(730)合并到所述快照克隆(720)中。 7. The method according to claim 2, characterized by further comprising, after closing the first snapshot difference file (730) to close the first snapshot difference file (730) into the snap clone ( 720) in.
8. 如权利要求7所述的方法,其特征在于,还包括在执行所述合并操作之后产生所述快照克隆(720)的备份副本。 8. The method according to claim 7, characterized in that, further comprising generating a backup copy of the snapshot clones after performing the merge operation (720).
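To see why the ordering of time points in claim 1 suffices to protect the data set, the sketch below models the source disk volume as a dict, records changes into a difference file, and shows that the snapclone backup plus the difference-file backup together reconstruct the current data set. All names are illustrative; this is not the patent's implementation.

```python
def take_snapclone(volume):
    """Point-in-time copy of the source volume (first point in time)."""
    return dict(volume)

def record_write(volume, diff_file, block, data):
    """An I/O that changes the data set is applied to the source volume
    and recorded in the active snapshot difference file."""
    volume[block] = data
    diff_file[block] = data

def restore(snapclone_backup, diff_backup):
    """Replaying the difference-file backup (third point in time) over the
    snapclone backup (second point in time) yields the current data set."""
    restored = dict(snapclone_backup)
    restored.update(diff_backup)
    return restored

# usage
volume = {0: "a", 1: "b"}
clone = take_snapclone(volume)      # t1: snapclone + activated difference file
diff = {}
record_write(volume, diff, 1, "B")  # I/O recorded in the difference file
record_write(volume, diff, 2, "c")
# t2: back up `clone`; t3: back up `diff`; restore combines the two copies
```

Only the two changed blocks appear in `diff`, which is the space saving the incremental backup claims rely on.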
CN 200510118887 2004-11-02 2005-11-01 Incremental backup operations in storage networks CN100419664C (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/979395 2004-11-02
US10/979,395 US20060106893A1 (en) 2004-11-02 2004-11-02 Incremental backup operations in storage networks

Publications (2)

Publication Number Publication Date
CN1770088A CN1770088A (en) 2006-05-10
CN100419664C true CN100419664C (en) 2008-09-17

Family

ID=35825391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200510118887 CN100419664C (en) 2004-11-02 2005-11-01 Incremental backup operations in storage networks

Country Status (4)

Country Link
US (1) US20060106893A1 (en)
EP (1) EP1653358A3 (en)
CN (1) CN100419664C (en)
SG (1) SG121964A1 (en)

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060253497A1 (en) * 2005-05-03 2006-11-09 Bulent Abali System and method for associating computational procedures with stored data objects
US20070226396A1 (en) * 2006-03-09 2007-09-27 International Business Machines Corporation System and method for backing up and recovering data in a computer system
US8025650B2 (en) * 2006-06-12 2011-09-27 Wound Care Technologies, Inc. Negative pressure wound treatment device, and methods
US7856424B2 (en) * 2006-08-04 2010-12-21 Apple Inc. User interface for backup management
US7853566B2 (en) * 2006-08-04 2010-12-14 Apple Inc. Navigation of electronic backups
US7853567B2 (en) * 2006-08-04 2010-12-14 Apple Inc. Conflict resolution in recovery of electronic data
US7860839B2 (en) * 2006-08-04 2010-12-28 Apple Inc. Application-based backup-restore of electronic information
US8370853B2 (en) * 2006-08-04 2013-02-05 Apple Inc. Event notification management
US7809688B2 (en) * 2006-08-04 2010-10-05 Apple Inc. Managing backup of content
US8311988B2 (en) 2006-08-04 2012-11-13 Apple Inc. Consistent back up of electronic information
US20080034004A1 (en) * 2006-08-04 2008-02-07 Pavel Cisler System for electronic backup
US7809687B2 (en) * 2006-08-04 2010-10-05 Apple Inc. Searching a backup archive
US20080126442A1 (en) * 2006-08-04 2008-05-29 Pavel Cisler Architecture for back up and/or recovery of electronic data
US9009115B2 (en) * 2006-08-04 2015-04-14 Apple Inc. Restoring electronic information
US20080034017A1 (en) * 2006-08-04 2008-02-07 Dominic Giampaolo Links to a common item in a data structure
US8166415B2 (en) * 2006-08-04 2012-04-24 Apple Inc. User interface for backup management
US20080091744A1 (en) * 2006-10-11 2008-04-17 Hidehisa Shitomi Method and apparatus for indexing and searching data in a storage system
US8589341B2 (en) * 2006-12-04 2013-11-19 Sandisk Il Ltd. Incremental transparent file updating
US7865473B2 (en) * 2007-04-02 2011-01-04 International Business Machines Corporation Generating and indicating incremental backup copies from virtual copies of a data set
US20080307347A1 (en) * 2007-06-08 2008-12-11 Apple Inc. Application-Based Backup-Restore of Electronic Information
US20080307017A1 (en) 2007-06-08 2008-12-11 Apple Inc. Searching and Restoring of Backups
US8745523B2 (en) * 2007-06-08 2014-06-03 Apple Inc. Deletion in electronic backups
US8099392B2 (en) 2007-06-08 2012-01-17 Apple Inc. Electronic backup of applications
US8010900B2 (en) * 2007-06-08 2011-08-30 Apple Inc. User interface for electronic backup
US8725965B2 (en) 2007-06-08 2014-05-13 Apple Inc. System setup for electronic backup
US8429425B2 (en) * 2007-06-08 2013-04-23 Apple Inc. Electronic backup and restoration of encrypted data
US8468136B2 (en) * 2007-06-08 2013-06-18 Apple Inc. Efficient data backup
US8307004B2 (en) * 2007-06-08 2012-11-06 Apple Inc. Manipulating electronic backups
CN100478904C (en) * 2007-07-18 2009-04-15 华为技术有限公司 Method and device for protecting snapshot
US20090210462A1 (en) * 2008-02-15 2009-08-20 Hitachi, Ltd. Methods and apparatus to control transition of backup data
US8135676B1 (en) * 2008-04-28 2012-03-13 Netapp, Inc. Method and system for managing data in storage systems
US20110093437A1 (en) * 2009-10-15 2011-04-21 Kishore Kaniyar Sampathkumar Method and system for generating a space-efficient snapshot or snapclone of logical disks
US8190574B2 (en) 2010-03-02 2012-05-29 Storagecraft Technology Corporation Systems, methods, and computer-readable media for backup and restoration of computer information
US8548944B2 (en) * 2010-07-15 2013-10-01 Delphix Corp. De-duplication based backup of file systems
CN102375696A (en) * 2010-08-20 2012-03-14 英业达股份有限公司 Data storage system and data access method by utilizing virtual disk
US9015119B2 (en) 2010-10-26 2015-04-21 International Business Machines Corporation Performing a background copy process during a backup operation
US8468174B1 (en) 2010-11-30 2013-06-18 Jedidiah Yueh Interfacing with a virtual database system
TW201227268A (en) * 2010-12-20 2012-07-01 Chunghwa Telecom Co Ltd Data backup system and data backup and retrival method
US8984029B2 (en) 2011-01-14 2015-03-17 Apple Inc. File system management
US8943026B2 (en) 2011-01-14 2015-01-27 Apple Inc. Visual representation of a local backup
US8639976B2 (en) * 2011-02-15 2014-01-28 Coraid, Inc. Power failure management in components of storage area network
CN102147754A (en) * 2011-04-01 2011-08-10 奇智软件(北京)有限公司 Automatic backup method and device for driver
JP5289642B1 (en) * 2013-01-25 2013-09-11 株式会社東芝 Backup storage system to backup data, the backup storage devices and methods
CN103164294A (en) * 2013-01-30 2013-06-19 浪潮(北京)电子信息产业有限公司 System, device and method achieving restoring points of computer
US20140279879A1 (en) * 2013-03-13 2014-09-18 Appsense Limited Systems, methods and media for deferred synchronization of files in cloud storage client device
CN103500130B (en) * 2013-09-11 2015-07-29 上海爱数软件有限公司 A method for hot standby data backup dual-real time
CN103581339A (en) * 2013-11-25 2014-02-12 广东电网公司汕头供电局 Storage resource allocation monitoring and processing method based on cloud computing
CN103645970B (en) * 2013-12-13 2017-04-19 华为技术有限公司 A remote copy method to achieve incremental deduplication and snapshots across multiple devices
US9645739B2 (en) 2014-09-26 2017-05-09 Intel Corporation Host-managed non-volatile memory
CN104375904A (en) * 2014-10-30 2015-02-25 浪潮电子信息产业股份有限公司 Disaster recovery backup method based on snapshot differentiation data transmission
US9880904B2 (en) * 2014-12-12 2018-01-30 Ca, Inc. Supporting multiple backup applications using a single change tracker
US9870289B2 (en) * 2014-12-12 2018-01-16 Ca, Inc. Notifying a backup application of a backup key change
CN106325769A (en) * 2016-08-19 2017-01-11 华为技术有限公司 Data storage method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6003044A (en) 1997-10-31 1999-12-14 Oracle Corporation Method and apparatus for efficiently backing up files using multiple computer systems
US6560615B1 (en) 1999-12-17 2003-05-06 Novell, Inc. Method and apparatus for implementing a highly efficient, robust modified files list (MFL) for a storage system volume
US6594744B1 (en) 2000-12-11 2003-07-15 Lsi Logic Corporation Managing a snapshot volume or one or more checkpoint volumes with multiple point-in-time images in a single repository
CN1452737A (en) 2000-04-24 2003-10-29 微软公司 Method and apparatus for providing volume snapshot dependencies in computer system
CN1530841A (en) 2003-03-13 2004-09-22 威达电股份有限公司 Memory system and method by snapshot back-up function

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4751702A (en) * 1986-02-10 1988-06-14 International Business Machines Corporation Improving availability of a restartable staged storage data base system that uses logging facilities
US5857208A (en) * 1996-05-31 1999-01-05 Emc Corporation Method and apparatus for performing point in time backup operation in a computer system
US7203732B2 (en) * 1999-11-11 2007-04-10 Miralink Corporation Flexible remote data mirroring
US6915397B2 (en) * 2001-06-01 2005-07-05 Hewlett-Packard Development Company, L.P. System and method for generating point in time storage copy
US6665815B1 (en) * 2000-06-22 2003-12-16 Hewlett-Packard Development Company, L.P. Physical incremental backup using snapshots
US20020104008A1 (en) * 2000-11-30 2002-08-01 Cochran Robert A. Method and system for securing control-device-lun-mediated access to luns provided by a mass storage device
US6629203B1 (en) * 2001-01-05 2003-09-30 Lsi Logic Corporation Alternating shadow directories in pairs of storage spaces for data storage
US6594745B2 (en) * 2001-01-31 2003-07-15 Hewlett-Packard Development Company, L.P. Mirroring agent accessible to remote host computers, and accessing remote data-storage devices, via a communcations medium
US6606690B2 (en) * 2001-02-20 2003-08-12 Hewlett-Packard Development Company, L.P. System and method for accessing a storage area network as network attached storage
US6697881B2 (en) * 2001-05-29 2004-02-24 Hewlett-Packard Development Company, L.P. Method and system for efficient format, read, write, and initial copy processing involving sparse logical units
US6728848B2 (en) * 2001-06-11 2004-04-27 Hitachi, Ltd. Method and system for backing up storage system data
US6895467B2 (en) * 2001-10-22 2005-05-17 Hewlett-Packard Development Company, L.P. System and method for atomizing storage
US7296125B2 (en) * 2001-11-29 2007-11-13 Emc Corporation Preserving a snapshot of selected data of a mass storage system
US20030120676A1 (en) * 2001-12-21 2003-06-26 Sanrise Group, Inc. Methods and apparatus for pass-through data block movement with virtual storage appliances
US7036043B2 (en) * 2001-12-28 2006-04-25 Storage Technology Corporation Data management with virtual recovery mapping and backward moves
US6763436B2 (en) * 2002-01-29 2004-07-13 Lucent Technologies Inc. Redundant data storage and data recovery system
US6829617B2 (en) * 2002-02-15 2004-12-07 International Business Machines Corporation Providing a snapshot of a subset of a file system
US7143307B1 (en) * 2002-03-15 2006-11-28 Network Appliance, Inc. Remote disaster recovery and data migration using virtual appliance migration
US7844577B2 (en) * 2002-07-15 2010-11-30 Symantec Corporation System and method for maintaining a backup storage system for a computer system
US7100089B1 (en) * 2002-09-06 2006-08-29 3Pardata, Inc. Determining differences between snapshots
JP4402992B2 (en) * 2004-03-18 2010-01-20 株式会社日立製作所 Backup system and method, and program


Also Published As

Publication number Publication date
SG121964A1 (en) 2006-05-26
CN1770088A (en) 2006-05-10
EP1653358A3 (en) 2009-10-28
US20060106893A1 (en) 2006-05-18
EP1653358A2 (en) 2006-05-03

Similar Documents

Publication Publication Date Title
US7444485B1 (en) Method and apparatus for duplicating computer backup data
US8095852B2 (en) Data recorder
JP4336129B2 (en) System and method for managing a plurality of snapshots
CN101410783B (en) Content addressable storage array element
EP2391968B1 (en) System and method for secure and reliable multi-cloud data replication
JP4568502B2 (en) Information processing systems and management device
US7464124B2 (en) Method for autonomic data caching and copying on a storage area network aware file system using copy services
US6985995B2 (en) Data file migration from a mirrored RAID to a non-mirrored XOR-based RAID without rewriting the data
US7308528B2 (en) Virtual tape library device
US6718436B2 (en) Method for managing logical volume in order to support dynamic online resizing and software raid and to minimize metadata and computer readable medium storing the same
US7769961B2 (en) Systems and methods for sharing media in a computer network
US7146465B2 (en) Determining maximum drive capacity for RAID-5 allocation in a virtualized storage pool
US5379391A (en) Method and apparatus to access data records in a cache memory by multiple virtual addresses
US7636814B1 (en) System and method for asynchronous reads of old data blocks updated through a write-back cache
CN102349053B (en) System and method for redundancy-protected aggregates
US7882081B2 (en) Optimized disk repository for the storage and retrieval of mostly sequential data
US7389393B1 (en) System and method for write forwarding in a storage environment employing distributed virtualization
KR101247083B1 (en) System and method for using a file system automatically backup a file as generational file
JP4292882B2 (en) A plurality of snapshots maintenance method and server devices and storage devices
US7818535B1 (en) Implicit container per version set
US8560879B1 (en) Data recovery for failed memory device of memory device array
US6460054B1 (en) System and method for data storage archive bit update after snapshot backup
JP4662548B2 (en) Snapshot management apparatus and method, and a storage system
US7613806B2 (en) System and method for managing replication sets of data distributed over one or more computer systems
US6119208A (en) MVS device backup system for a data processor using a data storage subsystem snapshot copy capability

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C14 Grant of patent or utility model
C41 Transfer of patent application or patent right or utility model
CF01