WO2014170810A1 - Synchronous mirroring of very fast storage arrays - Google Patents

Synchronous mirroring of very fast storage arrays

Info

Publication number
WO2014170810A1
WO2014170810A1 (PCT/IB2014/060689)
Authority
WO
WIPO (PCT)
Prior art keywords
storage system
data record
data
secure
write request
Prior art date
Application number
PCT/IB2014/060689
Other languages
English (en)
Inventor
Alex Winokur
Original Assignee
Axxana (Israel) Ltd.
Priority date
Filing date
Publication date
Application filed by Axxana (Israel) Ltd. filed Critical Axxana (Israel) Ltd.
Publication of WO2014170810A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2071 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring using a plurality of controllers
    • G06F 11/2076 Synchronous techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F 16/275 Synchronous replication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F 11/2082 Data synchronisation

Definitions

  • the present invention relates generally to storage systems, and specifically to synchronously mirroring data from a primary storage system to a secondary storage system.
  • storage device mirroring replicates data stored on a primary data storage system to a secondary data storage system, in order to ensure redundancy.
  • mirroring can be implemented either synchronously or asynchronously.
  • in synchronous mirroring, a host communicating with the storage system receives a write acknowledgement after data is successfully written to both of the mirrored storage devices.
  • in asynchronous mirroring, the host receives the write acknowledgement after the data is written to a first of the mirrored storage devices, and the data can be written to a second of the mirrored storage devices at a later time.
  • a method including receiving, from a mirroring application executing on a first storage system configured as a primary storage system, a first write request including a data record, storing the data record to a secure storage system, conveying, upon storing the data record to the secure storage system, a first message to the mirroring application indicating a successful completion of the first write, conveying, from the secure storage system to a second storage system configured as a secondary storage system, a second write request including the data record, receiving a second message from the secondary storage system indicating a successful completion of the second write request, and upon receiving the second message, deleting the data record from the secure storage system.
  • conveying the second write request includes retrieving currently-available data records from the secure storage system, compressing the retrieved data records into compressed data, and conveying the compressed data to the secondary storage system.
  • the secondary storage system includes a secondary storage device, and the method includes receiving, by the secondary storage system, the compressed data, decompressing the compressed data into decompressed data records, and storing the decompressed data records to the secondary storage device.
  • the secure storage system is in communication with the primary storage system over a first network having a first throughput, and wherein the secure storage system is in communication with the secondary storage system over a second network having a second throughput less than the first throughput.
  • the primary storage system includes a primary storage device having a first throughput, and wherein the secondary storage device has a second throughput less than the first throughput.
  • the data record includes a first amount of storage space
  • the secure storage system has a second amount of storage space available
  • the method includes conveying a write request incompletion acknowledgement to the primary storage system upon determining that the first amount of storage space is greater than the second amount of storage space.
  • the steps of receiving the first write request, storing the data record, conveying the first message, conveying the second write request, receiving the second message and deleting the data record are performed by a protection processor in communication with the primary storage system, the secure storage system and the secondary storage system. In further embodiments, the steps of receiving the first write request, storing the data record, conveying the first message, conveying the second write request, receiving the second message and deleting the data record are performed by a storage processor in the primary storage system, the storage processor in communication with the secure storage system and the secondary storage system.
  • the primary storage system includes a primary storage device
  • the method includes prior to the step of receiving the first write request, receiving, by the mirroring application from a data source, the data record, and conveying the first write request to the secure storage system upon successfully storing the data record to the primary storage device.
  • the method includes receiving, by the mirroring application, the first message, and wherein successfully storing the data record to the primary storage device has a first latency, and wherein conveying the first write request and receiving the first message has a second latency, and the method includes positioning the secure storage system so that the second latency exceeds the first latency by no more than a predefined threshold.
  • the first storage device has a first throughput
  • the secure storage system is in communication with the primary storage system over a network having a second throughput greater than or equal to the first throughput
  • a storage facility including a first storage system arranged as a primary storage system and configured to execute a mirroring application, a second storage system arranged as a secondary storage system, a secure storage system, and a mirroring processor configured, to receive from the mirroring application, a first write request including a data record, to store the data record to the secure storage system, to convey, upon storing the data record to the secure storage system, a first message to the mirroring application indicating a successful completion of the first write request, to convey, from the secure storage system to the secondary storage system, a second write request including the data record, to receive a second message from the secondary storage system indicating a successful completion of the second write request, and upon receiving the second message, to delete the data record from the secure storage system.
  • a computer software product including a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to receive, from a mirroring application executing on a first storage system configured as a primary storage system, a first write request including a data record, to store the data record to a secure storage system, to convey, upon storing the data record to the secure storage system, a first message to the mirroring application indicating a successful completion of the first write request, to convey, from the secure storage system to a second storage system configured as a secondary storage system, a second write request including the data record, to receive a second message from the secondary storage system indicating a successful completion of the second write request, and upon receiving the second message, to delete the data record from the secure storage system.
  • FIG. 1 is a block diagram that schematically illustrates a storage facility comprising a secure storage system configured to perform synchronous mirroring between a primary storage system and a secondary storage system, in accordance with an embodiment of the present invention.
  • FIG. 2 is a block diagram of the secure storage system, in accordance with an embodiment of the present invention.
  • FIG. 3 is a block diagram of a protection system in the storage facility, in accordance with an embodiment of the present invention.
  • FIG. 4 is a flow diagram that schematically illustrates a method of performing synchronous mirroring, in accordance with an embodiment of the present invention.
  • due to limited network bandwidth (also referred to herein as throughput), the additional replication-related latency overhead of a replicated write operation is composed of at least the following factors:
  • the typical latency overhead is around 1 to 2 milliseconds.
  • Embodiments of the present invention provide methods and systems for a data facility to synchronously mirror a primary storage system to a secondary storage system.
  • the primary storage system executes a mirroring application
  • the data facility also comprises a secure storage system located in close proximity to the primary storage system.
  • a storage facility can mirror data by first synchronously mirroring data from the primary storage system to the secure storage system and then asynchronously mirroring the data from the secure storage system to the secondary storage system, wherein the synchronous and the asynchronous mirroring can be performed independently of one another.
  • the secure storage system can compress the data prior to asynchronously mirroring the data to the secondary storage system. Compression may typically be performed off-line, so as to avoid incurring high latency.
  • embodiments of the present invention can synchronously mirror data from a first storage system comprising the primary storage system to a combined system comprising the secure storage system and the secondary storage system.
  • embodiments of the present invention enable the data facility to replicate data to a remote location and recover all of the data in a case of a disaster, in spite of the mirroring application's high write throughput and low write latency requirements.
  • the data in the secure storage system can be stored to a solid-state disk (SSD) or a flash memory storage device which has similar performance characteristics to the ultrafast primary storage.
  • the interconnect may not impose a throughput limitation that would be imposed by a wide-area network (WAN) connection when synchronously mirroring the data to the secondary storage system.
  • a local storage facility comprising the primary and the secure storage systems communicates with a remote storage facility comprising the secondary storage system via a wide area network (WAN) whose throughput is significantly lower than the ultrafast short-distance interconnect that couples the primary and the secure storage systems.
  • FIG. 1 is a block diagram that schematically illustrates a data facility 20 configured to mirror a data record 22 from a data source such as a primary storage system 24 in a local site 26 to a secondary storage system 28 in a remote site 30.
  • a source of data record 22 comprises an application server 32 (e.g., a database server or an email server) in communication with primary storage system 24 via a storage area network 34.
  • application server 32 (or other data source devices) can communicate with primary storage system 24 via any other type of high-speed network.
  • Primary storage system 24 comprises a primary storage processor 36, a primary memory 38 and one or more primary storage devices 40
  • secondary storage system 28 comprises a secondary storage processor 42, a secondary memory 44 and one or more secondary storage devices 46.
  • Storage devices 40 and 46 may comprise hard disks, computer memory devices (e.g., SSDs and flash memory storage devices), and/or devices based on any other suitable storage technology.
  • storage devices 40 and 46 comprise internal processors (not shown) that perform local data storage and retrieval-related functions.
  • primary storage device 40 has a first throughput (i.e., rate of data transfer), and secondary storage device 46 has a second throughput that is less than the first throughput.
  • primary storage device 40 may comprise an ultrafast storage device such as an SSD, and secondary storage device 46 may comprise a disk drive.
  • the primary and the secondary storage systems are physically located at two separate sites.
  • the sites are chosen to be sufficiently distant from one another, so that a disaster event in one of the sites will be unlikely to affect the other.
  • regulatory restrictions recommend a separation greater than 200 miles, although any other suitable distance can also be used.
  • the primary storage system is co-located with the application server at the local site, and the secondary storage system is located at the remote site.
  • processor 36 executes, from memory 38, a mirroring application 48 that mirrors data record 22 by storing replicas of the data record produced by application server 32 in the primary and the secondary storage systems.
  • the mirroring application accepts write commands from application server 32, the commands comprising the data record to be stored.
  • the mirroring application stores the data record in the primary and secondary storage systems using methods which are described hereinbelow.
  • mirroring application 48 may execute on a separate processor in facility 20.
  • facility 20 implements synchronous mirroring by synchronously mirroring data record 22 from primary storage system 24 to a secure storage system 50, and asynchronously mirroring the data record from the secure storage system to secondary storage system 28.
  • a mirroring application 48 conveys data record 22 for temporary storage in secure storage system 50.
  • a protection system 52 communicates with mirroring application 48 and secure storage system 50 via a high-speed local area network (LAN) 54 or storage area network (SAN), and communicates with secondary storage system 28 via a wide area network (WAN) 56.
  • storage device 40 may comprise an ultrafast storage device such as an SSD. Therefore, to reduce latency between the primary storage system and the secure storage system during a mirroring operation, facility 20 may be configured so that storage device 40 has a first throughput, and LAN 54 has a second throughput greater than or equal to the first throughput.
  • protection system 52 communicates with mirroring application 48 using a suitable communication link, such as a Fibre Channel (FC) link, an Internet Protocol (IP) link or a bus such as a peripheral component interconnect (PCI) bus.
  • protection system 52 is typically located in close proximity to the mirroring application.
  • the mirroring application is typically configured to forward every mirrored write command it accepts, as well as any acknowledgments it receives, to protection system 52.
  • Protection system 52 may communicate with mirroring application 48 using any suitable protocol, such as the small computer systems interface (SCSI), network file system (NFS) and common internet file system (CIFS) protocols, which are commonly used for communication between servers and storage devices.
  • protection system 52 is coupled to secure storage system 50 via a high-speed interconnect such as a Universal Serial Bus (USB) connection (not shown).
  • protection system 52 stores a copy of the data in secure storage system 50.
  • the copy of the data record is cached in secure storage system 50 until an acknowledgement indicating successful storage is received from processor 42.
  • protection system 52 deletes the cached copy of the data record from secure storage system 50.
  • secure storage system 50 is mapped as virtual storage drives of protection system 52.
  • the USB connection also provides electrical power for powering the secure storage system.
  • secure storage system 50 is constructed in a durable manner, so as to enable the secure storage system to withstand disaster events while protecting the cached data.
  • An example of the mechanical construction of secure storage system 50, as well as additional configurations of facility 20, is described in U.S. Patent No. 7,707,453, to Winokur, whose disclosure is incorporated herein by reference.
  • Remote site 30 comprises a recovery processor 60 coupled to secondary storage system 28 via a high-speed interconnect 62 such as a USB connection.
  • recovery processor 60 extracts the data record stored in secure storage system 50 and uses the extracted data record to reconstruct the data in the secondary storage system.
  • the cached data record stored in secure storage system 50 can be used to reconstruct the data following the failure.
  • secure storage system 50 enables facility 20 to provide low latency write commands, regardless of the distance between the primary and the secondary storage systems.
  • facility 20 provides guaranteed mirroring of the data at both the primary and the secondary storage systems.
  • the data can be recovered and reconstructed within a relatively short time frame from secure storage system 50.
  • the operation of the protection system and secure storage system is transparent to the mirroring application and to the application server.
  • protection system 52 and secure storage system 50 can be installed as an add-on to existing mirroring applications.
  • FIG. 2 is a block diagram that schematically illustrates secure storage system 50, in accordance with an embodiment of the present invention.
  • Secure storage system 50 comprises a memory 70, which holds the cached copy of the data records corresponding to write commands, as described hereinabove.
  • Memory 70 typically comprises an ultrafast storage device, and secure storage system 50 can be positioned in close proximity to the primary storage system so as to reduce input/output (I/O) latency.
  • memory 70 may comprise a non-volatile memory device such as an SSD, a flash device or an electrically erasable programmable read only memory (EEPROM) device.
  • memory 70 may comprise any other suitable non-volatile or battery-backed memory device.
  • memory 70 may comprise one or more memory devices. Further details of secure storage system 50 are described in detail hereinbelow.
  • protection system 52 may "listen" to the acknowledgement messages arriving from the secondary storage system. When an acknowledgement of a particular write command is received by protection system 52, the protection system deletes the corresponding record from secure storage system 50. However, in some system configurations it is complicated or otherwise undesirable to intercept the acknowledgement messages by protection system 52.
  • mirroring applications manage a finite size buffer of pending write commands, i.e., write commands that were sent to the secondary storage device but are not yet acknowledged. When this buffer is full, the mirroring application refuses to accept additional write commands from the application server.
  • memory 70 of secure storage system 50 can be dimensioned to hold at least the same number of data records 22 as the maximum number of write commands in the mirroring application buffer.
  • the mirroring application can be configured so that its buffer size matches the size of memory 70. Because the size of memory 70 and the size of the mirroring application buffer are matched, when a new write command is sent to protection system 52, the oldest record in secure storage system 50 can be safely deleted; this size-matched deletion scheme is sketched in a code example following this list.
  • mirroring applications are configured to allow a maximum number of pending write commands, without necessarily holding a buffer.
  • the mirroring application tracks the number of write commands sent to the secondary storage system and the number of acknowledgements received, and maintains a current count of unacknowledged (i.e., pending) write commands. When the number of pending write commands reaches a predetermined limit, no additional write commands are accepted from the application server.
  • the size of memory 70 can be dimensioned to match the maximum number of pending write commands.
  • the mirroring application can be configured so that the maximum allowed number of pending write commands matches the size of memory 70.
  • any other suitable mechanism can be used to avoid overflow in memory 70 by matching the size of memory 70 with the maximum size of data pending to be acknowledged by the secondary storage system.
  • the data can be reconstructed quickly, without physically connecting secure storage system 50 directly to the recovery processor at the remote site.
  • secure storage system 50 communicates with a remote computer (not shown), which is remotely connected to recovery processor 60 using any suitable communication link, such as over a wireless Internet connection.
  • the data records stored in the secure storage system are then transmitted via the remote computer to the recovery processor.
  • the data records transmitted between the remote computer and the recovery processor are encrypted, so as to maintain data security when communicating over wireless channels and over public media such as the Internet.
  • the data records are already encrypted by protection system 52 before they are stored in secure storage system 50. Any software needed for extracting and/or transmitting the records may be stored in the memory of secure storage system 50 along with the data records so that any computer having Internet access (or other access means) and a suitable interface for connecting to secure storage system 50 can be used as a remote computer.
  • secure storage system 50 comprises a control unit 72 (also referred to herein as processor 72), which performs the various data storage and management functions of the secure storage system.
  • Control unit 72 may comprise a microprocessor running suitable software. Alternatively, control unit 72 may be implemented in hardware, or using a combination of hardware and software elements.
  • an interface circuit, such as USB interface 74, provides operating voltage to the various elements of secure storage system 50.
  • secure storage system 50 comprises a homing device 76, coupled to a homing antenna 78.
  • Homing device 76 comprises a transmitter or transponder, which transmits a radio frequency (RF) homing signal in order to enable secure storage system 50 to be located and retrieved following a disaster event.
  • homing device 76 begins to operate when secure storage system 50 detects that a disaster event occurred.
  • Device 76 may comprise an active, passive or semi-active homing device.
  • homing device 76 is powered by a power source 80.
  • Power source 80 may comprise a rechargeable battery, which is charged by electrical power provided via USB interface 74 during normal system operation. Alternatively, power source 80 may comprise any other suitable battery. In some embodiments, power source 80 is used to power control unit 72 and/or memory 70.
  • secure storage system 50 communicates with LAN 54 via a high-speed LAN adapter 82 such as an InfiniBand™ adapter.
  • secure storage system 50 comprises a wireless transmitter 84 coupled to a communication antenna 86.
  • Transmitter 84 is typically powered by power source 80.
  • elements of secure storage system 50 are coupled to a bus 88.
  • transmitter 84 is used for transmitting the records stored in memory 70 to a wireless receiver (not shown), when the communication between secure storage system 50 and protection system 52 is broken due to a disaster event.
  • transmitter 84 and antenna 86 serve as alternative communication means for transmitting information from secure storage system 50.
  • data stored in the secure storage system can be retrieved and reconstructed within minutes.
  • other retrieval methods, which involve physically locating and retrieving the secure storage system and may involve detaching memory 70 from the unit, may sometimes take several hours or even days.
  • Transmitter 84 may comprise, for example, a cellular transmitter, a WiMax transmitter, or any other suitable data transmitter type.
  • the wireless receiver may be coupled to secondary storage system 28 or to recovery processor 60.
  • the functions of homing device 76, transmitter 84, and antennas 78 and 86 can be performed by a single transmitter and a single antenna.
  • For example, several methods are known in the art for determining the position of a cellular transmitter. Such methods can be used to locate wireless transmitter 84 when it transmits data from secure storage system 50, thus eliminating the need for a separate homing device.
  • FIG. 3 is a block diagram that schematically illustrates protection system 52, in accordance with an embodiment of the present invention.
  • Protection system 52 comprises a bus 90 that couples a protection processor 92, a memory 94, a LAN adapter 96 coupled to LAN 54, and a WAN adapter 98 coupled to WAN 56.
  • protection processor 92 can bridge LAN 54 with WAN 56.
  • LAN adapter 96 typically comprises ultrafast communication ports, such as InfiniBand™ ports with remote direct memory access (RDMA) connectivity, and ports that can directly connect to SSD or flash storage drives, such as Fibre Channel (FC), Serial Attached SCSI (SAS), Serial ATA (SATA), Ethernet ports, or PCI bus devices.
  • protection system 52 may be implemented internally to the primary storage system.
  • secure storage system 50 is coupled to LAN 54
  • protection system 52 is coupled to LAN 54 and WAN 56
  • facility 20 is configured to enable the secure storage system to communicate with the WAN via the protection system.
  • LAN 54 (e.g., InfiniBand™) has a first throughput, and WAN 56 (e.g., a PPP leased line) has a second throughput that is typically less than the first throughput.
  • Processors 36, 42, 60, 72 and 92 typically comprise general-purpose central processing units (CPU), which are programmed in software to carry out the functions described herein.
  • the software may be downloaded to primary storage system 24, secondary storage system 28, secure storage system 50, protection system 52 and recovery processor 60 in electronic form, over a network, for example, or it may be provided on non-transitory tangible media, such as optical, magnetic or electronic memory media.
  • some or all of the functions of the processors may be carried out by dedicated or programmable digital hardware components, or using a combination of hardware and software elements.
  • FIG. 4 is a flow diagram that schematically illustrates a method of synchronously mirroring data record 22 from primary storage system 24 to secondary storage system 28, in accordance with an embodiment of the present invention.
  • the steps of the flow diagram in FIG. 4 are performed by a "mirroring processor".
  • the mirroring processor comprises protection processor 92.
  • the mirroring processor comprises storage processor 36 that also executes mirroring application 48.
  • mirroring application 48 receives a write command from a data source such as application server 32.
  • the write command comprises data record 22.
  • the mirroring application stores the data record to storage device 40, and upon the primary storage system successfully storing the data record to the primary storage device, the mirroring application conveys the first write request to the mirroring processor; this write path is sketched in a code example following this list.
  • the mirroring processor receives, from mirroring application 48, a request to write the data record to secondary storage system 28.
  • the mirroring processor determines if there is sufficient free space in memory 70 to store data record 22.
  • the data record comprises a first amount of storage space
  • memory 70 has a second amount of storage space available, and there is sufficient space to store the data record to memory 70 if the first amount of storage space is less than or equal to the second amount of storage space.
  • the mirroring processor stores the data record to memory 70 in a store step 104, and conveys a completion acknowledgement to mirroring application 48 in an acknowledgement step 106.
  • primary storage device 40 may comprise an ultrafast storage device, wherein a write operation to the primary storage device may take approximately 300 microseconds to complete.
  • a first latency comprises a time duration for mirroring application 48 to successfully store data record 22 to storage device 40
  • a second latency comprises a time duration starting with the mirroring application conveying the write request to the mirroring processor and ending with the mirroring application receiving the acknowledgement from the mirroring processor.
  • secure storage system 50 and protection system 52 may be positioned in facility 20 so that the second latency exceeds the first latency by no more than a predefined threshold. For example, if the threshold is twenty percent, then secure storage system 50 and protection system 52 can be positioned in facility 20 so that the second latency exceeds the first latency by no more than twenty percent.
  • in a second comparison step 108, if data compression is enabled in facility 20, then the mirroring processor retrieves the data record from memory 70, and compresses data record 22 into compressed data in a compression step 110.
  • compression can be performed efficiently if it can process large volumes of data rather than small data chunks.
  • the mirroring processor conveys, to secondary storage system 28, a write request comprising the compressed data, and upon receiving the write request, processor 42 decompresses the compressed data back into data record 22, stores the decompressed data record to storage device 46, and conveys a completion acknowledgement to the mirroring processor.
  • upon receiving the completion acknowledgement from processor 42 in a second receive step 114, the mirroring processor deletes data record 22 from memory 70 in a deletion step 116, and the method ends.
  • in step 118, the mirroring processor retrieves the data record from memory 70 and conveys, to secondary storage system 28, a write request comprising data record 22.
  • processor 42 stores data record 22 to storage device 46, and conveys a completion acknowledgement to the mirroring processor.
  • the method continues with step 116.
  • the write request received in step 100 may also be referred to herein as a first write request, and the write requests conveyed in steps 112 and 118 may also be referred to herein as second write requests.
  • the completion acknowledgement conveyed in step 106 may also be referred to herein as a first message indicating a successful completion of the first write, and the completion acknowledgements received in steps 114 and 120 may also be referred to herein as second messages indicating a successful completion of the second write request.
  • in step 122, the mirroring processor conveys a storage device full error message to mirroring application 48, and the method ends.
  • protection system 52 accepts all write operations from the primary storage system (e.g., comprising an ultrafast storage device 40) and places the accepted write operations in secure storage system 50. Protection system 52 is typically in charge of allocating space in the secure storage system and writing the data to memory 70. A possible implementation is described by the Data Protect and Allocate Buffer procedures listed hereinbelow.
  • Protection system 52 also periodically retrieves data from secure storage system 50, compresses the data, and transmits the compressed data to the secondary storage system as described supra. Once the data is successfully stored on the secondary storage system, the protection system deletes it from the secure storage system in order to free space.
  • a possible implementation is described by the Data Transmit and Receive Data at Remote Site procedures listed hereinbelow; an illustrative sketch of such a transmit/receive flow appears after this list.
  • protection system 52 can perform one of the following depending on user preference:
  • the protection system machine can read missing data from the primary storage system and perform a resynchronization.
  • secure storage system 50 can start transmitting the contents of memory 70 via transmitter 84 to recovery processor 60, and the recovery processor can perform a recovery operation as described in the patent application cited above.
  • the major data structure maintained by the algorithms presented herein stores data associated with each write operation that is held within memory 70.
  • these WRITE DATA elements are maintained by pseudo-code procedures; an illustrative sketch of such a record appears after this list.
  • in these procedures, BufferFrame is a pointer to the buffer allocated in memory 70 for each accepted write.
  • protection system 52 can manage buffers within secure storage system 50 by performing buffer-management operations; illustrative sketches of such operations appear after this list.
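
The following Python sketch is illustrative only; it is not the WRITE DATA or Allocate Buffer pseudo-code itself, but one plausible shape, under stated assumptions, for the per-write data element held in memory 70 and for a space check that refuses a write when memory 70 lacks room. The names WriteData, volume_id, offset, buffer_frame and allocate_buffer are assumptions introduced for illustration.

# Illustrative sketch (assumed names, not the patent's listing): a per-write
# data element and a buffer allocator for the secure store (memory 70).
from dataclasses import dataclass
from typing import Optional


@dataclass
class WriteData:
    """Data kept for each pending write operation cached in the secure store."""
    record_id: int                  # identifies the write for acknowledgement matching
    volume_id: int                  # target volume on the secondary storage system
    offset: int                     # byte offset of the write on that volume
    payload: bytes                  # the data record itself
    buffer_frame: Optional[memoryview] = None   # pointer to the allocated buffer


def allocate_buffer(free_bytes: int, write: WriteData) -> Optional[memoryview]:
    """Allocate a buffer frame for the write, or return None when memory 70 is
    full, in which case a write-incompletion acknowledgement would be conveyed."""
    if len(write.payload) > free_bytes:
        return None
    frame = memoryview(bytearray(write.payload))   # copy the payload into a frame
    write.buffer_frame = frame
    return frame


if __name__ == "__main__":
    w = WriteData(record_id=1, volume_id=7, offset=4096, payload=b"data record 22")
    print(allocate_buffer(free_bytes=1 << 20, write=w) is not None)   # True

A record of this kind would be created when the protection system accepts a write, and deleted once the secondary storage system acknowledges the corresponding second write request.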
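
A second sketch, again illustrative and using hypothetical names (SecureStore, mirror_write, drain_to_secondary), traces the write path of the flow diagram: the synchronous leg caches the data record in a stand-in for secure storage system 50 and acknowledges the mirroring application at once, while the asynchronous leg later conveys cached records to a stand-in for secondary storage system 28 and deletes each record once it is acknowledged.

# Illustrative sketch (assumed names): synchronous and asynchronous legs of
# the mirroring-processor write path.
from collections import OrderedDict


class SecureStore:
    """Stand-in for secure storage system 50: a bounded, ordered record cache."""

    def __init__(self, capacity_bytes: int):
        self.capacity_bytes = capacity_bytes
        self.used_bytes = 0
        self.records = OrderedDict()               # record_id -> data

    def has_space(self, data: bytes) -> bool:
        return len(data) <= self.capacity_bytes - self.used_bytes

    def store(self, record_id: int, data: bytes) -> None:
        self.records[record_id] = data
        self.used_bytes += len(data)

    def delete(self, record_id: int) -> None:
        self.used_bytes -= len(self.records.pop(record_id))


def mirror_write(record_id: int, data: bytes, secure: SecureStore) -> str:
    """Synchronous leg: cache the record in the secure store and acknowledge
    (store step 104 and acknowledgement step 106, or the step 122 error path)."""
    if not secure.has_space(data):
        return "WRITE_INCOMPLETE"                  # storage-device-full error
    secure.store(record_id, data)
    return "WRITE_COMPLETE"


def drain_to_secondary(secure: SecureStore, secondary: dict) -> None:
    """Asynchronous leg: convey cached records to the secondary storage system
    and delete each record once it is acknowledged (steps 112/118, 114/120, 116).
    Acknowledgement is simulated here as an immediate success."""
    for record_id, data in list(secure.records.items()):
        secondary[record_id] = data                # second write request
        secure.delete(record_id)                   # delete on acknowledgement


if __name__ == "__main__":
    secure, secondary = SecureStore(capacity_bytes=1 << 20), {}
    print(mirror_write(1, b"payroll row 42", secure))   # WRITE_COMPLETE
    drain_to_secondary(secure, secondary)
    print(len(secure.records), len(secondary))          # 0 1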
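
An additional sketch, with hypothetical names, shows the size-matching scheme described earlier in this list: when memory 70 is dimensioned to hold at least as many records as the mirroring application's maximum number of pending writes, accepting a new write can safely evict the oldest cached record.

# Illustrative sketch (assumed names): oldest-record eviction when the cache
# size is matched to the mirroring application's pending-write limit.
from collections import OrderedDict


def accept_write(cache: OrderedDict, max_pending: int, record_id: int, data: bytes) -> None:
    """Cache a new record; because the cache holds at least max_pending records,
    the oldest record can be safely deleted whenever a new write arrives and the
    cache is already full."""
    if len(cache) >= max_pending:
        cache.popitem(last=False)                  # evict the oldest record
    cache[record_id] = data


if __name__ == "__main__":
    cache, limit = OrderedDict(), 3
    for i in range(5):
        accept_write(cache, limit, i, bytes([i]))
    print(list(cache))                             # [2, 3, 4]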
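
The last sketch approximates the Data Transmit and Receive Data at Remote Site behavior described above, with standard-library zlib compression standing in for whatever compression the protection system would apply: currently-available records are batched and compressed before crossing the lower-throughput WAN, decompressed and stored at the remote site, and deleted from the secure store once acknowledged. The function names and the JSON-over-zlib batch format are assumptions, not part of the patent text.

# Illustrative sketch (assumed names and batch format): compress-and-transmit
# on the local side, decompress-and-store on the remote side.
import json
import zlib


def transmit_batch(secure_records: dict) -> bytes:
    """Compress all currently cached records into one batch for the WAN link
    (retrieval and off-line compression, as in compression step 110)."""
    serialized = json.dumps(
        {str(rid): data.hex() for rid, data in secure_records.items()}
    ).encode()
    return zlib.compress(serialized)


def receive_at_remote(batch: bytes, secondary_store: dict) -> list:
    """Decompress the batch at the remote site, store each record to the
    secondary storage device, and return the acknowledged record ids."""
    records = json.loads(zlib.decompress(batch).decode())
    acked = []
    for rid, hex_data in records.items():
        secondary_store[int(rid)] = bytes.fromhex(hex_data)
        acked.append(int(rid))
    return acked


if __name__ == "__main__":
    secure_records = {1: b"record one", 2: b"record two"}
    secondary_store = {}
    batch = transmit_batch(secure_records)
    for rid in receive_at_remote(batch, secondary_store):
        secure_records.pop(rid)                    # delete on acknowledgement
    print(secure_records, secondary_store)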

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

In some embodiments, the present invention provides methods, storage facilities and computer software products that include receiving, from a mirroring application (48) executing on a first storage system (24) configured as a primary storage system, a first write request comprising a data record, and storing the data record to a secure storage system (50). Upon storing the data record to the secure storage system, a first message indicating successful completion of the first write is conveyed to the mirroring application, and a second write request comprising the data record is conveyed from the secure storage system to a second storage system (28) configured as a secondary storage system. A second message indicating successful completion of the second write request is received from the secondary storage system, and upon receiving the second message, the data record is deleted from the secure storage system.
PCT/IB2014/060689 2013-04-14 2014-04-13 Synchronous mirroring of very fast storage arrays WO2014170810A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361811749P 2013-04-14 2013-04-14
US61/811,749 2013-04-14

Publications (1)

Publication Number Publication Date
WO2014170810A1 (fr)

Family

ID=51730878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2014/060689 WO2014170810A1 (fr) 2013-04-14 2014-04-13 Synchronous mirroring of very fast storage arrays

Country Status (1)

Country Link
WO (1) WO2014170810A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040083245A1 (en) * 1995-10-16 2004-04-29 Network Specialists, Inc. Real time backup system
WO2006111958A2 (fr) * 2005-04-20 2006-10-26 Axxana (Israel) Ltd. Remote data mirroring system
US20090287967A1 (en) * 2008-05-19 2009-11-19 Axxana (Israel) Ltd. Resilient Data Storage in the Presence of Replication Faults and Rolling Disasters

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10769028B2 (en) 2013-10-16 2020-09-08 Axxana (Israel) Ltd. Zero-transaction-loss recovery for database systems
US10379958B2 (en) 2015-06-03 2019-08-13 Axxana (Israel) Ltd. Fast archiving for database systems
WO2017053749A1 (fr) * 2015-09-24 2017-03-30 The Florida International University Board Of Trustees Techniques and systems for local independent failure domains
US9891849B2 (en) 2016-04-14 2018-02-13 International Business Machines Corporation Accelerated recovery in data replication environments
US10082973B2 (en) 2016-04-14 2018-09-25 International Business Machines Corporation Accelerated recovery in data replication environments
US9946617B2 (en) 2016-06-06 2018-04-17 International Business Machines Corporation Optimized recovery in data replication environments
US10437730B2 (en) 2016-08-22 2019-10-08 International Business Machines Corporation Read cache synchronization in data replication environments
US10592326B2 (en) 2017-03-08 2020-03-17 Axxana (Israel) Ltd. Method and apparatus for data loss assessment

Similar Documents

Publication Publication Date Title
WO2014170810A1 (fr) Synchronous mirroring of very fast storage arrays
US11709739B2 (en) Block-level single instancing
US20210382791A1 (en) Data Backup Technique for Backing Up Data to an Object Storage Service
CN110597455B Method for increasing flash memory endurance through improved metadata management
EP2328089B1 Remote data mirroring system
US20190278719A1 (en) Primary Data Storage System with Data Tiering
US7134044B2 (en) Method, system, and program for providing a mirror copy of data
US8495304B1 (en) Multi source wire deduplication
US11593217B2 (en) Systems and methods for managing single instancing data
US20160342545A1 (en) Data memory device
CN110462578B System for storing data in tape volume containers
EP1875350B1 Remote data mirroring system
US10769028B2 (en) Zero-transaction-loss recovery for database systems
US8688632B2 (en) Information processing system and method of controlling the same
US8527724B2 (en) Blocked based end-to-end data protection for extended count key data (ECKD)
US8438332B2 (en) Apparatus and method to maintain write operation atomicity where a data transfer operation crosses a data storage medium track boundary
US9384147B1 (en) System and method for cache entry aging
US10866742B1 (en) Archiving storage volume snapshots
US9715428B1 (en) System and method for cache data recovery
CN110941514B Data backup method, recovery method, computer device and storage medium
US9672180B1 (en) Cache memory management system and method
US20160365874A1 (en) Storage control apparatus and non-transitory computer-readable storage medium storing computer program
US11068299B1 (en) Managing file system metadata using persistent cache
US8832395B1 (en) Storage system, and method of storage control for storage system
US9348704B2 (en) Electronic storage system utilizing a predetermined flag for subsequent processing of each predetermined portion of data requested to be stored in the storage system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14784733

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14784733

Country of ref document: EP

Kind code of ref document: A1