US11675612B2 - Achieving near-zero added latency for modern any point in time VM replication - Google Patents
Achieving near-zero added latency for modern any point in time VM replication
- Publication number
- US11675612B2 (application US16/803,626)
- Authority
- US
- United States
- Prior art keywords
- splitter
- journal
- data
- recited
- replication
- Prior art date: 2020-02-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1458—Management of the backup or restore process
- G06F11/1464—Management of the backup or restore process for networked environments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1471—Saving, restoring, recovering or retrying involving logging of persistent data for recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2023—Failover techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2097—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements maintaining the standby controller/processing unit updated
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/1658—Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45575—Starting, stopping, suspending or resuming virtual machine instances
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/815—Virtual
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2207/00—Indexing scheme relating to arrangements for writing information into, or reading information out from, a digital store
- G11C2207/22—Control and timing of internal memory operations
- G11C2207/2272—Latency related aspects
Definitions
- Embodiments of the present invention generally relate to data replication. More particularly, at least some embodiments of the invention relate to systems, hardware, software, computer-readable media, and methods for reducing latency in data replication processes.
- IO latency is the round-trip time (RTT) between IO-intercepting software, such as a splitter, and a replication appliance (RPA).
- This RTT is typically a few hundred microseconds, which is on the same order of magnitude as the latency of spindle disks accessed over SAN/iSCSI, or even of slow SSDs.
- Communication speeds, such as along IO paths, are not keeping pace with memory and storage write speeds.
- As a result, latency in communications is becoming increasingly problematic.
- FIG. 1 discloses aspects of a comparative example for illustration purposes.
- FIG. 2 discloses aspects of an example architecture and IO flow.
- FIG. 3 discloses aspects of a VM migration.
- FIG. 4 discloses aspects of an example method.
- FIG. 5 discloses aspects of an example computing device.
- Embodiments of the present invention generally relate to data replication. More particularly, at least some embodiments of the invention relate to systems, hardware, software, computer-readable media, and methods for reducing latency in data replication processes.
- Example embodiments of the invention concern the reduction of latency that may be associated with IO processes involving a VM. More particularly, example embodiments of the invention embrace approaches that may eliminate added IO latency for a protected VM, while maintaining any-point-in-time restore capabilities.
- In one approach, IO latency between an application of a protected machine and storage was reduced by intercepting application IOs, copying IO data and IO metadata to NVM, and asynchronously transmitting the IOs to a replication site.
- In contrast, example embodiments may provide for a splitter that writes, to a splitter journal in a hypervisor, only (i) the metadata of the IO, and (ii) a pointer to the IO data.
- The pointer points to the IO data residing in an IO data buffer in VM memory, which may be referred to herein as a VM IO buffer.
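- The following is a minimal, hypothetical Python sketch (not part of the patent) of a splitter-journal entry that records only the IO metadata and a reference to the IO data in the VM IO buffer, avoiding a mem-copy of the data itself; all names and fields are illustrative assumptions.

```python
# Hypothetical splitter-journal entry: metadata plus a zero-copy reference to the IO data.
from dataclasses import dataclass
import time

@dataclass
class SplitterJournalEntry:
    volume_id: str        # which VM disk the write targets
    offset: int           # byte offset of the write
    length: int           # size of the write in bytes
    timestamp: float      # when the IO was intercepted
    data_ref: memoryview  # pointer-like reference into the VM IO buffer

def record_write(journal: list, volume_id: str, offset: int, io_buffer: bytearray) -> None:
    """Append metadata and a reference to the IO data; the data itself is not copied."""
    journal.append(SplitterJournalEntry(
        volume_id=volume_id,
        offset=offset,
        length=len(io_buffer),
        timestamp=time.time(),
        data_ref=memoryview(io_buffer),  # zero-copy view of the buffered IO data
    ))

journal: list = []
io_data = bytearray(b"x" * 4096)           # a 4K write held in the VM IO buffer
record_write(journal, "vmdk-1", 8192, io_data)
assert journal[0].data_ref.obj is io_data  # the journal references, rather than copies, the data
```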
- Embodiments of the invention may be beneficial in a variety of respects.
- one or more embodiments of the invention may provide one or more advantageous and unexpected effects, in any combination, some examples of which are set forth below. It should be noted that such effects are neither intended, nor should be construed, to limit the scope of the claimed invention in any way. It should further be noted that nothing herein should be construed as constituting an essential or indispensable element of any invention or embodiment. Rather, various aspects of the disclosed embodiments may be combined in a variety of ways so as to define yet further embodiments. Such further embodiments are considered as being within the scope of this disclosure.
- one advantageous aspect of at least some embodiments of the invention is that copying of IO data from an IO data buffer to a splitter journal, that is, a mem-copy operation, may be avoided.
- latency associated with a write IO may be reduced by writing only IO metadata and a pointer to a splitter journal.
- An embodiment of the invention may help to maintain write order fidelity even when an associated VM moves from one host to another host.
- An embodiment of the invention may enable a VM to recover after a crash or other unplanned event by persistently saving the splitter journal in NVM, and thus avoiding the need for a full sweep after a crash.
- embodiments of the invention may be implemented in connection with systems, software, and components, that individually and/or collectively implement, and/or cause the implementation of, replication operations and operations related to replication.
- New and/or modified data collected and/or generated in connection with some embodiments may be stored in a data protection environment that may take the form of a public or private cloud storage environment, an on-premises storage environment, and hybrid storage environments that include public and private elements. Any of these example storage environments, may be partly, or completely, virtualized.
- the storage environment may comprise, or consist of, a datacenter which is operable to service read, write, delete, backup, restore, and/or cloning, operations initiated by one or more clients or other elements of the operating environment.
- Where a backup comprises groups of data with different respective characteristics, that data may be allocated, and stored, to different respective targets in the storage environment, where the targets each correspond to a data group having one or more particular characteristics.
- Example cloud computing environments, which may or may not be public, include storage environments that may provide data protection functionality for one or more clients.
- Another example of a cloud computing environment is one in which processing and other services may be performed on behalf of one or more clients.
- Some example cloud computing environments in connection with which embodiments of the invention may be employed include, but are not limited to, Microsoft Azure, Amazon AWS, Dell EMC Cloud Storage Services, and Google Cloud. More generally however, the scope of the invention is not limited to employment of any particular type or implementation of cloud computing environment.
- the operating environment may also include one or more clients that are capable of collecting, modifying, and creating, data.
- a particular client may employ, or otherwise be associated with, one or more instances of each of one or more applications that perform such operations with respect to data.
- Such clients may comprise physical machines or virtual machines (VMs).
- devices in the operating environment may take the form of software, physical machines, or VMs, or any combination of these, though no particular device implementation or configuration is required for any embodiment.
- data protection system components such as databases, storage servers, storage volumes (LUNs), storage disks, replication services, backup servers, restore servers, backup clients, and restore clients, for example, may likewise take the form of software, physical machines or virtual machines (VM), though no particular component implementation is required for any embodiment.
- Where VMs are employed, a hypervisor or other virtual machine monitor (VMM) may be employed to create and control the VMs.
- the term VM embraces, but is not limited to, any virtualization, emulation, or other representation, of one or more computing system elements, such as computing system hardware.
- a VM may be based on one or more computer architectures, and provides the functionality of a physical computer.
- a VM implementation may comprise, or at least involve the use of, hardware and/or software.
- An image of a VM may take the form of a .VMX file and one or more .VMDK files (VM hard disks) for example.
- As used herein, the term ‘data’ is intended to be broad in scope. Thus, that term embraces, by way of example and not limitation, data segments such as may be produced by data stream segmentation processes, data chunks, data blocks, atomic data, emails, objects of any type, files of any type including media files, word processing files, spreadsheet files, and database files, as well as contacts, directories, sub-directories, volumes, and any group of one or more of the foregoing.
- Example embodiments of the invention are applicable to any system capable of storing and handling various types of objects, in analog, digital, or other form.
- While terms such as document, file, segment, block, or object may be used by way of example, the principles of the disclosure are not limited to any particular form of representing and storing data or other information. Rather, such principles are equally applicable to any object capable of representing information.
- As used herein, the term ‘backup’ is intended to be broad in scope.
- example backups in connection with which embodiments of the invention may be employed include, but are not limited to, full backups, partial backups, clones, snapshots, and incremental or differential backups.
- With reference to FIG. 1, a brief overview is provided by way of a comparative example that will aid in the illustration of various concepts within the scope of the invention.
- embodiments of the invention may be employed in connection with native VM replication processes.
- an example operating environment may include a VM 102 that hosts one or more applications (not shown) which write IOs.
- the VM 102 may communicate with a hypervisor 104 that includes a splitter IO interception module 106 .
- The hypervisor 104, particularly the splitter IO interception module 106, may also communicate with a storage environment 110 which may take the form of one or more VM disks.
- A VM replication flow implemented by the splitter IO interception module 106 might proceed as set forth in the first numbered flow (processes 1-6) under the Description heading below.
- the RPA 108 may keep a journal 109 of incoming IOs, and will send the intercepted IOs asynchronously to a replica location 112 where they can be saved in an Any-PIT Journal 113 .
- Significant latency may be added by certain aspects of the replication flow, such as process 2 (send a copy of the IO to the RPA).
- the process 2 may add several hundred microseconds to the overall replication flow. This is because the splitter IO interception module 106 would copy the incoming IO, and then send the IO copy to the RPA.
- communication processes such as 2 and 3 may be significantly slower than VM disk processes such as processes 4 and 5.
- The latency introduced by process 2 would adversely impact high-performance applications such as those hosted by the VM 102, which are connected to high-end storage with ultra-low latency, such as the storage environment 110.
- The storage environment 110 may not realize its full potential, since the communication latency of process 2, for example, may significantly undercut the benefit provided by high-speed storage. This may be of particular concern in view of the fact that the latency incurred by storage continues to drop. For example, some NVMe devices and SSDs have dropped below 100 μs latency.
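- As a rough, hedged illustration of why process 2 dominates the write path, the following sketch uses assumed, order-of-magnitude numbers (drawn from the figures mentioned above, not measurements) to compare the comparative flow of FIG. 1 with a flow in which the replication traffic is moved off the IO path.

```python
# Illustrative latency budget; every number below is an assumption for the example.
intercept_us   = 1      # process 1: intercept the write IO
rpa_rtt_us     = 300    # FIG. 1 processes 2-3: copy the IO to the RPA and wait for its ack
journal_us     = 1      # FIG. 2 process 2: write metadata plus a pointer to the splitter journal
storage_rtt_us = 100    # send the IO to a low-latency disk and receive its ack
ack_app_us     = 1      # acknowledge the IO to the application

fig1_path = intercept_us + rpa_rtt_us + storage_rtt_us + ack_app_us
fig2_path = intercept_us + journal_us + storage_rtt_us + ack_app_us
print(f"comparative write path: ~{fig1_path} us")  # dominated by the RPA round trip
print(f"example write path:     ~{fig2_path} us")  # RPA traffic moved off the IO path
```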
- At least some embodiments may employ non-volatile memory (NVM), such as Storage Class Memory (SCM).
- NVM may be employed that takes the form of persistent memory installed inside the protected machine, with very low latency (such as about 1-10 μs) and relatively low cost.
- NVM may be configured as another disk, as a cache layer for storage such as Dell EMC VxFlex, or as memory-addressable and thus not accessed using storage/IO constructs.
- NVM also embraces Non-Volatile Random Access Memory (NVRAM), and Non-Volatile Dual In-line Memory Modules (NVDIMM).
- DIMMs may include, for example, NAND+DRAM DIMMs and XP DIMMs/ReRAM.
- Other NVMs that may be employed in some embodiments include Fast NAND SSDs, and 3D XP SSD.
- the scope of the invention is not limited to the use of any particular type of NVM. Thus, the foregoing are presented only by way of example, and are not intended to limit the scope of the invention in any way. Following is a discussion of some aspects of example embodiments of the invention.
- The operating environment may include one or more protected machines, such as a VM 202 that hosts one or more applications (not shown) which write IOs.
- a hypervisor 204 which may communicate with, and control, the VM 202 , may include a splitter 206 that comprises a splitter IO interception module 208 , and a journal manager 210 that may include one or more splitter journals 212 that may communicate with a replication module 214 .
- the journal manager 210 may reside on memory-accessed NVM, or storage-accessed NVMe, associated with the hypervisor 204 .
- The splitter 206 may communicate with the storage 216, which may comprise one or more VM disks. Communication between the splitter 206 and the replication module 214 may take place asynchronously relative to IO operations involving any one or more of the VM 202, hypervisor 204, and storage 216.
- one splitter journal 212 is provided for a consistency group, so that disks of the consistency group can be maintained in a consistent state with respect to each other. That is, the protected machine 202 may have multiple disks that may need to be maintained in a consistent state with each other. In the case of a physical machine, there may only be one splitter journal 212 for that machine. Where the protected machine 202 is a VM however, the hypervisor 204 may include multiple splitter journals 212 .
- Process 6 of FIG. 2 is performed asynchronously relative to processes 1, 3, 4, and 5 of FIG. 2, with the result that process 6 does not impose any latency on the write IO path (processes 1 and 3) between the application of the VM 202 and the storage environment 216. This is significant because, as noted herein with respect to the example of FIG. 1, the latency imposed by processes 2 and 3 in FIG. 1 may be significantly greater than the latency imposed, whether individually or collectively, by processes 1, 4, 5, and 6 in FIG. 1.
- a significant reduction in IO latency may be obtained by reducing or eliminating the latency associated with communications to/from a replication module 214 , such as has been done in the example embodiment of FIG. 2 with respect to replication module 214 .
- the latency associated with IO operations may be further reduced by retaining the IO data intercepted in process 1 in an IO data buffer (not shown) of the hypervisor 204 , rather than copying that IO data to the splitter journal 212 . That is, and as noted herein, only the IO metadata, and a pointer to the IO data in the IO data buffer, are stored by process 2 in the splitter journal 212 . Because the IO metadata and the pointer are, individually and collectively, relatively small in size, only a very short amount of time is needed to store them in the splitter journal 212 . Thus, process 2 may impose little, or no, material latency on an IO process, such as a write IO process, that includes processes 1, 3, 4, and 5.
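- To make the size difference concrete, the following hypothetical sketch compares the number of bytes that would have to be written to the splitter journal per IO under a mem-copy approach versus the metadata-plus-pointer approach described here; the field layout and sizes are assumptions for illustration only.

```python
# Compare journal write sizes: copying the IO data versus recording metadata + pointer.
import struct

io_data = b"\0" * (256 * 1024)                            # a 256 KiB application write
metadata = struct.pack("<QQQ", 42, 8192, len(io_data))    # assumed fields: volume id, offset, length
pointer  = struct.pack("<Q", id(io_data))                 # stand-in for an 8-byte buffer address

memcopy_journal_write = len(metadata) + len(io_data)      # mem-copy approach: data + metadata
pointer_journal_write = len(metadata) + len(pointer)      # this approach: metadata + pointer only

print(memcopy_journal_write)   # 262168 bytes written to the journal, scales with the IO size
print(pointer_journal_write)   # 32 bytes written to the journal, independent of the IO size
```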
- a mem-copy process, inside the hypervisor 204 kernel, in which the IO data is written to a splitter journal may be eliminated. In general, this may be implemented through the use of logic in the VM memory management of the hypervisor OS.
- example embodiments may include and employ hypervisor memory, which may be used by the VM 202 , that may be allocated in 4K aligned chunks. That is, each 4K chunk of the hypervisor memory may start at a memory address that is divisible by 4K. This may be referred to as an aligned allocation.
- Embodiments of the invention may also include and employ VM memory, which may be allocated in 4K aligned pages. It is noted that the VM memory may be virtual memory, that is, the VM memory may comprise 4K pages of memory that do not need to be consecutive, and may be referenced by one or more pointers, as discussed below.
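- A minimal sketch of the 4K-alignment assumption described above (the page size constant and helper are illustrative):

```python
# Every page/chunk starts at an address divisible by 4 KiB, so a reference to a
# buffer covers only whole 4K units.
PAGE_SIZE = 4096

def is_aligned(addr: int, size: int) -> bool:
    """True if a buffer at `addr` of `size` bytes occupies only whole 4K pages."""
    return addr % PAGE_SIZE == 0 and size % PAGE_SIZE == 0

assert is_aligned(0x1000, 8192)        # aligned start, whole pages
assert not is_aligned(0x1200, 4096)    # an unaligned start would straddle page boundaries
```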
- the VM 202 OS may receive, or be allocated, memory pages by the hypervisor 204 .
- Memory pages may be referred to herein simply as ‘pages.’
- the list of memory pages associated with a particular VM 202 may change, for example, when unused memory pages are replaced. However, the changes to the memory list may not be apparent to the processes of the VM 202 .
- One example of a page manager that manages a list of memory pages is the VMtools utility by VMware.
- each pointer to a particular page or chunk may be referred to as a reference.
- the total number of pointers to a particular page or chunk which can be 0 or any positive integer, may thus be referred to as the refcount for that page or chunk.
- the system may receive a pointer that points to the IO data in an IO buffer. Rather than copying the IO data from the IO buffer to a journal, for example, the refcount of pages for that IO data is incremented. As a result of incrementing the refcount, the page to which the refcount refers will not be freed at the end of the IO. Note that any non-zero refcount may ensure that the page will be retained.
- the VM memory refcount and/or the hypervisor memory refcount may be incremented as the result of an incoming IO.
- The incrementing of these refcounts is made possible because the IO buffer, the OS virtual memory in the hypervisor, and the VM memory are all 4K multiples in size and 4K aligned.
- Thus, assurance may be had that all of these memory managers are referring to the same full 4K page(s).
- embodiments of the invention may simply provide the VM with an additional page from the hypervisor memory. This process may be referred to as page exchange or page swapping.
- With page exchange or page swapping, instead of copying the IO data, that is, performing a mem-copy, a pointer points to a page of the IO memory buffer, and the VM memory is provided with another page by the hypervisor.
- the page of the IO memory buffer may be released after the splitter journal is evacuated.
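- The following toy Python model (purely illustrative; a real hypervisor memory manager is far more involved, and all names are assumptions) sketches how incrementing a page refcount, rather than mem-copying the IO data, keeps the IO pages alive until the splitter journal is evacuated, after which they can be released:

```python
class Page:
    """Toy stand-in for a 4K memory page tracked by a refcount."""
    def __init__(self, page_id):
        self.page_id = page_id
        self.refcount = 0

    def get(self):
        self.refcount += 1
        return self

    def put(self):
        self.refcount -= 1
        return self.refcount == 0   # True means the page may now be freed or recycled

def intercept_write(io_pages, journal):
    # Instead of mem-copying the IO data, take an extra reference on each page of the
    # IO buffer and record only those references (pointers) in the splitter journal.
    # In practice the VM would also be handed a replacement 4K page from hypervisor
    # memory (page exchange), which is omitted from this toy model.
    journal.append([p.get() for p in io_pages])

def io_completed(io_pages, free_pool):
    # When the write IO completes, the buffer's own reference is dropped; the pages
    # survive because the journal still holds a reference to them.
    for p in io_pages:
        if p.put():
            free_pool.append(p)

def evacuate_entry(journal, free_pool):
    # After the entry has been sent to the replication module, drop the journal's
    # reference, at which point the pages of the IO memory buffer can be released.
    for p in journal.pop(0):
        if p.put():
            free_pool.append(p)

free_pool, journal = [], []
io_buffer = [Page(1).get(), Page(2).get()]   # the IO buffer owns one reference per page
intercept_write(io_buffer, journal)          # refcount -> 2, and no data is copied
io_completed(io_buffer, free_pool)           # refcount -> 1, pages retained for the journal
evacuate_entry(journal, free_pool)           # refcount -> 0, pages recycled
assert [p.page_id for p in free_pool] == [1, 2]
```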
- Embodiments of the invention may reduce the latency such that process 2 (FIG. 2) adds only near-zero latency to the write IO path.
- a near-zero latency includes latencies of about 5 nanoseconds or less.
- live migration of a VM between hosts may be accommodated.
- Such a live migration of a VM between hosts may be effected, for example, by the VMware vMotion platform, although any other software and/or hardware of comparable functionality may be used instead.
- one or more embodiments may include the following feature.
- a ‘host’ embraces a hypervisor, such as a VMware ESX hypervisor.
- a VMware ESX hypervisor may be simply referred to as an ESX host.
- To maintain write order fidelity, that is, to make sure that all of the IOs associated with the VM 302, regardless of where it is hosted, are kept in order, the journal 314 of the replication module 312 must maintain the original order of the IOs received from host-1 304 and received from host-2 306.
- Write order fidelity may be established and maintained in various ways.
- the replication module 312 may be configured such that it may not allow journal evacuation from host-2 306 until host-1 304 has finished evacuating the journal pertaining to the VM 302 and has informed the RPA that evacuation of that journal is complete. In the meantime, host-2 splitter 310 may retry evacuating the journal pertaining to the VM 302 until it succeeds. Since the journal evacuation occurs asynchronously with respect to the production IOs, the journal evacuation processes will not affect the production IOs, as long as there is enough memory to keep the journal on host-2 306 .
- the replication module 312 may allow evacuation of the journal of host-2 306 while host-1 304 is still evacuating its journal. In this case, the replication module 312 may save the journal IOs from host-2 306 in a different location, such as on disk for example, until evacuation of host-1 304 splitter journals is finished.
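- A hedged sketch of the first option described above, in which the replication module simply refuses evacuation from the destination host until the source host reports that its journal for the migrated VM is fully evacuated, follows (the class and method names are assumptions, not the patented implementation):

```python
class ReplicationModule:
    def __init__(self):
        self.active_host = "host-1"   # host currently allowed to evacuate for this VM
        self.journal = []             # replica-side journal holding IOs in arrival order

    def source_evacuation_done(self):
        self.active_host = "host-2"   # the destination host may now evacuate

    def try_evacuate(self, host, ios):
        if host != self.active_host:
            return False              # destination must retry later; production IOs are unaffected
        self.journal.extend(ios)
        return True

rpa = ReplicationModule()
assert not rpa.try_evacuate("host-2", ["io-3"])   # rejected: host-1 is still evacuating
assert rpa.try_evacuate("host-1", ["io-1", "io-2"])
rpa.source_evacuation_done()
assert rpa.try_evacuate("host-2", ["io-3"])       # accepted once host-1 has finished
assert rpa.journal == ["io-1", "io-2", "io-3"]    # original write order preserved
```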
- Write order fidelity may also be established and maintained by marking each IO with a “session number,” which may be an incrementing number assigned by the replication module 312 on the evacuation handshake, that is, when the replication module 312 establishes communication with the journal of the host.
- The session number may be based on both (i) a splitter ID, and (ii) VM status information. For example, if a VM migrates between hosts thus 1→2→1, then the replication module 312 may need to know not only the splitter ID but also the fact that the VM has most recently moved from host-2 306 to host-1 304.
- The assigned session numbers should reflect the fact that even though two sets of IOs from host-1 304 will be evacuated to the replication module 312, those two sets are not processed one after the other at the replication module 312. Rather, to maintain write order fidelity with respect to a VM that has migrated between hosts thus 1→2→1, the processing of the IO sets would be (1) the first IO set from host-1 304, (2) the IO set from host-2 306, and (3) the second IO set from host-1 304.
- While each of the IOs may have an associated timestamp, the use of timestamps may not be a reliable or effective way to establish and maintain write order fidelity of the IOs at the replication module 312, since a clock of host-1 304 may not be in sync with a clock of host-2 306.
- the session number may take various forms, one example of which was discussed above.
- the session number may take the form of a splitter ID that is concatenated to a counter.
- the session number for the three IO sets may be as follows: 1-1, 2-2, and 1-3. The first number of each pair identifies the splitter, and the second number of each pair is the incrementing counter number.
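- The session-number scheme just described can be sketched as follows (illustrative only; the handshake and replay mechanics shown here are assumptions):

```python
from itertools import count

session_counter = count(1)

def open_session(splitter_id):
    # On each evacuation handshake the replication module assigns the next counter
    # value and pairs it with the evacuating splitter's ID.
    return (splitter_id, next(session_counter))

# Three evacuation sessions for the 1 -> 2 -> 1 migration pattern.
sessions = [open_session(1), open_session(2), open_session(1)]
print(sessions)  # [(1, 1), (2, 2), (1, 3)], i.e. session numbers 1-1, 2-2, and 1-3

# Replaying strictly in counter order keeps the two host-1 IO sets separated by the
# host-2 IO set, preserving write order fidelity across the migration.
for splitter_id, counter in sorted(sessions, key=lambda s: s[1]):
    print(f"process IO set from splitter {splitter_id} (session {splitter_id}-{counter})")
```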
- a session number may enable the replication module 312 to finish processing all IOs from a running session before switching to another session. It is noted further that while the discussion of FIG. 3 concerns two hosts, the scope of the invention is not limited to any particular number of hosts. Thus, example embodiments may be extended to any number of hosts that a VM 302 migrates between. In any VM migration scenario however, the IOs may be written to the final replication module journal 314 in order. As discussed below, problems may arise in connection with the migration of a VM from one host to another.
- this example scenario assumes the write order fidelity approach in which the destination host, host-2 306 , is not permitted by the replication module 312 to evacuate until the source host, host-1 304 , has finished evacuation.
- All replication IO activity may be suspended, with the result that the customer data is not protected during this time. That is, IOs written by the VM 302 are not being replicated, and if the VM 302 were to fail for some reason, those IOs may be lost.
- the system may decide to wait for a period of time which may be user-selectable, for example about 3 hours after the last IO was received, for further IOs to come in. If connectivity to host-1 304 is restored before the time period expires, the host-1 304 journal evacuation may proceed. On the other hand, if connectivity is not restored within the specified period of time, the system may decide to resynchronize the VM disks, that is, the system may perform a volume-sweep, or full sweep. Performance of a full sweep will empty the host journals and the replication module journal, but will allow processing of IOs to begin again and will enable saving new snapshots.
- Because IOs may be sent from the splitter to a replication module asynchronously with respect to the IO processes, a circumstance may arise in which information about an IO resides in the splitter memory alone, inside the splitter journals, until the journal portion is sent to the RPA.
- This circumstance may be readily dealt with where an event is expected or planned to occur.
- the splitter of the host may delay the restart of the host until all splitter journal data is evacuated, or saved persistently to disk. Since the host is about to restart and this is a planned restart, the VM or VMs that had run on the host may have already been migrated to another host, or powered off. Thus, there will be no more incoming IOs from the VM(s) that require writing to the splitter journals of the host, and replication to the replication module.
- On the other hand, the restart of the host may occur as a result of an unexpected and unplanned event, such as an ESX host crash scenario, for example.
- the replication system may require a resynchronization of all the VM disks, for all protected VMs running on the ESX host. This is a hypervisor-level process and may be referred to as a “full sweep.”
- the full sweep process may take a relatively long time, measured in hours, depending on the size of the disks, the available bandwidth, and other considerations.
- one approach involves using NVM to save the splitter journal persistently, while still maintaining low-latency access.
- In this approach, the splitter journal, such as the splitter journal 212 of FIG. 2 for example, is persisted in NVM, and the splitter 206 will read the splitter journal 212 upon starting, so that the evacuation of the splitter journal 212 may continue from the same place where it left off before the splitter 206 was restarted.
- This approach may be an effective way to deal with, at least, the unexpected restarts scenario described above.
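- A minimal sketch of the persistent-journal idea follows, using an ordinary file to stand in for NVM and a saved cursor to mark how far evacuation had progressed before the restart (the file layout and field names are assumptions):

```python
import json, os, tempfile

JOURNAL_PATH = os.path.join(tempfile.gettempdir(), "splitter_journal.json")

def persist(entries, cursor):
    """Persist the journal entries and the evacuation cursor (NVM stand-in is a file)."""
    with open(JOURNAL_PATH, "w") as f:
        json.dump({"entries": entries, "cursor": cursor}, f)

def load():
    """Reload the persisted journal state after a splitter restart."""
    with open(JOURNAL_PATH) as f:
        state = json.load(f)
    return state["entries"], state["cursor"]

persist(["io-1", "io-2", "io-3"], cursor=1)   # crash happens after io-1 was evacuated

entries, cursor = load()                      # splitter restarts and reloads the journal
for io in entries[cursor:]:                   # resume evacuation from the saved position
    print("evacuate", io)                     # io-2 and io-3 only; no full sweep is needed
```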
- The NVM used for the splitter journal may be memory-accessed or storage-accessed. Where memory-accessed NVM, such as NVRAM for example, is used, the IO flow may be similar, or identical, to what is indicated in FIG. 2. Further, to continue using the memory-manager method to avoid the need to perform the memcpy process, the whole VM memory may be required to reside in NVM, so this approach may only be used for latency-sensitive apps of the VM. As used herein, a latency-sensitive application includes applications whose operation may be materially impaired by a latency penalty in the range of about 10 μs to about 100 μs or more.
- Alternatively, the hypervisor may be configured to use memcpy to copy the IO data buffer to the splitter journal. This may add some latency, such as on the order of about 1 μs, but may improve the response of the VM in the case of unexpected restarts.
- Where storage-accessed NVM, such as NVMe, is used, process 2 writes the data to the disk and, only after getting an ack (process 2.1), the IO is sent, by process 3, to the production disk at storage 216.
- Using storage-accessed NVM may also require that the memcpy process be performed.
- the replication module cannot access the splitter journal information in the NVM, until the host is back up. Until that happens, it may be assumed that the production disk(s) and replication disk(s) are in an inconsistent state with each other. Similar to the case, discussed above, where there has been a relatively long disconnection between a host and associated replication module, a user-modifiable timeout may be set, after the expiration of which the system will resynchronize the disks and thereby return to a consistent state. In such a case, after the ESX does eventually come back up, the splitter journal information on the NVM may be ignored and reset to a clean journal.
- embodiments of the invention may virtually, or completely, eliminate a latency hit for any-PIT data-protected applications running on a VM. This is a particularly useful feature for latency-sensitive applications running on NVMe and other low-latency VM disks.
- Embodiments of the invention also provide for processes that deal with VM and hypervisor disaster scenarios, and processes that are able to accommodate the migration of one or more VMs between/among multiple hosts.
- Embodiments of the invention may employ processes and memory configurations, such as the use of NVM to persistently store splitter journals, that avoid the need to copy an IO buffer in connection with a replication process.
- Embodiments of the invention may employ NVM to facilitate resumption of a journal evacuation process from the point at which the evacuation process left off as a result of a splitter restart.
- With reference to FIG. 4, details are provided concerning some methods for replicating the data of a protected machine, such as a VM for example, without imposing latency, such as write IO latency, on an IO path between an application of the protected machine and a storage environment.
- One example of such a method is denoted generally at 400 .
- Example embodiments of the method may involve operations of a protected machine, replication module, and storage environment.
- Other embodiments may employ additional, or alternative, components however, and the scope of the invention is not limited to the aforementioned example embodiments.
- An application, which may be a latency-sensitive application that resides on or is hosted by a production machine, such as a VM that is protected by a replication process, issues an IO, such as a write IO for example.
- the method 400 may begin when the IO is intercepted 402 by a splitter IO interception module of a hypervisor.
- a pointer, and IO metadata concerning the IO, such as the identification of the application, and a timestamp, for example, may then be written to a splitter journal 404 on NVM of the hypervisor.
- the splitter journal may reside on a disk, storage, or memory, external to the protected machine.
- the pointer stored in the splitter journal points to the IO data stored in an IO buffer of the hypervisor. As such, there may be no need to copy the IO data from the IO buffer to the splitter journal. Rather, and as noted, only a pointer to the IO data, and IO metadata, are stored in the splitter journal, and the IO data is not copied from the IO buffer to the splitter journal.
- the splitter journal may be stored in storage-accessed NVM.
- the IO data, along with the IO metadata, may have to be copied from the IO buffer to the splitter journal, but there would be no need to use or store a pointer.
- the writing of the IO metadata and pointer to the splitter journal may be a relatively fast process as compared with a process in which the IO data and IO metadata are written to the splitter journal.
- the writing of the pointer and IO metadata to the splitter journal may not impose any material latency to the overall write IO path from the application to the storage environment.
- the IO data in the IO buffer, and the associated IO metadata may then be sent 406 by the splitter IO interception module to a storage environment, such as a production storage environment for example.
- the storage environment may then receive and write 408 the IO data and IO metadata received from the splitter IO interception module. Receipt of the IO data and IO metadata may be acknowledged 410 by the storage environment to the splitter IO interception module, which may receive 412 the acknowledgement.
- The storage of the IO data and the IO metadata at the storage environment may then be acknowledged 414 by the splitter IO interception module to the application that issued the IO.
- the pointer and IO metadata written at 404 to the splitter journal may be evacuated 416 , either individually or in batches, to the replication module.
- the IO data in the IO buffer may be transmitted 416 to the replication module.
- the pointer corresponding to the IO data transmitted to the replication module may be flushed from the splitter journal.
- the IO data in the IO buffer may be transmitted to the replication module before, during, or after, the evacuation.
- the replication module may then receive 418 the IO data and IO metadata, and replicate the IO data and IO metadata 420 to a replication disk.
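- On the replication side, steps 418 and 420 might be sketched as follows (a toy model; the replica disk and any-point-in-time journal structures are assumptions for illustration):

```python
class ReplicaSite:
    def __init__(self, disk_size):
        self.replica_disk = bytearray(disk_size)
        self.any_pit_journal = []   # ordered (metadata, data) pairs enabling any-PIT restore

    def receive(self, metadata, data):
        # Step 418/420: record the IO in the any-PIT journal and apply it to the replica disk.
        self.any_pit_journal.append((metadata, bytes(data)))
        off = metadata["offset"]
        self.replica_disk[off:off + len(data)] = data

site = ReplicaSite(disk_size=16)
site.receive({"volume": "vmdk-1", "offset": 4, "timestamp": 1.0}, b"ABCD")
print(site.replica_disk)            # the replica disk now reflects the write
print(len(site.any_pit_journal))    # the journal retains the history for point-in-time restore
```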
- Embodiment 1 A method, comprising: intercepting an IO issued by an application of a VM, the IO including IO data and IO metadata; storing the IO data in an IO buffer; writing the IO metadata and a pointer, but not the IO data, to a splitter journal in memory, wherein the pointer points to the IO data in the IO buffer; forwarding the IO to storage; and asynchronous with operations occurring along an IO path between the application and storage, evacuating the splitter journal by sending the IO metadata and the IO data from the splitter journal to a replication site.
- Embodiment 2 The method as recited in embodiment 1, wherein writing the pointer and IO metadata to the splitter journal site does not increase a latency associated with the operations between the application and storage.
- Embodiment 3 The method as recited in any of embodiments 1-2, further comprising, asynchronous with operations occurring along an IO path between the application and storage, sending the IO data from the IO buffer to the replication site.
- Embodiment 4 The method as recited in any of embodiments 1-3, further comprising maintaining write order fidelity of incoming IOs from the VM as the VM migrates from a first host to a second host, and maintaining write order fidelity comprises marking each incoming IO with a session number.
- Embodiment 5 The method as recited in any of embodiments 1-4, further comprising receiving IOs from two different hosts as the VM migrates from one of the hosts to the other host, and maintaining write order fidelity of the IOs.
- Embodiment 6 The method as recited in any of embodiments 1-5, further comprising experiencing a crash of the VM and, after restart of the VM, resuming evacuation of the splitter journal at a point where evacuation had previously ceased due to the crash of the VM.
- Embodiment 7 The method as recited in any of embodiments 1-6, wherein after replication of IOs to the replication site has been suspended due to a lack of communication between the VM and the replication site, the method further comprises either: resynchronizing a replication disk with a disk of the VM if communication between the VM and the replication site does not resume within a user-specified time period; or if communication between the VM and the replication site resumes within the user-specified time period, recommencing splitter journal evacuation.
- Embodiment 8 The method as recited in any of embodiments 1-7, wherein the memory comprises NVM.
- Embodiment 9 The method as recited in any of embodiments 1-8, wherein part of the method is performed inside a hypervisor kernel.
- Embodiment 10 The method as recited in any of embodiments 1-9, wherein the IO path comprises a path between the application and a splitter, and a path between the splitter and the storage.
- Embodiment 11 A method for performing any of the operations, methods, or processes, or any portion of any of these, disclosed herein.
- Embodiment 12 A non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform the operations of any one or more of embodiments 1 through 11.
- a computer may include a processor and computer storage media carrying instructions that, when executed by the processor and/or caused to be executed by the processor, perform any one or more of the methods disclosed herein, or any part(s) of any method disclosed.
- embodiments within the scope of the present invention also include computer storage media, which are physical media for carrying or having computer-executable instructions or data structures stored thereon.
- Such computer storage media may be any available physical media that may be accessed by a general purpose or special purpose computer.
- such computer storage media may comprise hardware storage such as solid state disk/device (SSD), RAM, ROM, EEPROM, CD-ROM, flash memory, phase-change memory (“PCM”), or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other hardware storage devices which may be used to store program code in the form of computer-executable instructions or data structures, which may be accessed and executed by a general-purpose or special-purpose computer system to implement the disclosed functionality of the invention. Combinations of the above should also be included within the scope of computer storage media.
- Such media are also examples of non-transitory storage media, and non-transitory storage media also embraces cloud-based storage systems and structures, although the scope of the invention is not limited to these examples of non-transitory storage media.
- Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- As used herein, ‘module’ or ‘component’ may refer to software objects or routines that execute on the computing system.
- the different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system, for example, as separate threads. While the system and methods described herein may be implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated.
- a ‘computing entity’ may be any computing system as previously defined herein, or any module or combination of modules running on a computing system.
- a hardware processor is provided that is operable to carry out executable instructions for performing a method or process, such as the methods and processes disclosed herein.
- the hardware processor may or may not comprise an element of other hardware, such as the computing devices and systems disclosed herein.
- embodiments of the invention may be performed in client-server environments, whether network or local environments, or in any other suitable environment.
- Suitable operating environments for at least some embodiments of the invention include cloud computing environments where one or more of a client, server, or other machine may reside and operate in a cloud environment.
- any one or more of the entities disclosed, or implied, by FIGS. 1 - 4 and/or elsewhere herein, may take the form of, or include, or be implemented on, or hosted by, a physical computing device, one example of which is denoted at 500 .
- Where any of the aforementioned elements comprise or consist of a virtual machine (VM), that VM may constitute a virtualization of any combination of the physical components disclosed in FIG. 5.
- the physical computing device 500 includes a memory 502 which may include one, some, or all, of random access memory (RAM), non-volatile memory (NVM) 504 such as NVRAM for example, read-only memory (ROM), and persistent memory, one or more hardware processors 506 , non-transitory storage media 508 , UI device 510 , and data storage 512 .
- One or more of the memory components 502 of the physical computing device 500 may take the form of solid state device (SSD) storage.
- Applications 514 may be provided that comprise instructions executable by one or more hardware processors 506 to perform any of the operations, or portions thereof, disclosed herein.
- Such executable instructions may take various forms including, for example, instructions executable to perform any method or portion thereof disclosed herein, and/or executable by/at any of a storage site, whether on-premises at an enterprise, or a cloud storage site, client, datacenter, or backup server, to perform any of the functions disclosed herein. As well, such instructions may be executable to perform any of the other operations and methods, and any portions thereof, disclosed herein.
Abstract
Description
-
- 1. Intercept Write IO;
- 2. Send copy of IO to the RPA;
- 3. Ack (acknowledgement) from RPA;
- 4. Send IO to the storage;
- 5. Ack (acknowledgement) from storage; and
- 6. Ack (acknowledge) the IO to the application.
-
- 1. Intercept Write IO (IO data is temporarily stored in an IO data buffer);
- 2. Write the IO metadata and a pointer to the IO data (which remains in the IO data buffer) to the splitter journal;
- 3. Send IO to the storage;
- 4. Ack (acknowledgement) from storage;
- 5. Ack (acknowledge) the IO to the application
- 6. Outside of the main IO flow, send the splitter journal IOs to the replication module asynchronously.
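- The asynchrony of process 6 in the flow above can be sketched as follows (a toy Python model using a background thread and an in-memory queue as stand-ins for the splitter journal and the replication module; none of this is the patented implementation):

```python
import queue, threading

splitter_journal = queue.Queue()
replica_journal = []

def evacuator():
    # Process 6: runs outside the main IO flow, so its latency never blocks the application.
    while True:
        entry = splitter_journal.get()
        if entry is None:                  # sentinel used only to end this example
            break
        replica_journal.append(entry)      # stand-in for sending the entry to the replication module

def write_io(offset, data, storage):
    # Processes 1-2: intercept the write and journal only metadata plus a reference to the data.
    splitter_journal.put({"offset": offset, "length": len(data), "data_ref": data})
    # Processes 3-4: send the IO to the VM disk and receive its ack (modeled as a local write).
    storage[offset:offset + len(data)] = data
    # Process 5: ack the IO to the application without waiting on the replication module.
    return True

worker = threading.Thread(target=evacuator)
worker.start()
storage = bytearray(16)
assert write_io(0, b"fast", storage)   # returns without waiting on the replication module
splitter_journal.put(None)             # stop the background evacuator for this example
worker.join()
print(replica_journal)                 # the journaled entry arrived at the stand-in replication module
```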
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/803,626 US11675612B2 (en) | 2020-02-27 | 2020-02-27 | Achieving near-zero added latency for modern any point in time VM replication |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/803,626 US11675612B2 (en) | 2020-02-27 | 2020-02-27 | Achieving near-zero added latency for modern any point in time VM replication |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210271503A1 US20210271503A1 (en) | 2021-09-02 |
US11675612B2 true US11675612B2 (en) | 2023-06-13 |
Family
ID=77464365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/803,626 Active 2041-09-06 US11675612B2 (en) | 2020-02-27 | 2020-02-27 | Achieving near-zero added latency for modern any point in time VM replication |
Country Status (1)
Country | Link |
---|---|
US (1) | US11675612B2 (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6970987B1 (en) | 2003-01-27 | 2005-11-29 | Hewlett-Packard Development Company, L.P. | Method for storing data in a geographically-diverse data-storing system providing cross-site redundancy |
US8478955B1 (en) | 2010-09-27 | 2013-07-02 | Emc International Company | Virtualized consistency group using more than one data protection appliance |
US8429362B1 (en) | 2011-03-31 | 2013-04-23 | Emc Corporation | Journal based replication with a virtual service layer |
US10108507B1 (en) | 2011-03-31 | 2018-10-23 | EMC IP Holding Company | Asynchronous copy on write |
US8527990B1 (en) * | 2011-04-29 | 2013-09-03 | Symantec Corporation | Systems and methods for migrating virtual machines |
US8806161B1 (en) * | 2011-09-29 | 2014-08-12 | Emc Corporation | Mirroring splitter meta data |
US8600945B1 (en) * | 2012-03-29 | 2013-12-03 | Emc Corporation | Continuous data replication |
US20160342486A1 (en) | 2015-05-21 | 2016-11-24 | Zerto Ltd. | System and method for object-based continuous data protection |
US10191687B1 (en) | 2016-12-15 | 2019-01-29 | EMC IP Holding Company LLC | Adaptive snap-based replication in a storage system |
Also Published As
Publication number | Publication date |
---|---|
US20210271503A1 (en) | 2021-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9928003B2 (en) | Management of writable snapshots in a network storage device | |
US10140039B1 (en) | I/O alignment for continuous replication in a storage system | |
US10067837B1 (en) | Continuous data protection with cloud resources | |
US10133874B1 (en) | Performing snapshot replication on a storage system not configured to support snapshot replication | |
US10191687B1 (en) | Adaptive snap-based replication in a storage system | |
US9405481B1 (en) | Replicating using volume multiplexing with consistency group file | |
US9529885B1 (en) | Maintaining consistent point-in-time in asynchronous replication during virtual machine relocation | |
US9600377B1 (en) | Providing data protection using point-in-time images from multiple types of storage devices | |
US9804934B1 (en) | Production recovery using a point in time snapshot | |
US9146878B1 (en) | Storage recovery from total cache loss using journal-based replication | |
US8627012B1 (en) | System and method for improving cache performance | |
US10235061B1 (en) | Granular virtual machine snapshots | |
US10496487B1 (en) | Storing snapshot changes with snapshots | |
US8996460B1 (en) | Accessing an image in a continuous data protection using deduplication-based storage | |
US8949180B1 (en) | Replicating key-value pairs in a continuous data protection system | |
US20140208012A1 (en) | Virtual disk replication using log files | |
US11868640B2 (en) | Achieving near-zero added latency for any point in time OS kernel-based application replication | |
US10235196B1 (en) | Virtual machine joining or separating | |
US10244069B1 (en) | Accelerated data storage synchronization for node fault protection in distributed storage system | |
US10620851B1 (en) | Dynamic memory buffering using containers | |
US9053033B1 (en) | System and method for cache content sharing | |
US10853314B1 (en) | Overlay snaps | |
US10885061B2 (en) | Bandwidth management in a data storage system | |
US11675612B2 (en) | Achieving near-zero added latency for modern any point in time VM replication | |
US11900140B2 (en) | SmartNIC based virtual splitter ensuring microsecond latencies |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WOLFSON, KFIR;AZARIA, ITAY;SHEMER, JEHUDA;AND OTHERS;SIGNING DATES FROM 20200224 TO 20200227;REEL/FRAME:051955/0117 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001 Effective date: 20200409 |
|
AS | Assignment |
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA Free format text: SECURITY AGREEMENT;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:052771/0906 Effective date: 20200528 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:053311/0169 Effective date: 20200603 Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:052852/0022 Effective date: 20200603 Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;REEL/FRAME:052851/0917 Effective date: 20200603 Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT;REEL/FRAME:052851/0081 Effective date: 20200603 |
|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST AT REEL 052771 FRAME 0906;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058001/0298 Effective date: 20211101 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST AT REEL 052771 FRAME 0906;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058001/0298 Effective date: 20211101 |
|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742 Effective date: 20220329 Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (052851/0917);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0509 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (052851/0917);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0509 Effective date: 20220329 Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (052851/0081);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0441 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (052851/0081);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0441 Effective date: 20220329 Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (052852/0022);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0582 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (052852/0022);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060436/0582 Effective date: 20220329 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |