US20210026546A1 - Enhanced quality of service (QoS) for multiple simultaneous replication sessions in a replication setup - Google Patents

Enhanced quality of service (QoS) for multiple simultaneous replication sessions in a replication setup

Info

Publication number
US20210026546A1
Authority
US
United States
Prior art keywords
replication
sessions
replication sessions
available
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/521,730
Other versions
US10908828B1 (en)
Inventor
David Meiri
Anton Kucherov
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EMC Corp
Original Assignee
EMC IP Holding Co LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US16/521,730
Application filed by EMC IP Holding Co LLC
Assigned to EMC IP Holding Company LLC: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KUCHEROV, ANTON; MEIRI, DAVID
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH: SECURITY AGREEMENT. Assignors: DELL PRODUCTS L.P.; EMC CORPORATION; EMC IP Holding Company LLC
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT: PATENT SECURITY AGREEMENT (NOTES). Assignors: DELL PRODUCTS L.P.; EMC CORPORATION; EMC IP Holding Company LLC
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A.: SECURITY AGREEMENT. Assignors: CREDANT TECHNOLOGIES INC.; DELL INTERNATIONAL L.L.C.; DELL MARKETING L.P.; DELL PRODUCTS L.P.; DELL USA L.P.; EMC CORPORATION; EMC IP Holding Company LLC; FORCE10 NETWORKS, INC.; WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT: SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DELL PRODUCTS L.P.; EMC CORPORATION; EMC IP Holding Company LLC
Publication of US20210026546A1
Publication of US10908828B1
Application granted
Assigned to DELL PRODUCTS L.P.; EMC IP Holding Company LLC; EMC CORPORATION: RELEASE OF SECURITY INTEREST AT REEL 050406 FRAME 421. Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Assigned to EMC IP Holding Company LLC; DELL PRODUCTS L.P.; EMC CORPORATION: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (050724/0571). Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to DELL PRODUCTS L.P.; EMC CORPORATION; EMC IP Holding Company LLC: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169). Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/061 Improving I/O performance
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F 3/0635 Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
    • G06F 3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/065 Replication mechanisms
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0671 In-line storage system
    • G06F 3/0673 Single storage device

Definitions

  • For n replication sessions, determine for each replication session its priority P1, . . . , Pn. This can be a user input or a default. Priorities may range, e.g., from 1 to 10 (where 10 is the highest priority).
  • For each replication session, its replication resource profile is determined: for session i, in order to transmit 1 MB of user data, determine how many MB Mi must be transmitted over the link and how many IO operations Ci are needed. The available system resources are determined as the maximal available bandwidth M and the maximal available IO rate C; here M is measured in MB/sec, and C is measured in round-trip messages per second.
  • Let X1, . . . , Xn represent the throughputs of sessions 1, . . . , n, measured in MB/sec of user data. The first equation reduces these throughputs to a single parameter X. Maximizing X under constraints (b) and (c) results in a formula for X, sketched below after the scaling factors R1 and R2 are described; this in turn determines X1, . . . , Xn.
  • Replication is constrained by both bandwidth and CPU.
  • R1 is related to the constraint on replication resulting from bandwidth and represents a scaling factor that takes into consideration the entire bandwidth M as well as all the bandwidth used by the different sessions.
  • R2 is related to the constraint on replication resulting from CPU and represents a scaling factor that takes into consideration the entire CPU utilization, as measured by the maximal possible round-trip messages per second C, as well as all the round-trip messages per second used by the different sessions.
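  • The equations referenced in the passage above are not reproduced in this text. A plausible reconstruction, assuming each session's throughput is made proportional to its priority and that R1 and R2 are the bandwidth and IO-rate scaling factors just described, is the following; it is an inference from the surrounding definitions, not the patent's verbatim formulas.

```latex
% Reconstruction under stated assumptions, not the patent's verbatim equations.
% (a) throughput proportional to priority; (b) bandwidth constraint; (c) IO-rate constraint.
\[
  X_i = P_i X \quad \text{(a)}, \qquad
  \sum_{i=1}^{n} M_i X_i \le M \quad \text{(b)}, \qquad
  \sum_{i=1}^{n} C_i X_i \le C \quad \text{(c)}
\]
\[
  X = \min(R_1, R_2), \qquad
  R_1 = \frac{M}{\sum_{i=1}^{n} P_i M_i}, \qquad
  R_2 = \frac{C}{\sum_{i=1}^{n} P_i C_i}
\]
```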
  • FIG. 4 shows an exemplary computer 400 (e.g., physical or virtual) that can perform at least part of the processing described herein.
  • the computer 400 includes a processor 402, a volatile memory 404, a non-volatile memory 406 (e.g., hard disk or flash), an output device 407, and a graphical user interface (GUI) 408 (e.g., a mouse, a keyboard, and a display).
  • the non-volatile memory 406 stores computer instructions 412 , an operating system 416 and data 418 .
  • the computer instructions 412 are executed by the processor 402 out of volatile memory 404 .
  • an article 420 comprises non-transitory computer-readable instructions.
  • Processing may be implemented in hardware, software, or a combination of the two. Processing may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
  • the system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers).
  • Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system.
  • the programs may be implemented in assembly or machine language.
  • the language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • a computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer.
  • Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate.
  • Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).

Abstract

In one aspect, implementing enhanced QoS for multiple replication sessions in a replication setup includes, for each of a number of replication sessions simultaneously implemented via a storage system, determining an assigned priority level and calculating a corresponding resource profile. The resource profile specifies a minimum required amount of bandwidth and a minimum amount of input/output (IO) operations for the replication session. An aspect also includes determining available system resources for an aggregate of the replication sessions. The available system resources indicate a maximum available amount of bandwidth and a maximum available IO rate across the storage system. An aspect further includes apportioning resources among the replication sessions as a function of collective priority levels, resource profiles, and the available system resources.

Description

    BACKGROUND
  • One goal of using Quality of Service (QoS) policies in a storage system is to balance the input/output (IO) rate or latency between different storage units in the system. However, despite the use of QoS policies, it is often the case that the actual bandwidth or latency observed in the system is out of balance with the desired QoS bandwidth or latency. This can be due to factors such as changes in the resources needed for each type of IO and/or unanticipated changes occurring in the network of a replication setup between two storage systems.
  • In certain types of replication, such as hash-based replication, it is possible that one session has high deduplication and another session has low deduplication. Replication is bounded by CPU and bandwidth. With high deduplication, a bottleneck is likely to be at the CPU. With low deduplication, the bottleneck is likely to be bandwidth. To reduce bottlenecks, the system may divide up resources; however, without a priori knowledge of required resources, a blind division can lead to unused resources.
  • SUMMARY
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described herein in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
  • One aspect may provide a method for enhanced QoS for multiple replication sessions in a replication setup of a storage system. The method includes, for each of a number of replication sessions simultaneously implemented via the storage system, determining an assigned priority level and calculating a corresponding resource profile. The resource profile specifies a minimum required amount of bandwidth and a minimum amount of input/output (IO) operations for the replication session. The method also includes determining available system resources for an aggregate of the replication sessions. The available system resources indicate a maximum available amount of bandwidth and a maximum available IO rate across the storage system. The method further includes apportioning resources among the replication sessions as a function of collective priority levels, resource profiles, and the available system resources.
  • Another aspect may provide a system for enhanced QoS for multiple replication sessions in a replication setup for a storage system. The system includes a memory having computer-executable instructions. The system also includes a processor operated by a storage system. The processor executes the computer-executable instructions. When executed by the processor, the computer-executable instructions cause the processor to perform operations. The operations include, for each of a number of replication sessions simultaneously implemented via the storage system, determining an assigned priority level and calculating a corresponding resource profile. The resource profile specifies a minimum required amount of bandwidth and a minimum amount of input/output (IO) operations for the replication session. The operations also include determining available system resources for an aggregate of the replication sessions. The available system resources indicate a maximum available amount of bandwidth and a maximum available IO rate across the storage system. The operations further include apportioning resources among the replication sessions as a function of collective priority levels, resource profiles, and the available system resources.
  • Another aspect may provide a computer program product for enhanced QoS for multiple replication sessions in a replication setup for a storage system. The computer program product is embodied on a non-transitory computer readable medium. The computer program product includes instructions that, when executed by a computer at a storage system, cause the computer to perform operations. The operations include, for each of a number of replication sessions simultaneously implemented via the storage system, determining an assigned priority level and calculating a corresponding resource profile. The resource profile specifies a minimum required amount of bandwidth and a minimum amount of input/output (IO) operations for the replication session. The operations also include determining available system resources for an aggregate of the replication sessions. The available system resources indicate a maximum available amount of bandwidth and a maximum available IO rate across the storage system. The operations further include apportioning resources among the replication sessions as a function of collective priority levels, resource profiles, and the available system resources.
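  • The aspects above can be summarized as a single apportioning operation whose inputs are the per-session priorities and resource profiles and the aggregate available resources. A minimal sketch of those inputs and the operation's signature follows; the type and function names (ResourceProfile, SystemResources, apportion) are illustrative assumptions rather than names used by the patent, and a concrete allocation rule is sketched with the worked example near the end of the detailed description.

```python
# Illustrative sketch of the quantities named in the summary; names are assumptions.
from dataclasses import dataclass
from typing import Dict

@dataclass
class ResourceProfile:
    min_bandwidth_mb_s: float   # minimum required bandwidth for the replication session
    min_io_ops_s: float         # minimum required IO operations for the replication session

@dataclass
class SystemResources:
    max_bandwidth_mb_s: float   # maximum available bandwidth across the storage system
    max_io_rate_s: float        # maximum available IO rate across the storage system

def apportion(priorities: Dict[str, int],
              profiles: Dict[str, ResourceProfile],
              available: SystemResources) -> Dict[str, float]:
    """Return a per-session resource allocation computed as a function of the
    collective priority levels, resource profiles, and available system resources."""
    ...
```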
  • BRIEF DESCRIPTION OF THE DRAWING FIGURES
  • Objects, aspects, features, and advantages of embodiments disclosed herein will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which like reference numerals identify similar or identical elements. Reference numerals that are introduced in the specification in association with a drawing figure may be repeated in one or more subsequent figures without additional description in the specification in order to provide context for other features. For clarity, not every element may be labeled in every figure. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments, principles, and concepts. The drawings are not meant to limit the scope of the claims included herewith.
  • FIG. 1 is a block diagram illustrating one example of a content-based storage system configured for implementing enhanced QoS for multiple replication sessions in a replication setup in accordance with an embodiment;
  • FIG. 2 is a block diagram depicting two replication sessions configured for implementing enhanced QoS for multiple replication sessions in a replication setup in accordance with an embodiment;
  • FIG. 3 is a flow diagram illustrating a process for implementing enhanced QoS for multiple replication sessions in a replication setup in accordance with an embodiment; and
  • FIG. 4 is a block diagram of an illustrative computer that can perform at least a portion of the processing described herein.
  • DETAILED DESCRIPTION
  • Before describing embodiments of the concepts, structures, and techniques sought to be protected herein, some terms are explained. The following description includes a number of terms for which the definitions are generally known in the art. However, the following glossary definitions are provided to clarify the subsequent description and may be helpful in understanding the specification and claims.
  • As used herein, the term “storage system” is intended to be broadly construed so as to encompass, for example, private or public cloud computing systems for storing data as well as systems for storing data comprising virtual infrastructure and those not comprising virtual infrastructure. As used herein, the terms “client,” “host,” and “user” refer, interchangeably, to any person, system, or other entity that uses a storage system to read/write data, as well as issue requests for configuration of storage units in the storage system. In some embodiments, the term “storage device” may also refer to a storage array including multiple storage devices. In certain embodiments, a storage medium may refer to one or more storage mediums such as a hard drive, a combination of hard drives, flash storage, combinations of flash storage, combinations of hard drives, flash, and other storage devices, and other types and combinations of computer readable storage mediums, including those yet to be conceived. A storage medium may also refer to both physical and logical storage mediums, may include multiple levels of virtual-to-physical mappings, and may be or include an image or disk image. A storage medium may be computer-readable, and may also be referred to herein as a computer-readable program medium. Also, a storage unit may refer to any unit of storage including those described above with respect to the storage devices, as well as storage volumes, logical drives, containers, or any unit of storage exposed to a client or application. A storage volume may be a logical unit of storage that is independently identifiable and addressable by a storage system.
  • In certain embodiments, the term “IO request” or simply “IO” may be used to refer to an input or output request, such as a data read or data write request or a request to configure and/or update a storage unit feature. A feature may refer to any service configurable for the storage system.
  • In certain embodiments, a storage device may refer to any non-volatile memory (NVM) device, including hard disk drives (HDDs), solid state drives (SSDs), flash devices (e.g., NAND flash devices), and similar devices that may be accessed locally and/or remotely (e.g., via a storage attached network (SAN), also referred to herein as a storage array network).
  • In certain embodiments, a storage array (sometimes referred to as a disk array) may refer to a data storage system that is used for block-based, file-based, or object storage, where storage arrays can include, for example, dedicated storage hardware that contains spinning hard disk drives (HDDs), solid-state disk drives, and/or all-flash drives. Flash, as is understood, is a solid-state (SS) random access media type that can read any address range with no latency penalty, in comparison to a hard disk drive (HDD), which has physical moving components that require relocation when reading from different address ranges, thus significantly increasing the latency for random IO data. An exemplary content addressable storage (CAS) array is described in commonly assigned U.S. Pat. No. 9,208,162 (hereinafter the “'162 patent”), which is hereby incorporated by reference.
  • In certain embodiments, a data storage entity may be any one or more of a file system, object storage, a virtualized device, a logical unit, a logical unit number, a logical volume, a logical device, a physical device, and/or a storage medium.
  • While vendor-specific terminology may be used herein to facilitate understanding, it is understood that the concepts, techniques, and structures sought to be protected herein are not limited to use with any specific commercial products. In addition, to ensure clarity in the disclosure, well-understood methods, procedures, circuits, components, and products are not described in detail herein.
  • The phrases “such as,” “for example,” “e.g.,” “exemplary,” and variants thereof are used herein to describe non-limiting embodiments and are used herein to mean “serving as an example, instance, or illustration.” Any embodiments herein described via these phrases and/or variants are not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. In addition, the word “optionally” is used herein to mean that a feature or process, etc., is provided in some embodiments and not provided in other embodiments. Any particular embodiment of the invention may include a plurality of “optional” features unless such features conflict.
  • As described above, the embodiments described herein provide a technique for implementing enhanced QoS for multiple replication sessions in a replication setup. In a storage system that implements data replication, there are typically many links through which replication requests can be processed. Each of these links may experience differences in throughput and latency due to conditions such as the different media used for the link, the amount of work already sent to the link, link issues, or load on the target.
  • The embodiments enable a system operating multiple replication sessions to apportion the available system resources among the individual replication sessions as a function of each session's priority levels, resource profiles, and available system resources. The apportioning enables each of the replication sessions to operate at its optimal requirements while remaining within the constraints of the overall system available resources.
  • Turning now to FIG. 1, a content-addressable storage system for implementing enhanced QoS for multiple replication sessions in a replication setup will now be described. In an embodiment, the content-addressable storage system may be implemented using a storage architecture, such as XtremIO by Dell EMC of Hopkinton, Mass. For purposes of illustration, the system 100 is described herein as performing replication sessions in any type and/or combination of replication modes (e.g., synchronous, asynchronous, active/active).
  • The storage system 100 may include a plurality of modules 104, 106, 108, and 110, a plurality of storage units 112A-112n, which may be implemented as a storage array, and a primary storage 118. In some embodiments, the storage units 112A-112n may be provided as, e.g., storage volumes, logical drives, containers, or any units of storage that are exposed to a client or application (e.g., one of clients 102).
  • In one embodiment, modules 104, 106, 108, and 110 may be provided as software components, e.g., computer program code that, when executed on a processor, may cause a computer to perform functionality described herein. In a certain embodiment, the storage system 100 includes an operating system (OS) (shown generally in FIG. 4), and the one or more of the modules 104, 106, 108, and 110 may be provided as user space processes executable by the OS.
  • In other embodiments, one or more of the modules 104, 106, 108, and 110 may be provided, at least in part, as hardware, such as a digital signal processor (DSP) or an application specific integrated circuit (ASIC) configured to perform functionality described herein. It is understood that the modules 104, 106, 108, and 110 may be implemented as a combination of software components and hardware components. Any number of routing, control, and data modules 104, 106, and 108, respectively, may be implemented in the system 100 in order to realize the advantages of the embodiments described herein.
  • The routing modules 104 may be configured to terminate storage and retrieval operations and distribute commands to the control modules 106 that may be selected for operations in such a way as to retain balanced usage within the system. The control modules 106 may be communicatively coupled to one or more routing modules 104 and the routing modules 104, in turn, may be communicatively coupled to one or more storage units 112A-112n.
  • In embodiments, the control modules 106 select an appropriate routing module 104 to send a replication IO request from a client 102. The routing module 104 receiving the replication IO request sends the IO request to a data module 108 for execution and returns results to the control module 106. The requests may be sent using SCSI or similar means.
  • The control module 106 may control execution of read and write commands to the storage units 112A-112n through the routing modules 104. The data modules 108 may be connected to the storage units 112A-112n and, under control of the respective control module 106, may pass data to and/or from the storage units 112A-112n via suitable storage drivers (not shown).
  • Data module 108 may be communicatively coupled to corresponding control modules 106, routing modules 104, and the management module 110. In embodiments, the data module 108 is configured to perform the actual read/write (R/W) operations by accessing the storage units 112A-112n attached to it.
  • As indicated above, the data module 108 performs read/write operations with respect to one or more storage units 112A-112n. In embodiments, the storage system 100 performs replication sessions in synchronous, asynchronous, or metro replication mode in which one or more of the storage units 112A-112n may be considered source devices and others of the storage units 112A-112n may be considered target devices to which data is replicated from the source devices. The storage system 100 may be configured to perform native replication.
  • The management module 110 may be configured to monitor and track the status of various hardware and software resources within the storage system 100. In some embodiments, the management module 110 may manage the allocation of memory by other modules (e.g., routing modules 104, control modules 106, and data modules 108).
  • The primary memory 118 can be any type of memory having access times that are faster compared to the storage units 112A-112n. In some embodiments, primary memory 118 may be provided as dynamic random-access memory (DRAM). In certain embodiments, primary memory 118 may be provided as synchronous DRAM (SDRAM). In one embodiment, primary memory 118 may be provided as double data rate SDRAM (DDR SDRAM), such as DDR3 SDRAM. These differing types of memory are shown generally in FIG. 1 as 116A-116n.
  • In some examples, the system 100 may employ more than a single type of memory technology, including a mix of more than one Flash technology (e.g., single level cell (SLC) flash and multilevel cell (MLC) flash), and a mix of Flash and DRAM technologies. In certain embodiments, data mapping may optimize performance and life span by taking advantage of the different access speeds and different write/erase cycle limitations of the various memory technologies.
  • Also shown in the system 100 of FIG. 1 is a database 120 that provides session information, such as session resource profile information and assigned session priorities, which the system uses to allocate bandwidth and CPU resources to each of the sessions. The session resource profile information may be derived, in part, from statistical session data. These elements are described further with respect to FIGS. 2 and 3.
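  • As a purely hypothetical illustration, a per-session record in database 120 might combine the assigned priority, the derived resource profile, and the statistics the profile was derived from; all field names below are assumptions chosen for this sketch, not names from the patent.

```python
# Hypothetical shape of a per-session record in database 120; field names are assumptions.
session_record = {
    "session_id": "A",
    "priority": 7,                       # assigned priority, e.g., on a 1-10 scale
    "resource_profile": {
        "link_mb_per_user_mb": 1.0,      # MB sent on the link per 1 MB of user data
        "round_trips_per_user_mb": 1.0,  # replication round-trip IOs per 1 MB of user data
    },
    "session_stats": {                   # statistical session data the profile is derived from
        "io_count": 125_000,
        "avg_io_size_kb": 16,
        "user_mb_transmitted": 2_000,
    },
}
```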
  • Turning now to FIG. 2, a portion of a system 200 (e.g., the system 100 of FIG. 1) for implementing enhanced QoS for multiple replication sessions in a replication setup will now be described.
  • As shown in FIG. 2, the system 200 includes two replication sessions 202 (A) and 204 (B), each of which may be implemented by one or more of the modules shown in FIG. 1. While only two replication sessions are shown in FIG. 2, it will be understood that any number of sessions may be simultaneously implemented in the system in order to realize the advantages of the embodiments described herein.
  • Also shown in FIG. 2 are links 208A-208E, which communicatively connect each of the sessions A and B to respective storage units 212A-212C. Each of the links may be implemented as serial data cables or wires. In other embodiments, the links may be implemented over a wireless network.
  • The storage units 212A-212C are storage units of a destination storage array 210 in which data from a source device is replicated to the destination storage array 210. In one embodiment, the destination storage array may be identical to the source storage array; however, this is not required. In an alternative embodiment, for example, the destination storage array may be different than the source storage array (e.g., the destination storage array may have a different architecture or may be manufactured by a different vendor).
  • Turning now to FIG. 3, a flow diagram 300 for implementing enhanced QoS for multiple active replication sessions in a replication setup will now be described in accordance with an embodiment. The process 300 of FIG. 3 assumes that blocks 302 and 304 are performed for each replication session (e.g., session A and session B).
  • The multiple active replication sessions may be synchronous, asynchronous, or metro modes of replication. In one embodiment, the replication may be hash-based replication. In standard, non-hash-based replication, user data is transported to the target either without any changes or with compression (e.g., an 8 KB page of user data may be transported as 8 KB if not compressed and 5 KB when compressed). While the compression savings are significant, the compressed data still requires significant bandwidth. With hash-based replication, it may be possible to transfer an 8 KB page of user data by sending a few bytes, such as a 20-byte SHA1 signature. This almost completely eliminates the bandwidth requirement. However, to achieve this highly desirable savings, the page being transferred needs to be on the target, the source needs to be able to recognize that the page is on the target, the target needs to be able to verify that a hash signature indeed represents a page that already resides on the target, and a backup mechanism must be in place in case the hash-based transfer fails (i.e., the hash ends up missing on the target, requiring a normal full-page or compressed-page transfer). This means that hash-based transfer requires multiple round-trip IOs across the replication links and, as a result, is more CPU intensive. Moreover, different sessions may see different benefits, as the chance of having a hash signature that already exists on the target depends highly on the type of data being transferred. For example, while one session may transfer virtual desktop images that have a high likelihood of deduplication, resulting in many hash-based transfers, another session may transfer database or image data that has a low likelihood of deduplication. Thus, different sessions may benefit differently from hash-based replication.
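  • A rough per-page cost comparison makes the tradeoff concrete; the 8 KB, 5 KB, and 20-byte figures follow the passage above, while the round-trip counts are illustrative assumptions rather than numbers from the patent.

```python
# Rough illustration of the bandwidth vs. round-trip tradeoff for one 8 KB page.
transfer_modes = {
    #                   (KB on the wire, round trips)
    "full page":        (8.0,  1),
    "compressed page":  (5.0,  1),
    "hash-based hit":   (0.02, 3),   # ~20-byte signature, extra round trips to verify
}

for mode, (kb_on_wire, round_trips) in transfer_modes.items():
    print(f"{mode:>15}: {kb_on_wire:5.2f} KB on the link, {round_trips} round trip(s)")

# A hash-based hit saves nearly all link bandwidth but costs more round-trip IOs,
# which is why high-deduplication sessions tend to be CPU bound while
# low-deduplication sessions tend to be bandwidth bound.
```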
  • In block 302 of FIG. 3, for each of a number of replication sessions, the process 300 determines a priority level assigned to that session. In a replication setup, one goal of Quality of Service (QoS) is to balance the IO rate or latency between different storage elements on the same array based on different policies. For instance, without QoS a preferred user may end up consuming fewer resources than a lower-priority user because the lower-priority user is pushing in far more IO; in that case, the preferred user would experience lower performance and higher latency. There are two common ways to enable QoS: limiting the host bandwidth per client (maximum-based QoS), and assigning different levels of service (e.g., Platinum, Gold, Silver, etc.) and prioritizing host bandwidth based on those levels of service using a different scheduler queue for each type. It will be understood that in some instances the same priority may be assigned to more than one session. In other setups, there may be multiple levels of priority (e.g., 1-10, where 10 is the maximum) set for different sessions.
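  • A minimal sketch of the two QoS approaches mentioned above is given below; the tier names, bandwidth caps, and the 1-10 priority scale are illustrative assumptions rather than required values:

```python
# Maximum-based QoS: a hard cap on host bandwidth per client (MB/sec).
MAX_BANDWIDTH_PER_CLIENT = {"client-a": 100, "client-b": 50}

# Service-level-based QoS: named tiers mapped onto a 1-10 priority scale,
# where 10 is the highest priority (values here are illustrative).
SERVICE_LEVEL_PRIORITY = {"Platinum": 10, "Gold": 7, "Silver": 4}


def priority_for(service_level: str, default: int = 1) -> int:
    """Return the priority level assigned to a replication session."""
    return SERVICE_LEVEL_PRIORITY.get(service_level, default)
```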
  • In block 304, the process 300 calculates, for each replication session, a corresponding resource profile that is specific to that session. The resource profile specifies the amount of bandwidth and the number of IO operations required for the session. The amount of bandwidth required corresponds to user data, as opposed to other types of data such as application data generated by the replication engine; typically, it is mostly user data that is sent on the links. Application data may include hash signatures, information about the replication state, address and volume identifiers for replication data, and the like. The resource profiles may be calculated for varying levels of deduplication among the replication sessions; these levels of deduplication are described further herein. The amount of bandwidth and the number of IO operations required for a session may be calculated by collecting statistical information about previous replication sessions, such as the number of IO operations performed, the size of those IO operations, and the amount of user data transmitted.
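  • The per-session resource profile might, for example, be derived from such collected statistics as sketched below; the field and function names are illustrative assumptions, and the two derived ratios correspond to the per-session factors Mi and Ci used in the formula later in this description:

```python
from dataclasses import dataclass


@dataclass
class SessionStats:
    """Statistics collected from previous replication activity (illustrative)."""
    user_mb_transferred: float   # MB of user data replicated
    link_mb_consumed: float      # MB actually sent on the links
    round_trips: int             # replication round-trip messages issued


@dataclass
class ResourceProfile:
    mb_per_user_mb: float        # Mi: link MB needed to move 1 MB of user data
    io_per_user_mb: float        # Ci: round trips needed to move 1 MB of user data


def profile_from_stats(stats: SessionStats) -> ResourceProfile:
    """Derive a session-specific resource profile from collected statistics."""
    return ResourceProfile(
        mb_per_user_mb=stats.link_mb_consumed / stats.user_mb_transferred,
        io_per_user_mb=stats.round_trips / stats.user_mb_transferred,
    )
```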
  • In block 306, the process 300 determines the available system resources for the aggregate of the replication sessions. The available system resources may specify the maximum amount of bandwidth and IO rate available for the system.
  • In block 308, the process 300 apportions the available system resources (from block 306) among the individual replication sessions as a function of the priority levels, the resource profiles, and the available system resources. The apportioning enables each of the replication sessions to operate at its optimal requirements while remaining within the constraints of the overall available system resources. The process 300 is performed iteratively over time, as bandwidth requirements can change periodically.
  • An example of the process 300 will now be described with respect to particular priorities, profiles, and system resources. Consider a replication setup that is capable of operating at 200 MB/sec and 1,000 replication round-trip messages per second using one or more links. Using the system 200 of FIG. 2, it is assumed that there are two simultaneous replication sessions A and B. The profile of session A is low deduplication, where 1 MB of user data equals 1 MB on the link 208A with a single round trip. The profile of session B is very high deduplication, where 1 MB of user data equals 50 KB on the links 208B-208E with four round trips.
  • In order for session A to transfer X MB/sec of user data, it must consume X MB/sec of bandwidth and X round-trip messages per second. In order for session B to transfer Y MB/sec of user data, it must consume Y*0.05 MB/sec of bandwidth and 4*Y round-trip messages per second. Thus, to remain within the above-referenced system resources, the requirements are X+Y*0.05<=200 and X+4*Y<=1,000.
  • In one example, suppose session A has the same priority as session B. Therefore it is desirable that they both transfer user data at the same rate. Hence, X=Y. Solving the above for maximizing X and Y we get approximately X=Y=190. This means that both sessions can work simultaneously, each transmitting 190 MB/sec of user data. Of course, the first session will also transmit 190 MB/sec on the link and consume 190 round trip messages per second, while the second one will transmit only 9.5 MB/sec on the link while consuming 760 round trip messages per second. Together, the two sessions are within the available resources. The rates established above are the maximum available under the given constraints. It is clear that in this example the bottleneck is the link bandwidth—the two sessions reach 199.5 MB/sec on the links.
  • In a separate example, suppose A and B have the same profiles as above, but session B has a priority that is double that of session A. In this case, Y=2*X. Solving the equations, it is evident that the bottleneck is the CPU (X=111, Y=222). In other words, session A will transmit 111 MB/sec of user data while consuming 111 MB/sec on the link and 111 round-trip messages per second. Session B will transmit 222 MB/sec of user data while consuming only about 11 MB/sec on the link and 888 round-trip messages per second. Together, the two sessions use up almost all of the available round-trip message resources of the storage system (999 round-trip messages per second).
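  • The two examples above can be verified with a few lines of arithmetic; the sketch below simply solves the stated constraints for the two priority ratios and reports the resulting per-session user-data rates:

```python
M, C = 200.0, 1000.0        # available link MB/sec and round trips/sec
M_A, C_A = 1.0, 1.0         # session A: 1 MB on the link and 1 round trip per user MB
M_B, C_B = 0.05, 4.0        # session B: 50 KB on the link and 4 round trips per user MB


def max_rates(priority_a: float, priority_b: float) -> tuple[float, float]:
    """Maximum user-data rates (MB/sec) with X_B = (P_B / P_A) * X_A."""
    ratio = priority_b / priority_a
    bound_bw = M / (M_A + ratio * M_B)   # bandwidth bound on X_A
    bound_io = C / (C_A + ratio * C_B)   # round-trip (CPU) bound on X_A
    x_a = min(bound_bw, bound_io)
    return x_a, ratio * x_a


print(max_rates(1, 1))   # ~ (190.5, 190.5): link bandwidth is the bottleneck
print(max_rates(1, 2))   # ~ (111.1, 222.2): round-trip (CPU) budget is the bottleneck
```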
  • A formula for calculating the above apportionment in the general case will now be described. Given n replication sessions, determine for each replication session its priority P1, . . . , Pn. This can be a user input or a default; priorities may range, e.g., from 1-10 (where 10 is the highest priority). For each replication session, its replication resource profile is determined: for session i, in order to transmit 1 MB of user data, determine how many MB Mi must be transmitted over the link and how many IO operations Ci are required. The available system resources are determined as the maximal available bandwidth M and the maximal available IO rate C, where M is measured in MB/sec and C is measured in round-trip messages per second.
  • Let X1, . . . , Xn represent the throughput of sessions 1, . . . , n measured in MB/sec of user data.

  • Xi/Pi=Xj/Pj for all i,j  (a)

  • SUM(Xi*Mi)<M  (b)

  • SUM(Xi*Ci)<C  (c)
  • The first equation reduces the problem to a single parameter X. For example, X can be set as X=X1, and then Xj=Pj*X/P1. Maximizing X under constraints (b) and (c) results in the following formula for X, which in turn determines X1, . . . , Xn:

  • R1=M/SUM(Pi*Mi/P1)

  • R2=C/SUM(Pi*Ci/P1)

  • X=MIN(R1,R2)
  • Replication is constrained by both bandwidth and CPU. R1 relates to the bandwidth constraint on replication and represents a scaling factor that takes into consideration the total available bandwidth M as well as the bandwidth consumed by the different sessions. Similarly, R2 relates to the CPU constraint on replication and represents a scaling factor that takes into consideration the total CPU budget, as measured by the maximal possible round-trip messages per second C, as well as the round-trip messages per second consumed by the different sessions. Using these two scaling factors, two different resources (bandwidth and CPU) can be expressed in a single formula, which provides a generic way to compare two otherwise disparate factors.
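  • A compact sketch of the general computation follows, assuming the inputs are supplied as parallel lists of priorities and per-session link and round-trip factors; it reproduces the two-session examples given earlier:

```python
def apportion(P, Mi, Ci, M, C):
    """Return per-session user-data rates X1..Xn (MB/sec).

    P  : priority per session (e.g., 1-10, 10 highest)
    Mi : link MB needed per MB of user data, per session
    Ci : round trips needed per MB of user data, per session
    M  : maximum available link bandwidth (MB/sec)
    C  : maximum available round-trip message rate (per second)
    """
    p1 = P[0]
    r1 = M / sum(p * m / p1 for p, m in zip(P, Mi))   # R1: bandwidth scaling factor
    r2 = C / sum(p * c / p1 for p, c in zip(P, Ci))   # R2: round-trip scaling factor
    x1 = min(r1, r2)                                  # X = MIN(R1, R2)
    return [p * x1 / p1 for p in P]                   # Xi = Pi * X / P1


# Reproduces the two-session examples above:
print(apportion([1, 1], [1.0, 0.05], [1, 4], 200, 1000))  # ~[190.5, 190.5]
print(apportion([1, 2], [1.0, 0.05], [1, 4], 200, 1000))  # ~[111.1, 222.2]
```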
  • FIG. 4 shows an exemplary computer 400 (e.g., physical or virtual) that can perform at least part of the processing described herein. The computer 400 includes a processor 402, a volatile memory 404, a non-volatile memory 406 (e.g., hard disk or flash), an output device 407, and a graphical user interface (GUI) 408 (e.g., a mouse, a keyboard, and a display). The non-volatile memory 406 stores computer instructions 412, an operating system 416, and data 418. In one example, the computer instructions 412 are executed by the processor 402 out of the volatile memory 404. In one embodiment, an article 420 comprises non-transitory computer-readable instructions.
  • Processing may be implemented in hardware, software, or a combination of the two. Processing may be implemented in computer programs executed on programmable computers/machines that each includes a processor, a storage medium or other article of manufacture that is readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code may be applied to data entered using an input device to perform processing and to generate output information.
  • The system can perform processing, at least in part, via a computer program product, (e.g., in a machine-readable storage device), for execution by, or to control the operation of, data processing apparatus (e.g., a programmable processor, a computer, or multiple computers). Each such program may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the programs may be implemented in assembly or machine language. The language may be a compiled or an interpreted language and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network. A computer program may be stored on a storage medium or device (e.g., CD-ROM, hard disk, or magnetic diskette) that is readable by a general or special purpose programmable computer for configuring and operating the computer when the storage medium or device is read by the computer. Processing may also be implemented as a machine-readable storage medium, configured with a computer program, where upon execution, instructions in the computer program cause the computer to operate.
  • Processing may be performed by one or more programmable processors executing one or more computer programs to perform the functions of the system. All or part of the system may be implemented as special-purpose logic circuitry (e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit)).
  • Having described exemplary embodiments of the invention, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may also be used. The embodiments contained herein should not be limited to the disclosed embodiments but rather should be limited only by the spirit and scope of the appended claims. All publications and references cited herein are expressly incorporated herein by reference in their entirety.
  • Elements of different embodiments described herein may be combined to form other embodiments not specifically set forth above. Various elements, which are described in the context of a single embodiment, may also be provided separately or in any suitable subcombination. Other embodiments not specifically described herein are also within the scope of the following claims.

Claims (18)

What is claimed is:
1. A method for enhanced quality of service (QoS) for multiple replication sessions in a replication setup of a storage system, the method comprising:
for each of a number of replication sessions simultaneously implemented via the storage system:
determining, by a processor-based system, an assigned priority level; and
calculating, by the processor-based system, a corresponding resource profile, the resource profile specifying a minimum required amount of bandwidth and a minimum amount of input/output (IO) operations for the replication session;
determining, by the processor-based system, available system resources for an aggregate of the replication sessions, the available system resources indicating a maximum available amount of bandwidth and a maximum available IO rate across the storage system; and
apportioning, by the processor-based system, resources among the replication sessions as a function of collective priority levels, resource profiles and the available system resources such that each of the replication sessions operate at their optimal requirements while remaining within the constraints of the overall system available resources.
2. The method of claim 1, wherein the resource profile is calculated for varying levels of deduplication among the replication sessions.
3. The method of claim 1, wherein the calculating the resource profile includes collecting statistical information on previous replication sessions, the statistical information including a number of IO operations performed, a size of the IO operations, and an amount of user data corresponding to the IO operations.
4. The method of claim 1, wherein the required bandwidth corresponds to user data.
5. The method of claim 1, wherein the assigned priority level differs among the replication sessions.
6. The method of claim 1, wherein the replication sessions are hash-based replication sessions.
7. A system for implementing enhanced quality of service (QoS) for multiple replication sessions in a replication setup of a storage system, the system comprising:
a memory comprising computer-executable instructions; and
a processor operable by a storage system, the processor executing the computer-executable instructions, the computer-executable instructions when executed by the processor cause the processor to perform operations comprising:
for each of a number of replication sessions simultaneously implemented via the storage system:
determining an assigned priority level; and
calculating a corresponding resource profile, the resource profile specifying a minimum required amount of bandwidth and a minimum amount of input/output (IO) operations for the replication session;
determining available system resources for an aggregate of the replication sessions, the available system resources indicating a maximum available amount of bandwidth and a maximum available IO rate across the storage system; and
apportioning resources among the replication sessions as a function of collective priority levels, resource profiles and the available system resources such that each of the replication sessions operate at their optimal requirements while remaining within the constraints of the overall system available resources.
8. The system of claim 7, wherein the resource profile is calculated for varying levels of deduplication among the replication sessions.
9. The system of claim 7, wherein the calculating the resource profile includes collecting statistical information on previous replication sessions, the statistical information including a number of IO operations performed, a size of the IO operations, and an amount of user data corresponding to the IO operations.
10. The system of claim 7, wherein the required bandwidth corresponds to user data.
11. The system of claim 7, wherein the assigned priority level differs among the replication sessions.
12. The system of claim 7, wherein the replication sessions are hash-based replication sessions.
13. A computer program product for implementing enhanced quality of service (QoS) for multiple replication sessions in a replication setup of a storage system, the computer program product embodied on a non-transitory computer readable medium, the computer program product including instructions that, when executed by a computer, causes the computer to perform operations comprising:
for each of a number of replication sessions simultaneously implemented via the storage system:
determining an assigned priority level; and
calculating a corresponding resource profile, the resource profile specifying a minimum required amount of bandwidth and a minimum amount of input/output (IO) operations for the replication session;
determining available system resources for an aggregate of the replication sessions, the available system resources indicating a maximum available amount of bandwidth and a maximum available IO rate across the storage system; and
apportioning resources among the replication sessions as a function of collective priority levels, resource profiles and the available system resources such that each of the replication sessions operate at their optimal requirements while remaining within the constraints of the overall system available resources.
14. The computer program product of claim 13, wherein the resource profile is calculated for varying levels of deduplication among the replication sessions.
15. The computer program product of claim 13, wherein the calculating the resource profile includes collecting statistical information on previous replication sessions, the statistical information including a number of IO operations performed, a size of the IO operations, and an amount of user data corresponding to the IO operations.
16. The computer program product of claim 13, wherein the required bandwidth corresponds to user data.
17. The computer program product of claim 13, wherein the assigned priority level differs among the replication sessions.
18. The computer program product of claim 13, wherein the replication sessions are hash-based replication sessions.
US16/521,730 2019-07-25 2019-07-25 Enhanced quality of service (QoS) for multiple simultaneous replication sessions in a replication setup Active US10908828B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/521,730 US10908828B1 (en) 2019-07-25 2019-07-25 Enhanced quality of service (QoS) for multiple simultaneous replication sessions in a replication setup


Publications (2)

Publication Number Publication Date
US20210026546A1 true US20210026546A1 (en) 2021-01-28
US10908828B1 US10908828B1 (en) 2021-02-02

Family

ID=74190367

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/521,730 Active US10908828B1 (en) 2019-07-25 2019-07-25 Enhanced quality of service (QoS) for multiple simultaneous replication sessions in a replication setup

Country Status (1)

Country Link
US (1) US10908828B1 (en)



