US10860236B2 - Method and system for proactive data migration across tiered storage - Google Patents
- Publication number
- US10860236B2 (application US16/403,344)
- Authority
- US
- United States
- Prior art keywords
- learning model
- data
- event metadata
- storage
- olm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F3/0647—Migration mechanisms
- G06F3/0649—Lifecycle management
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0685—Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/0442—Recurrent networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
Definitions
- FIG. 1A shows a system in accordance with one or more embodiments of the invention.
- FIG. 1B shows a data storage system in accordance with one or more embodiments of the invention.
- FIG. 1C shows an access prediction service in accordance with one or more embodiments of the invention.
- FIG. 2 shows a tiered storage architecture in accordance with one or more embodiments of the invention.
- FIG. 3 shows a flowchart describing a method for adjusting an optimized learning model in accordance with one or more embodiments of the invention.
- FIG. 4 shows a flowchart describing a method for proactively migrating data across storage tiers in accordance with one or more embodiments of the invention.
- FIG. 5 shows a computing system in accordance with one or more embodiments of the invention.
- Any component described with regard to a figure, in various embodiments of the invention, may be equivalent to one or more like-named components described with regard to any other figure. For brevity, descriptions of these components will not be repeated with regard to each figure.
- Each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components.
- Any description of the components of a figure is to be interpreted as an optional embodiment, which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure.
- Throughout this application, ordinal numbers (e.g., first, second, third, etc.) may be used as adjectives for an element (i.e., any noun in the application).
- The use of ordinal numbers does not necessarily imply or create any particular ordering of the elements, nor does it limit any element to being only a single element, unless expressly disclosed, such as by the use of the terms "before", "after", "single", and other such terminology. Rather, ordinal numbers are used to distinguish between the elements.
- For example, a first element is distinct from a second element, and a first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
- embodiments of the invention relate to a method and system for proactive data migration across tiered storage.
- one or more embodiments of the invention employ machine learning, directed to data prediction, to accurately estimate the likelihood that any given datum may be accessed at a discrete point in time, or window of time, in the near future. Given a sufficiently high probability, the given datum may be proactively, rather than reactively (as is the case with existing PID-based solutions), moved between storage tiers to place the datum in an appropriate performance storage class.
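The proactive-migration idea above can be sketched in a few lines: a model scores each datum's probability of near-future access, and data clearing a promotion threshold are moved to a faster tier before the access actually arrives. All names, the threshold value, and the stub predictor below are illustrative assumptions, not details taken from the patent.

```python
PROMOTE_THRESHOLD = 0.8  # assumed cutoff for a "sufficiently high" probability

def plan_migrations(data_items, predict_access_probability):
    """Return (datum_id, source_tier, destination_tier) tuples to enact."""
    plan = []
    for datum in data_items:
        p = predict_access_probability(datum)
        if p >= PROMOTE_THRESHOLD and datum["tier"] != "flash":
            # Proactive promotion: move the datum before the access happens.
            plan.append((datum["id"], datum["tier"], "flash"))
    return plan

# Example: a stub predictor that treats recently read data as likely to recur.
items = [
    {"id": "blk-1", "tier": "sata", "recent_reads": 120},
    {"id": "blk-2", "tier": "sata", "recent_reads": 1},
]
plan = plan_migrations(items, lambda d: min(1.0, d["recent_reads"] / 100))
```

In a real system the lambda would be replaced by the trained learning model described later in the document.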
- FIG. 1A shows a system in accordance with one or more embodiments of the invention.
- the system ( 100 ) may include one or more application hosts ( 102 A- 102 N) operatively connected to a data storage system (DSS) ( 104 ).
- the aforementioned system ( 100 ) components may operatively connect to one another through a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, a mobile network, etc.).
- the network may be implemented using any combination of wired and/or wireless connections.
- the network may encompass various interconnected, network-enabled components (or systems) (e.g., switches, routers, gateways, etc.) that may facilitate communications between the aforementioned system ( 100 ) components.
- the aforementioned system ( 100 ) components may communicate with one another using any combination of wired and/or wireless communication protocols.
- an application host ( 102 A- 102 N) may represent any physical appliance or computing system designed and configured to receive, generate, process, store, and/or transmit data.
- an application host ( 102 A- 102 N) may include functionality to submit input-output (IO) requests to the DSS ( 104 ), which may entail reading data from and/or writing data to the DSS ( 104 ).
- an application host ( 102 A- 102 N) may perform other functionalities without departing from the scope of the invention.
- Examples of an application host may include, but are not limited to, a desktop computer, a tablet computer, a laptop computer, a server, a mainframe, or any other computing system similar to the exemplary computing system shown in FIG. 5 .
- the DSS ( 104 ) may represent an enterprise storage platform (e.g., a centralized repository for various forms of data).
- the DSS ( 104 ) may be implemented on one or more servers (not shown). Each server may be a physical server, residing in a datacenter, or a virtual server, which may alternatively reside in a cloud computing environment. Additionally or alternatively, the DSS ( 104 ) may be implemented using one or more computing systems similar to the exemplary computing system shown in FIG. 5 .
- the DSS ( 104 ) is described in further detail below with respect to FIG. 1B .
- While FIG. 1A shows a configuration of components, other system ( 100 ) configurations may be used without departing from the scope of the invention.
- FIG. 1B shows a data storage system (DSS) in accordance with one or more embodiments of the invention.
- the DSS ( 104 ) may include a hardware layer ( 106 ) operatively connected to an operating system (OS) ( 124 ).
- the hardware layer ( 106 ) may represent a portion of DSS ( 104 ) architecture that includes various physical and/or tangible components. Collectively, these various physical and/or tangible components may enable and provide the framework and resources on which at least the OS ( 124 ) may operate. Accordingly, the hardware layer ( 106 ) may include one or more central processing units (CPUs) ( 108 , 112 ), one or more graphics processing units (GPUs) ( 114 ), system memory ( 118 ), and a physical storage array (PSA) ( 120 ). Each of these hardware layer ( 106 ) subcomponents is described below.
- a CPU ( 108 , 112 ) may represent an integrated circuit designed and configured for processing instructions (e.g., computer readable program code).
- a CPU ( 108 , 112 ) may encompass one or more cores, or micro-cores, which may be optimized to execute sequential or serial instructions at high clock speeds.
- a CPU ( 108 , 112 ) may be more versatile than a GPU ( 114 ) and, subsequently, may handle a diversity of functions, tasks, and/or activities.
- the primary CPU ( 108 ) may, on occasion and for specific computational tasks, interact with the secondary CPU ( 112 ) and/or GPU ( 114 ).
- a GPU ( 114 ) may represent a specialized CPU (or integrated circuit) designed and configured to render graphics and/or perform specific computational tasks.
- a GPU ( 114 ) may encompass hundreds or thousands of cores, or micro-cores, which may be optimized to execute parallel operations at slower clock speeds.
- a GPU ( 114 ) may be superior to a CPU ( 108 , 112 ) in processing power, memory bandwidth, speed, and efficiency when executing tasks that predominantly require multiple parallel processes such as, for example, graphics rendering, machine learning, big data analysis, etc.
- a GPU ( 114 ) may include dedicated GPU memory (not shown), which may refer to physical memory that may only be accessed by the GPU ( 114 ).
- Dedicated GPU memory may be implemented using any specialized volatile physical memory such as, for example, video random access memory (VRAM).
- VRAM may be similar to dynamic RAM (DRAM) with the exceptions of being faster than DRAM, and exhibiting the capability of being written to and read from simultaneously.
- hardware layer ( 106 ) design and/or architecture may partition system functions across one or more logical processing domains.
- These logical processing domains may include, but are not limited to, a CPU domain ( 110 ) and an offload domain ( 116 ).
- the CPU domain ( 110 ) may encompass the primary CPU ( 108 ), and may be responsible for implementing a vast majority of system functions.
- the offload domain ( 116 ) may encompass the secondary CPU ( 112 ) and/or GPU ( 114 ), and may be responsible for implementing a few, often compute-intensive, system functions. Accordingly, the offload domain ( 116 ) may exist to relieve the CPU domain ( 110 ) of any workloads that may bottleneck the CPU domain ( 110 ), and subsequently, impact the various system functions for which the CPU domain ( 110 ) may be responsible.
- system memory ( 118 ) may refer to physical memory that stores the instructions (e.g., computer readable program code) which at least the primary CPU ( 108 ) executes. Further, system memory ( 118 ) may be implemented using volatile (e.g., DRAM, static RAM (SRAM), etc.) and/or non-volatile (e.g., read-only memory (ROM), etc.) physical memory.
- the PSA ( 120 ) may refer to a collection of one or more physical storage devices (PSD) ( 122 A- 122 N) on which various forms of data—e.g., application data (not shown)—may be consolidated.
- Each PSD ( 122 A- 122 N) may encompass non-transitory computer readable storage media on which data may be stored in whole or in part, and temporarily or permanently.
- each PSD ( 122 A- 122 N) may be implemented using a storage device technology. Examples of storage device technologies may include, but are not limited to, flash based storage devices, fibre channel (FC) based storage devices, serial-attached small computer system interface (SCSI) (SAS) based storage devices, and serial advanced technology attachment (SATA) storage devices.
- the PSA ( 120 ) may be implemented using persistent (i.e., non-volatile) storage.
- Examples of persistent storage may include, but are not limited to, optical storage, magnetic storage, NAND Flash Memory, NOR Flash Memory, Magnetic Random Access Memory (M-RAM), Spin Torque Magnetic RAM (ST-MRAM), Phase Change Memory (PCM), or any other storage defined as non-volatile Storage Class Memory (SCM).
- the OS ( 124 ) may refer to a computer program that executes over the hardware layer ( 106 ).
- the OS ( 124 ) may be responsible for managing the utilization of the hardware layer ( 106 ) by the various services (described below) executing on the DSS ( 104 ), as well as by external entities operatively connected to the DSS ( 104 ) such as, for example, one or more application hosts (see e.g., FIG. 1A ).
- the OS ( 124 ) may include functionality, but is not limited to, supporting fundamental DSS ( 104 ) functions, scheduling tasks, allocating and deallocating hardware layer ( 106 ) resources, executing or invoking one or more services, and controlling peripherals (if any).
- the OS ( 124 ) may perform other functionalities without departing from the scope of the invention.
- the OS ( 124 ) may include one or more services, each of which may implement one or more functionalities of the OS ( 124 ). Examples of these functionalities, including the handful mentioned above, may be directed, but not limited, to user interfacing, program execution, file system manipulation, input-output (IO) operations, communications, resource allocation, error detection, accounting, and security or protection. Of these services, a storage tiering service (STS) ( 126 ) and an access prediction service (APS) ( 128 ) may be included. Each of these OS ( 124 ) services is described below.
- the STS ( 126 ) may refer to a computer program or process (i.e., an instance of a computer program) that executes over the hardware layer ( 106 ). Further, the STS ( 126 ) may be responsible for configuring a tiered storage architecture (described below) (see e.g., FIG. 2 ), entailing at least a portion of the PSA ( 120 ), based on datacenter administrator instructions and/or preferences.
- the STS ( 126 ) may perform other functionalities without departing from the scope of the invention.
- the APS ( 128 ) may refer to a computer program or process (i.e., an instance of a computer program) that executes over the hardware layer ( 106 ). Further, the APS ( 128 ) may be responsible for predicting which data, stored on at least a portion of the PSA ( 120 ), shall be accessed (or needed) in the future based, at least in part, on observed historical data access patterns. To that extent, the APS ( 128 ) may include functionality to optimize and employ learning models (described below) (see e.g., FIGS. 3 and 4 ) to derive probabilities directed to which data may most likely be accessed in the future by one or more application hosts (see e.g., FIG. 1A ). The APS ( 128 ) is described in further detail below with respect to FIG. 1C .
- While FIG. 1B shows a configuration of components, other DSS ( 104 ) configurations may be used without departing from the scope of the invention.
- FIG. 1C shows an access prediction service (APS) in accordance with one or more embodiments of the invention.
- the APS ( 128 ) may include various components—a subset of which may execute on the offload domain ( 116 ), while another subset may execute on the CPU domain ( 110 ).
- the more compute-intensive APS ( 128 ) components, which may execute on the offload domain ( 116 ), may include a learning model trainer (LMT) ( 140 ) and an optimized learning model (OLM) ( 142 ).
- the less compute-intensive APS ( 128 ) components, which may alternatively execute on the CPU domain ( 110 ), may include a model output interpreter (MOI) ( 144 ) and one or more data migration queues (DMQ) ( 146 ).
- the LMT ( 140 ) may refer to a computer program or process (i.e., an instance of a computer program) that executes over the hardware layer ( 106 ) (see e.g., FIG. 1B ). Further, the LMT ( 140 ) may be designed and configured to optimize (i.e., train) one or more learning models.
- a learning model may generally refer to a machine learning paradigm or algorithm (e.g., a neural network, a decision tree, a support vector machine, a linear regression model, etc.) that may be used in data classification, data prediction, and other forms of data analysis.
- the LMT ( 140 ) may include functionality to: aggregate input-output (IO) event metadata ( 148 ) (described below); partition aggregated IO event metadata into learning model training and validation sets; train the learning model(s) using the training sets, to derive optimal learning model parameters (described below); validate the learning model(s) using the validation sets, to derive optimal learning model hyper-parameters (described below); and configure or adjust the OLM ( 142 ) using the derived optimal learning model parameters and hyper-parameters.
- the IO event metadata ( 148 ) may be aggregated or received from the STS ( 126 ), or process(es) executing therein, that may be responsible for handling (and examining) IO events directed thereto by one or more application hosts ( 102 A- 102 N).
- the LMT ( 140 ) may perform other functionalities without departing from the scope of the invention.
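The LMT workflow described above (aggregate metadata, partition it into training and validation sets, train, then validate) can be sketched with a deliberately toy one-parameter "model". The field names (`iops`, `accessed_again`) and the midpoint-between-class-means fitting rule are assumptions standing in for a real learning algorithm and its parameter search.

```python
import random

def partition(events, train_fraction=0.8, seed=7):
    """Split aggregated IO event metadata into training and validation subsets."""
    rng = random.Random(seed)
    shuffled = list(events)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

def train(training_set):
    """'Optimize' a one-parameter model: an IOPS cutoff above which data is
    predicted to be accessed again soon. Returns the learned parameter."""
    accessed = [e["iops"] for e in training_set if e["accessed_again"]]
    idle = [e["iops"] for e in training_set if not e["accessed_again"]]
    # Midpoint between class means stands in for iterative parameter fitting.
    return (sum(accessed) / len(accessed) + sum(idle) / len(idle)) / 2

def validate(cutoff, validation_set):
    """Fraction of validation events the learned cutoff classifies correctly."""
    hits = sum((e["iops"] >= cutoff) == e["accessed_again"] for e in validation_set)
    return hits / len(validation_set)

# Synthetic metadata: events with higher IOPS are the ones accessed again.
events = [{"iops": i, "accessed_again": i >= 50} for i in range(0, 100, 5)]
train_set, val_set = partition(events)
cutoff = train(train_set)
accuracy = validate(cutoff, val_set)
```

In the patent's terms, `cutoff` plays the role of a learned model parameter, and the validation pass is where hyper-parameters would be tuned before configuring the OLM.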
- the OLM ( 142 ) may refer to a computer program or process (i.e., an instance of a computer program) that executes over the hardware layer ( 106 ). Further, the OLM ( 142 ) may be designed and configured to implement a machine learning algorithm, which has been optimized through supervised or unsupervised learning (described below). The objective of the OLM ( 142 ) may be directed to estimating, within a high accuracy, data access probabilities ( 152 ), based on various optimized configuration variables (i.e., optimal learning model parameters and hyper-parameters (described above)), and from a given input data set (e.g., IO event metadata ( 148 )).
- a data access probability ( 152 ) may refer to a numerical value estimating the likelihood that given data, associated with inputted IO event metadata ( 148 ), will be accessed by an application host in the near future.
- the OLM ( 142 ) may also include functionality to derive feedback data ( 150 ) from false-positive learning model outputs (i.e., data access probabilities ( 152 )) and, subsequently, provide the feedback data ( 150 ) back to the LMT ( 140 ) to be used in future training phases.
- the learning model may attain the capability to adapt to and overcome its mistakes.
- the OLM ( 142 ) and/or learning model employed may be acknowledged as a recurrent machine learning algorithm (e.g., a recurrent neural network (RNN)).
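The false-positive feedback loop described above can be sketched as follows: data that the model predicted as likely-to-be-accessed but that was never touched is flagged and handed back to the trainer as corrective examples. The threshold, field names, and label convention are illustrative assumptions.

```python
def collect_feedback(predictions, actual_accesses, threshold=0.8):
    """Flag false positives so the trainer can reuse them as corrective examples.

    predictions: {datum_id: access probability} emitted by the model.
    actual_accesses: set of datum ids that really were accessed afterwards.
    """
    feedback = []
    for datum_id, probability in predictions.items():
        if probability >= threshold and datum_id not in actual_accesses:
            # label 0 = "was not accessed", the corrected target for retraining.
            feedback.append({"id": datum_id, "predicted": probability, "label": 0})
    return feedback

# "a" was confidently predicted but never accessed: it becomes feedback.
fb = collect_feedback({"a": 0.9, "b": 0.3, "c": 0.95}, actual_accesses={"c"})
```

Feeding `fb` into later training passes is what lets the model "adapt to and overcome its mistakes", as the text puts it.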
- the MOI ( 144 ) may refer to a computer program or process (i.e., an instance of a computer program) that executes over the hardware layer ( 106 ). Further, the MOI ( 144 ) may be designed and configured to interpret learning model outputs (i.e., data access probabilities ( 152 )) (see e.g., FIG. 4 ). The MOI ( 144 ) may interpret learning model outputs based on learning model output thresholds, which may be used to determine whether data migration requests ( 154 ) should be generated and queued in a DMQ ( 146 ). A data migration request ( 154 ) may refer to a service request directed to migrating certain data from one storage tier to another.
- the request may include, but is not limited to, a unique data identifier associated with the certain data, a source storage tier where the certain data may currently be stored (e.g., pre-migration), and a destination storage tier where the certain data should reside (e.g., post-migration).
- a migration cost metric may refer to an estimation of a length of time that may elapse to complete the proactive data migration.
- a DMQ ( 146 ) may refer to a first-in, first-out (FIFO) buffer that enables data migration requests ( 154 ) to be queued and, accordingly, await retrieval and processing from/by the STS ( 126 ), or process(es) therein.
- a DMQ ( 146 ) may be implemented using physical memory storage (e.g., random access memory (RAM)), which permits any queued data migration requests ( 154 ) to be stored temporarily.
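A minimal sketch of the MOI and DMQ interaction, under assumed names: the interpreter compares each data access probability against a threshold and, when it clears, queues a migration request carrying the identifier, source tier, and destination tier fields described above.

```python
from collections import deque

class DataMigrationQueue:
    """FIFO buffer of migration requests, drained by the storage tiering service."""
    def __init__(self):
        self._queue = deque()

    def enqueue(self, request):
        self._queue.append(request)

    def dequeue(self):
        return self._queue.popleft() if self._queue else None

def interpret(probabilities, threshold, current_tier, dmq):
    """Model-output-interpreter sketch: queue a promotion request for every
    datum whose access probability clears the threshold."""
    for datum_id, p in probabilities.items():
        if p >= threshold:
            dmq.enqueue({
                "data_id": datum_id,                     # unique data identifier
                "source_tier": current_tier[datum_id],   # pre-migration tier
                "destination_tier": "flash",             # assumed post-migration tier
            })

dmq = DataMigrationQueue()
interpret({"x": 0.92, "y": 0.4}, 0.8, {"x": "sata", "y": "sata"}, dmq)
first = dmq.dequeue()
```

`deque` gives the FIFO semantics the text requires: requests are processed in the order the interpreter generated them.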
- While FIG. 1C shows a configuration of components, other APS ( 128 ) configurations may be used without departing from the scope of the invention. For example, the APS ( 128 ) may exclude the DMQ ( 146 ), wherein the responsibility of processing and performing the proactive data migrations, based on the interpreted data access probabilities ( 152 ), may fall to the MOI ( 144 ) rather than the STS ( 126 ).
- FIG. 2 shows a tiered storage architecture in accordance with one or more embodiments of the invention.
- the tiered storage architecture ( 200 ) may represent an exemplary framework for the tiering of data storage based on a set of service level objectives (SLO) ( 210 ) (described below).
- the tiered storage architecture ( 200 ) may include one or more disk groups (DG) ( 202 A- 202 N), one or more data pools (DP) ( 204 A- 204 N), one or more storage resource pools (SRP) ( 206 A- 206 N), one or more storage groups (SG) ( 208 A- 208 N), and one or more SLOs ( 210 A- 210 N).
- a disk group (DG) ( 202 A- 202 N) may refer to a collection of physical storage devices (PSDs) (see e.g., FIG. 1B ) that share the same physical and performance characteristics.
- one or more PSDs may be grouped to form a DG ( 202 A- 202 N) based on any subset or all of the following attributes: storage device technology (e.g., flash based storage devices, fibre channel (FC) based storage devices, serial-attached small computer system interface (SCSI) (SAS) based storage devices, or serial advanced technology attachment (SATA) storage devices); storage capacity (e.g., in bytes); form factor; rotational speed (e.g., in revolutions per minute (RPM)); and desired redundant array of independent disks (RAID) protection type (e.g., RAID1, RAID5, RAID6, or unprotected).
- the given DG ( 202 A- 202 N) may automatically be configured with one or more data devices (not shown).
- the cardinality (i.e., number) of data devices automatically configured for the given DG ( 202 A- 202 N) may match the cardinality of PSDs grouped in the given DG ( 202 A- 202 N).
- a data device may represent an internal logical device, which may provide the physical storage backing a corresponding virtually provisioned device (described below).
- a data pool (DP) ( 204 A- 204 N) may refer to a collection of data devices that share the same emulation (e.g., fixed block architecture (FBA), count-key data (CKD), etc.) and RAID protection type.
- a storage resource pool (SRP) ( 206 A- 206 N) may refer to a collection of data pools (DPs) ( 204 A- 204 N), which may define a data migration domain. That is, any migration of data (stored, physically, in the DG(s) ( 202 A- 202 N)) across storage tiers must be performed within the bounds of a given SRP ( 206 A- 206 N) in which the DG(s) ( 202 A- 202 N) reside.
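The SRP-as-migration-domain rule above amounts to a simple guard: a migration is permitted only when source and destination data pools belong to the same SRP. The pool names and mapping below are hypothetical.

```python
# Assumed mapping from data pool to its owning storage resource pool.
SRP_OF_DATA_POOL = {"dp1": "srp-A", "dp2": "srp-A", "dp3": "srp-B"}

def migration_allowed(source_pool, destination_pool):
    """Data may only move between data pools belonging to the same SRP."""
    return SRP_OF_DATA_POOL[source_pool] == SRP_OF_DATA_POOL[destination_pool]
```

For instance, moving data from `dp1` to `dp2` stays inside `srp-A` and is allowed, while `dp1` to `dp3` would cross SRP boundaries and is rejected.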
- a storage group ( 208 A- 208 N) may refer to a logical collection of one or more virtually provisioned devices that may be managed together.
- a virtually provisioned device may refer to a host (e.g., application host (see e.g., FIG. 1A )) accessible device to which the host may direct storage device IO requests.
- the physical storage that may back the storage capacity, consumed by a given virtually provisioned device, may be allocated from a data device in a DP ( 204 A- 204 N).
- any given SG ( 208 A- 208 N) may be associated with a SRP ( 206 A- 206 N), a SLO ( 210 A- 210 N), or a combination thereof.
- a SLO may refer to an expected average response time goal for one or more applications (residing on one or more application hosts (see e.g., FIG. 1A )), which may access data on the data storage system (DSS).
- Expected average response times may range from 0.8 milliseconds, reflecting the performance attained through flash based storage devices (e.g., high-performance storage devices), to 14 milliseconds, reflecting SATA based storage devices (e.g., low-performance storage devices).
- a storage tier may refer to a collection of PSDs that share the same storage device technology and RAID protection type.
- a storage tier may encompass: one or more flash based storage devices and a selected RAID protection type, which may be used as high-performance storage characterized by low response times and high costs per unit storage capacity; one or more FC based storage devices and a selected RAID protection type, which may be used as medium-performance storage characterized by medium response times and medium costs per unit storage capacity; one or more SAS based storage devices and a selected RAID protection type, which may be used as medium-performance storage characterized by medium response times and medium costs per unit storage capacity; and one or more SATA based storage devices and a selected RAID protection type, which may be used as low-performance storage characterized by high response times and low cost per unit storage capacity.
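The four tier characterizations above can be collected into a small catalogue, which also illustrates how an SLO response-time goal might select a tier. The specific response-time and cost figures are assumptions for the sketch (only the 0.8 ms and 14 ms endpoints appear in the text).

```python
# Illustrative tier catalogue matching the characterizations above.
STORAGE_TIERS = {
    "flash": {"performance": "high",   "response_ms": 0.8,  "cost_per_gb": "high"},
    "fc":    {"performance": "medium", "response_ms": 6.0,  "cost_per_gb": "medium"},
    "sas":   {"performance": "medium", "response_ms": 8.0,  "cost_per_gb": "medium"},
    "sata":  {"performance": "low",    "response_ms": 14.0, "cost_per_gb": "low"},
}

def cheapest_tier_meeting(slo_response_ms):
    """Pick the slowest (and therefore cheapest) tier whose expected response
    time still meets the given SLO goal; None if no tier qualifies."""
    candidates = [(name, spec) for name, spec in STORAGE_TIERS.items()
                  if spec["response_ms"] <= slo_response_ms]
    if not candidates:
        return None
    return max(candidates, key=lambda item: item[1]["response_ms"])[0]
```

This captures the cost/performance trade-off the tiers exist for: data is placed on the least expensive tier that can still honor its service level objective.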
- a tiered storage policy may refer to a policy that manages data placement and migration across storage tiers to achieve SLOs ( 210 A- 210 N) for one or more SG ( 208 A- 208 N).
- Each tiered storage policy may subsequently group one or more storage tiers, and specify upper usage limits for each storage tier.
- the upper usage limit assigned to a given storage tier may reflect a percentage of the total storage capacity, of a SG ( 208 A- 208 N) associated with the tiered storage policy, that can reside on the given storage tier.
- the percentage of storage capacity for each storage tier specified in a tiered storage policy, when combined, must total one-hundred percent.
- a tiered storage policy may be applied to multiple SGs ( 208 A- 208 N); however, any given SG ( 208 A- 208 N) may only be associated with one tiered storage policy.
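The one-hundred-percent constraint on a tiered storage policy's per-tier limits can be checked directly; the tier names below are illustrative.

```python
def valid_tiered_storage_policy(upper_usage_limits):
    """Per-tier upper usage limits (percent of a storage group's total
    capacity) must combine to exactly one hundred percent."""
    return sum(upper_usage_limits.values()) == 100

ok = valid_tiered_storage_policy({"flash": 20, "fc": 30, "sata": 50})    # sums to 100
bad = valid_tiered_storage_policy({"flash": 20, "sata": 50})             # sums to 70
```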
- FIG. 3 shows a flowchart describing a method for adjusting an optimized learning model in accordance with one or more embodiments of the invention.
- the various steps outlined below may be performed by the access prediction service (APS) executing on the data storage system (DSS) (see e.g., FIGS. 1B and 1C ). Further, while the various steps in the flowchart are presented and described sequentially, one of ordinary skill will appreciate that some or all steps may be executed in different orders, may be combined or omitted, and some or all steps may be executed in parallel.
- IO event metadata may refer to information that describes one or more IO events.
- An IO event may refer to a storage device (e.g., disk) IO request, which may have been submitted to the DSS by an application host (see e.g., FIG. 1A ).
- the storage device IO request may be directed to reading data from a physical storage array (PSA) of the DSS or, alternatively, may be directed to writing data to the PSA.
- the aggregated IO event metadata may include historical (i.e., previously observed or received) IO event metadata describing one or more historical IO events. Examples of IO event metadata may include, but are not limited to, observed IO operations per second (IOPS), read percentages, read IO sizes, write IO sizes, and IO response times.
- In Step 302, the IO event metadata (aggregated in Step 300) is partitioned into two IO event metadata subsets.
- a first IO event metadata subset may be designated as a training set
- a second IO event metadata subset may alternatively be designated as a validation set.
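The partitioning of Step 302 can be sketched as below. The 80/20 split ratio and the shuffle-before-split are assumptions for illustration; the patent only states that two subsets (a training set and a validation set) are produced.

```python
# Minimal sketch of Step 302: partition aggregated IO event metadata into a
# training set and a validation set. Split ratio and seed are assumptions.
import random

def partition_io_event_metadata(events, train_fraction=0.8, seed=42):
    shuffled = list(events)
    random.Random(seed).shuffle(shuffled)    # avoid ordering bias in the split
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]    # (training set, validation set)

# Toy IO event metadata records (IOPS, read percentage).
events = [{"iops": 100 + i, "read_pct": 0.7} for i in range(10)]
training_set, validation_set = partition_io_event_metadata(events)
```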
- In Step 304, a learning model is trained using the training set (i.e., the first IO event metadata subset) (obtained in Step 302).
- training of the learning model may entail: initializing a set of learning model parameters that, at least in part, define the learning model; and adjusting these aforementioned learning model parameters through various iterations of supervised or unsupervised learning, until a goal training accuracy (or another metric) is reached.
- Supervised learning may refer to the learning of inferences from labeled training sets, while unsupervised learning may alternatively refer to the learning of inferences from unlabeled training sets.
- a labeled training set may refer to a training set that includes input data and a target or desired output that is sought to be obtained from processing the input data.
- An unlabeled training set, on the other hand, may refer to a training set that only includes input data.
- the above-mentioned learning model may refer to a machine learning paradigm (or algorithm) that may be directed to prediction or forecasting. More specifically, the objective of the learning model may pertain to predicting which data, stored in the DSS, may most likely be accessed within a discrete time or a window of time in the future. Examples of machine learning paradigms or algorithms may include, but are not limited to, neural networks, decision trees, support vector machines, linear regression models, clustering, etc. Furthermore, the above-mentioned learning model parameters may vary depending on an architecture of the learning model. Generally, a learning model parameter may represent an internal learning model configuration variable, which may be optimized from the processing of data during training of the learning model.
- the associated learning model parameters may include, but are not limited to, a number of layers residing between the model input and the model output, a number of nodes occupying each layer, an interconnectivity configuration between the various nodes, values of weights representative of the strengths of the various inter-nodal connections, and propagation functions through which nodal outputs are computed with respect to nodal inputs and/or other parameters (e.g., weights).
- training of the learning model may also incorporate feedback data derived from previous learning model outputs (described below) (see e.g., FIG. 4 ). That is, the learning model may include functionality to adapt (or correct itself) by learning from any mistakes. Mistakes (or the feedback data) may encompass real-time prediction runs, where analysis of any real-time IO event metadata results in a false-positive learning model output.
- a false-positive learning model output may reference a learning model output that predicts certain data will be accessed in the future, when in actuality, it is not.
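The training loop of Step 304 (initialize parameters, then adjust them iteratively until a goal training accuracy is reached) can be sketched as below. The stand-in model is a trivial one-parameter threshold classifier, used only so the loop is runnable; the actual paradigm (neural network, decision tree, etc.) and all numeric values are assumptions.

```python
# Illustrative sketch of Step 304: adjust learning model parameters over
# supervised-learning iterations until a goal training accuracy is reached.
# The single "parameter" here is a decision threshold; purely a stand-in.

def train(training_set, goal_accuracy=0.9, max_epochs=100):
    threshold = 0.0                       # initialize the model parameter
    for _ in range(max_epochs):
        correct = sum(
            1 for x, label in training_set
            if (x > threshold) == label   # predict "will be accessed" if x > t
        )
        accuracy = correct / len(training_set)
        if accuracy >= goal_accuracy:     # goal training accuracy reached
            return threshold, accuracy
        threshold += 0.1                  # crude parameter adjustment
    return threshold, accuracy

# Labeled training set: (normalized IO activity, was-accessed-later label).
data = [(0.2, False), (0.3, False), (0.6, True), (0.9, True)]
params, acc = train(data)
```

In a real deployment the adjustment step would be a proper learning rule (e.g., gradient descent over neural-network weights), and the false-positive feedback data described above would be folded back into `training_set` on the next training pass.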
- In Step 306, the learning model is subsequently validated using the validation set (i.e., the second IO event metadata subset) (obtained in Step 302).
- validation of the learning model may entail: initializing a set of learning model hyper-parameters that, at least in part, define the learning model; and adjusting these aforementioned learning model hyper-parameters through various iterations of supervised or unsupervised learning, until a goal validation accuracy (or another metric) is reached.
- a learning model hyper-parameter may represent an external learning model configuration variable, which cannot be optimized through the processing of data. Further, a learning model hyper-parameter may influence how the learning model parameter(s) may be optimized.
- learning model hyper-parameters may vary depending on an architecture of the learning model.
- the associated learning model hyper-parameters may include, but are not limited to, a learning rate for training the neural network, a specificity of a learning rule for governing how the learning model parameter(s) may be adjusted to produce desired training results, a number of epochs (or iterations) the training of the learning model should elapse, etc.
- an optimized learning model (OLM) may be adjusted or configured using the optimal learning model parameters (derived in Step 304) and hyper-parameters (derived in Step 306).
- when the OLM is representative of a first OLM version, the OLM may be the finalized learning model obtained as a result of reaching the goal validation accuracy in Step 306.
- a previous OLM version may be updated, using the optimal learning model parameters and hyper-parameters, to arrive at an adjusted or updated OLM.
- updating a previous OLM version may entail replacing a previously optimal set of learning model parameters and hyper-parameters with the recently derived optimal learning model parameters and hyper-parameters.
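The hyper-parameter adjustment of Step 306 can be sketched as a sweep over candidate configurations, stopping once a goal validation accuracy is reached. The candidate grid, the `evaluate` callback (standing in for re-training plus validation-set scoring), and the toy evaluator are all assumptions for illustration.

```python
# Hedged sketch of Step 306: sweep learning model hyper-parameters (here a
# learning rate and an epoch count, two of the examples given above) until a
# goal validation accuracy is reached. evaluate(lr, epochs) is assumed to
# re-train the model and return its accuracy on the validation set.
import itertools

def tune_hyper_parameters(evaluate, goal_accuracy=0.9):
    learning_rates = [0.1, 0.01]
    epoch_counts = [10, 50]
    best = None
    for lr, epochs in itertools.product(learning_rates, epoch_counts):
        accuracy = evaluate(lr, epochs)
        if best is None or accuracy > best[2]:
            best = (lr, epochs, accuracy)
        if accuracy >= goal_accuracy:      # goal validation accuracy reached
            break
    return best                            # (lr, epochs, validation accuracy)

# Toy evaluator: pretend accuracy grows with epochs * learning rate.
best = tune_hyper_parameters(lambda lr, epochs: min(1.0, epochs * lr / 5))
```

The winning hyper-parameters, together with the parameters from training, are what Step 308 uses to adjust or configure the OLM.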
- FIG. 4 shows a flowchart describing a method for proactively migrating data across storage tiers in accordance with one or more embodiments of the invention.
- the various steps outlined below may be performed by the access prediction service (APS) executing on the data storage system (DSS) (see e.g., FIGS. 1B and 1C ).
- IO event metadata may refer to information that describes one or more IO events.
- An IO event may refer to a storage device (e.g., disk) IO request, which may have been submitted to the DSS by an application host (see e.g., FIG. 1A ).
- the storage device IO request may be directed to reading data from a physical storage array (PSA) of the DSS or, alternatively, may be directed to writing data to the PSA.
- the aggregated IO event metadata may include real-time IO event metadata describing a recently received/observed IO event. Examples of IO event metadata may include, but are not limited to, observed IO operations per second (IOPS), read percentages, read IO sizes, write IO sizes, and IO response times.
- In Step 402, the IO event metadata (aggregated in Step 400) is analyzed using an optimized learning model (OLM).
- the OLM may refer to a machine learning paradigm (or algorithm) that may be directed to prediction or forecasting. More specifically, the objective of the OLM may pertain to predicting which data, stored in the DSS, may most likely be accessed within a discrete time or a window of time in the future. Examples of machine learning paradigms or algorithms may include, but are not limited to, neural networks, decision trees, support vector machines, linear regression models, clustering, etc. Further, the OLM may represent a learning model (described above) that exhibits optimal learning model parameters and hyper-parameters, which may have been optimized through iterative supervised or unsupervised learning.
- a learning model output may refer to data produced by the OLM based on a configuration of the OLM (i.e., defined through optimal learning model parameters and hyper-parameters) and a given input data (e.g., the IO event metadata).
- the learning model output may include the estimation of one or more data access probabilities. Each data access probability may refer to a numerical value that estimates a likelihood that a given data, relevant to at least a portion of the IO event metadata (aggregated in Step 400 ), will be accessed by an application host (see e.g., FIG. 1A ) at some point in time in the near future.
- the process may proceed along a first path that includes Steps 404 and 406 .
- the process may take this first path if learning model training (see e.g., FIG. 3 ) incorporates feedback data (described below).
- the process may alternatively proceed along a second path that excludes Steps 404 and 406 .
- the process may alternatively take this second path if learning model training does not incorporate feedback data.
- Step 404 feedback data is derived from at least a subset of the learning model output (produced in Step 402 ).
- feedback data may refer to a false-positive learning model output (should any be produced) based on received, real-time IO event metadata.
- a false-positive learning model output may reference a learning model output that predicts certain data will be accessed in the near future, when in actuality, it is not. Further, feedback data may serve to allow a learning model to adapt and overcome these false-positive learning model output(s).
- In Step 406, the feedback data (derived in Step 404) is stored. Specifically, in one embodiment of the invention, the feedback data may be stored until retrieved and incorporated into a future training phase of the learning model.
- the learning model output (produced in Step 402 ) is interpreted.
- interpretation of the learning model output may entail, for example, comparing the learning model output against a learning model output threshold (i.e., a data access probability threshold—e.g., the numerical value 0.9 representative of a 90% (or very high) likelihood that certain data will be accessed in the near future); and making a determination, based on the comparison, as to whether the learning model output falls short of, or meets/exceeds, the learning model output threshold.
- the proactive migration of the data across storage tiers may not transpire because the measure of confidence (or probability) does not meet the minimum required to trigger the data migration.
- the proactive migration of the data across storage tiers would take place because the measure of confidence (or probability) is sufficiently high.
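The interpretation step above amounts to comparing each estimated data access probability against the output threshold (0.9 in the example given) and flagging only the data that meets or exceeds it. A minimal sketch, with illustrative data identifiers:

```python
# Sketch of interpreting the learning model output: each data access
# probability is compared against a threshold; only data whose probability
# meets/exceeds it is flagged for proactive migration. The 0.9 value follows
# the example above; the data identifiers are hypothetical.

ACCESS_PROBABILITY_THRESHOLD = 0.9

def should_migrate(learning_model_output):
    # learning_model_output: dict of data identifier -> access probability
    return {
        data_id: probability >= ACCESS_PROBABILITY_THRESHOLD
        for data_id, probability in learning_model_output.items()
    }

decisions = should_migrate({"extent-17": 0.95, "extent-42": 0.40})
# extent-17 meets the threshold; extent-42 falls short of it.
```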
- a data migration request may refer to a service request directed to migrating certain data from one storage tier to another.
- the request may include, but is not limited to, a unique data identifier associated with the certain data, a source storage tier where the certain data may currently be stored (e.g., pre-migration), and a destination storage tier where the certain data should reside (e.g., post-migration).
- interpretation of the learning model output, to determine whether a data migration request is to be generated, may further rely on a migration cost metric.
- a migration cost metric may refer to an estimation of a length of time that may elapse to complete the proactive data migration. This migration cost metric may, in turn, be compared against historically observed lengths of time, reflecting data access time, for accessing the certain data. Further, based on the migration cost metric (i.e., estimated data migration time) exceeding the data access time, migration of the certain data may be aborted. Alternatively, based on the migration cost metric exhibiting a value below the data access time, migration of the certain data may proceed.
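The cost check above (proceed only when the estimated migration time falls below the historically observed access time, abort otherwise) can be sketched as follows. Deriving the cost from data size divided by tier bandwidth is an assumption used only to produce a time estimate; the patent does not specify how the metric is computed.

```python
# Sketch of the migration cost metric check: abort when the estimated
# migration time exceeds the historical data access time; proceed when it
# falls below it. The size/bandwidth estimate is an assumption.

def estimate_migration_seconds(data_size_bytes, tier_bandwidth_bytes_per_s):
    return data_size_bytes / tier_bandwidth_bytes_per_s

def migration_worthwhile(data_size_bytes, tier_bandwidth_bytes_per_s,
                         historical_access_seconds):
    cost = estimate_migration_seconds(data_size_bytes,
                                      tier_bandwidth_bytes_per_s)
    return cost < historical_access_seconds  # migrate only if cheaper than access

# 1 GiB over ~500 MiB/s costs ~2 s; worthwhile if access historically took 5 s.
ok = migration_worthwhile(1 << 30, 500 * (1 << 20), 5.0)
```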
- In Step 410, the certain data, with which at least a portion of the IO event metadata (aggregated in Step 400) is associated, is subsequently migrated from one storage tier to another.
- the migration may entail servicing the data migration request(s) (generated in Step 408 ) in order to migrate the certain data from a low-performance storage tier to a high-performance storage tier.
- the data migration request(s) may be serviced to, alternatively, migrate the certain data from a high-performance storage tier to a low-performance storage tier.
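The data migration request described above carries a unique data identifier plus source and destination storage tiers; servicing it moves the data between tiers in either direction (promotion or demotion). A minimal sketch, where the tier names and the `tier_map` store are hypothetical:

```python
# Illustrative shape of a data migration request (unique data identifier,
# source storage tier, destination storage tier) and a toy service routine
# that applies it. Tier names and the tier_map store are assumptions.
from dataclasses import dataclass

@dataclass
class DataMigrationRequest:
    data_id: str           # unique identifier of the data to move
    source_tier: str       # where the data currently resides (pre-migration)
    destination_tier: str  # where the data should reside (post-migration)

def service(requests, tier_map):
    # tier_map: dict of data_id -> current storage tier
    for req in requests:
        if tier_map.get(req.data_id) == req.source_tier:
            tier_map[req.data_id] = req.destination_tier  # promote or demote

tiers = {"extent-17": "sata"}
service([DataMigrationRequest("extent-17", "sata", "flash")], tiers)
# extent-17 is promoted from the low-performance to the high-performance tier.
```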
- FIG. 5 shows a computing system in accordance with one or more embodiments of the invention.
- the computing system ( 500 ) may include one or more computer processors ( 502 ), non-persistent storage ( 504 ) (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage ( 506 ) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface ( 512 ) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), input devices ( 510 ), output devices ( 508 ), and numerous other elements (not shown) and functionalities. Each of these components is described below.
- the computer processor(s) ( 502 ) may be an integrated circuit for processing instructions.
- the computer processor(s) may be one or more cores or micro-cores of a central processing unit (CPU) and/or a graphics processing unit (GPU).
- the computing system ( 500 ) may also include one or more input devices ( 510 ), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.
- the communication interface ( 512 ) may include an integrated circuit for connecting the computing system ( 500 ) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
- the computing system ( 500 ) may include one or more output devices ( 508 ), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device.
- One or more of the output devices may be the same or different from the input device(s).
- the input and output device(s) may be locally or remotely connected to the computer processor(s) ( 502 ), non-persistent storage ( 504 ), and persistent storage ( 506 ).
- Software instructions in the form of computer readable program code to perform embodiments of the invention may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium.
- the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the invention.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/403,344 US10860236B2 (en) | 2019-05-03 | 2019-05-03 | Method and system for proactive data migration across tiered storage |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200348875A1 US20200348875A1 (en) | 2020-11-05 |
| US10860236B2 true US10860236B2 (en) | 2020-12-08 |
Family
ID=73017303
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/403,344 Active US10860236B2 (en) | 2019-05-03 | 2019-05-03 | Method and system for proactive data migration across tiered storage |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US10860236B2 (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12210760B2 (en) | 2021-12-03 | 2025-01-28 | Samsung Electronics Co., Ltd. | Object storage system, migration control device, and migration control method |
| US12282689B2 (en) | 2022-07-25 | 2025-04-22 | Dell Products L.P. | Dynamic redundant array of independent disks (RAID) transformation |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112667154A (en) * | 2020-12-22 | 2021-04-16 | 平安科技(深圳)有限公司 | Hierarchical method, system, electronic device and computer readable storage medium |
| US12379761B2 (en) * | 2022-02-28 | 2025-08-05 | Dell Products L.P. | Management of energy efficiency parameters for resources in edge computing system |
| CN116663654B (en) * | 2023-07-31 | 2023-11-21 | 中国石油大学(华东) | Time window migration reinforcement learning injection and production optimization method based on history regulation experience |
Citations (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150234719A1 (en) * | 2014-02-18 | 2015-08-20 | International Business Machines Corporation | Preemptive relocation of failing data |
| US20160189027A1 (en) * | 2014-12-24 | 2016-06-30 | Google Inc. | Augmenting neural networks to generate additional outputs |
| US20170262216A1 (en) * | 2015-09-29 | 2017-09-14 | EMC IP Holding Company LLC | Dynamic storage tiering based on predicted workloads |
| US20180018379A1 (en) * | 2016-07-13 | 2018-01-18 | International Business Machines Corporation | Application performance using multidimensional predictive algorithm for automated tiering mechanisms |
| US20180240010A1 (en) * | 2017-02-19 | 2018-08-23 | Intel Corporation | Technologies for optimized machine learning training |
| US20180246659A1 (en) * | 2017-02-28 | 2018-08-30 | Hewlett Packard Enterprise Development Lp | Data blocks migration |
| US20180373722A1 (en) * | 2017-06-26 | 2018-12-27 | Acronis International Gmbh | System and method for data classification using machine learning during archiving |
| US20190114559A1 (en) * | 2016-04-29 | 2019-04-18 | Hewlett Packard Enterprise Development Lp | Storage device failure policies |
| US10409516B1 (en) * | 2018-01-12 | 2019-09-10 | EMC IP Holding Company LLC | Positional indexing for a tiered data storage system |
| US10409501B1 (en) * | 2017-12-07 | 2019-09-10 | EMC IP Holding Company LLC | Tiered data storage system using mobility scoring |
| US20200042234A1 (en) * | 2018-07-31 | 2020-02-06 | EMC IP Holding Company LLC | Offload processing using storage device slots |
| US20200125639A1 (en) * | 2018-10-22 | 2020-04-23 | Ca, Inc. | Generating training data from a machine learning model to identify offensive language |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
| AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRASNER, JONATHAN I.;DUQUETTE, JASON JEROME;SIGNING DATES FROM 20190502 TO 20190503;REEL/FRAME:049094/0548 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|